Jan 31 09:00:54 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 31 09:00:54 crc restorecon[4687]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 09:00:54 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:54 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 09:00:55 crc restorecon[4687]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 
09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 09:00:55 crc restorecon[4687]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 09:00:55 crc restorecon[4687]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 09:00:55 crc restorecon[4687]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 31 09:00:55 crc kubenswrapper[4830]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 09:00:55 crc kubenswrapper[4830]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 31 09:00:55 crc kubenswrapper[4830]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 09:00:55 crc kubenswrapper[4830]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
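[Note on the entries above and below. The long run of "not reset as customized by admin" messages is expected restorecon behavior: paths whose current SELinux type is marked customizable (container_file_t is such a type) are skipped unless a forced relabel is requested with restorecon -F. The surrounding "Flag ... has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag" warnings indicate settings the kubelet now expects in its KubeletConfiguration file. As a hedged sketch only, the following builds the config-file equivalent of the flags named in these warnings; field names follow the upstream kubelet.config.k8s.io/v1beta1 API, and every path and value shown is a placeholder, not read from this node.]

# Sketch: emit a KubeletConfiguration covering the deprecated flags named in
# the log above. The kubelet accepts this file as JSON or YAML via --config.
# All values below are hypothetical placeholders.
import json

kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    # replaces --container-runtime-endpoint
    "containerRuntimeEndpoint": "unix:///var/run/crio/crio.sock",
    # replaces --volume-plugin-dir
    "volumePluginDir": "/etc/kubernetes/kubelet-plugins/volume/exec",
    # replaces --register-with-taints
    "registerWithTaints": [
        {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}
    ],
    # replaces --system-reserved
    "systemReserved": {"cpu": "500m", "memory": "1Gi"},
}

with open("kubelet-config.json", "w") as f:
    json.dump(kubelet_config, f, indent=2)

[There is no config-file field for --minimum-container-ttl-duration; as the log entry itself says, the evictionHard / evictionSoft settings replace it.]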
Jan 31 09:00:55 crc kubenswrapper[4830]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 31 09:00:55 crc kubenswrapper[4830]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.965225 4830 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973212 4830 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973275 4830 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973286 4830 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973296 4830 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973306 4830 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973316 4830 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973325 4830 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973333 4830 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973342 4830 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973350 4830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973359 4830 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973367 4830 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973376 4830 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973385 4830 feature_gate.go:330] unrecognized feature gate: Example Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973394 4830 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973403 4830 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973412 4830 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973420 4830 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973428 4830 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973436 4830 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973445 4830 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 
31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973455 4830 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973467 4830 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973480 4830 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973492 4830 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973502 4830 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973511 4830 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973520 4830 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973528 4830 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973536 4830 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973545 4830 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973566 4830 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973577 4830 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973589 4830 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973599 4830 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973610 4830 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973621 4830 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973633 4830 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973649 4830 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973663 4830 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973674 4830 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973686 4830 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973697 4830 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973711 4830 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
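Nearly every warning in this block is the same event: OpenShift hands its full platform feature-gate set (GatewayAPI, AdminNetworkPolicy, PinnedImages, ...) to a kubelet that only registers the upstream Kubernetes gates, so feature_gate.go:330 logs "unrecognized feature gate" and drops the name. The two other variants, feature_gate.go:351 and :353, fire when a gate the kubelet does know is still set explicitly even though it is already deprecated (KMSv1) or GA (ValidatingAdmissionPolicy, CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders). A minimal sketch of that classification, using a plain map instead of the real component-base registry:

package main

import "log"

type maturity int

const (
	alpha maturity = iota
	beta
	ga
	deprecated
)

// known mirrors a few upstream gates seen in this log; every
// OpenShift-only name (GatewayAPI, AdminNetworkPolicy, ...) is absent,
// which is exactly what produces the warnings above.
var known = map[string]maturity{
	"CloudDualStackNodeIPs":                  ga,
	"DisableKubeletCloudCredentialProviders": ga,
	"ValidatingAdmissionPolicy":              ga,
	"KMSv1":                                  deprecated,
	"NodeSwap":                               beta,
}

func set(gates map[string]bool) {
	for name, val := range gates {
		m, ok := known[name]
		switch {
		case !ok:
			log.Printf("unrecognized feature gate: %s", name) // cf. feature_gate.go:330
		case m == deprecated:
			log.Printf("Setting deprecated feature gate %s=%t. It will be removed in a future release.", name, val)
		case m == ga:
			log.Printf("Setting GA feature gate %s=%t. It will be removed in a future release.", name, val)
		}
	}
}

func main() {
	set(map[string]bool{"GatewayAPI": true, "KMSv1": true, "CloudDualStackNodeIPs": true, "NodeSwap": false})
}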
Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973757 4830 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973771 4830 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973784 4830 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973796 4830 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973807 4830 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973819 4830 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973830 4830 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973842 4830 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973853 4830 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973865 4830 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973876 4830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973887 4830 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973899 4830 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973910 4830 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973921 4830 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973932 4830 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973944 4830 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973955 4830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973966 4830 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973977 4830 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973987 4830 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.973998 4830 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.974011 4830 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.974022 4830 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.974051 4830 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.974063 4830 feature_gate.go:330] unrecognized feature gate: 
BareMetalLoadBalancer Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.974078 4830 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974310 4830 flags.go:64] FLAG: --address="0.0.0.0" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974339 4830 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974361 4830 flags.go:64] FLAG: --anonymous-auth="true" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974380 4830 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974398 4830 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974412 4830 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974431 4830 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974447 4830 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974461 4830 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974473 4830 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974487 4830 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974503 4830 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974517 4830 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974530 4830 flags.go:64] FLAG: --cgroup-root="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974543 4830 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974555 4830 flags.go:64] FLAG: --client-ca-file="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974567 4830 flags.go:64] FLAG: --cloud-config="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974581 4830 flags.go:64] FLAG: --cloud-provider="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974593 4830 flags.go:64] FLAG: --cluster-dns="[]" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974609 4830 flags.go:64] FLAG: --cluster-domain="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974621 4830 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974633 4830 flags.go:64] FLAG: --config-dir="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974647 4830 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974661 4830 flags.go:64] FLAG: --container-log-max-files="5" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974681 4830 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974694 4830 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974707 4830 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974721 4830 
flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974768 4830 flags.go:64] FLAG: --contention-profiling="false" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974780 4830 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974793 4830 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974805 4830 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974818 4830 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974835 4830 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974847 4830 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974860 4830 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974872 4830 flags.go:64] FLAG: --enable-load-reader="false" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974884 4830 flags.go:64] FLAG: --enable-server="true" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974898 4830 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974914 4830 flags.go:64] FLAG: --event-burst="100" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974927 4830 flags.go:64] FLAG: --event-qps="50" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974939 4830 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974951 4830 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974964 4830 flags.go:64] FLAG: --eviction-hard="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974981 4830 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.974993 4830 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975007 4830 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975024 4830 flags.go:64] FLAG: --eviction-soft="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975037 4830 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975049 4830 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975065 4830 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975077 4830 flags.go:64] FLAG: --experimental-mounter-path="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975089 4830 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975102 4830 flags.go:64] FLAG: --fail-swap-on="true" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975114 4830 flags.go:64] FLAG: --feature-gates="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975144 4830 flags.go:64] FLAG: --file-check-frequency="20s" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975157 4830 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975170 4830 flags.go:64] FLAG: 
--hairpin-mode="promiscuous-bridge" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975183 4830 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975196 4830 flags.go:64] FLAG: --healthz-port="10248" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975208 4830 flags.go:64] FLAG: --help="false" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975221 4830 flags.go:64] FLAG: --hostname-override="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975233 4830 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975246 4830 flags.go:64] FLAG: --http-check-frequency="20s" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975259 4830 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975271 4830 flags.go:64] FLAG: --image-credential-provider-config="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975283 4830 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975295 4830 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975307 4830 flags.go:64] FLAG: --image-service-endpoint="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975320 4830 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975332 4830 flags.go:64] FLAG: --kube-api-burst="100" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975345 4830 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975359 4830 flags.go:64] FLAG: --kube-api-qps="50" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975371 4830 flags.go:64] FLAG: --kube-reserved="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975384 4830 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975396 4830 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975409 4830 flags.go:64] FLAG: --kubelet-cgroups="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975422 4830 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975437 4830 flags.go:64] FLAG: --lock-file="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975450 4830 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975463 4830 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975476 4830 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975495 4830 flags.go:64] FLAG: --log-json-split-stream="false" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975521 4830 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975534 4830 flags.go:64] FLAG: --log-text-split-stream="false" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975547 4830 flags.go:64] FLAG: --logging-format="text" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975560 4830 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975574 4830 flags.go:64] FLAG: 
--make-iptables-util-chains="true" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975587 4830 flags.go:64] FLAG: --manifest-url="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975601 4830 flags.go:64] FLAG: --manifest-url-header="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975630 4830 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975642 4830 flags.go:64] FLAG: --max-open-files="1000000" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975658 4830 flags.go:64] FLAG: --max-pods="110" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975671 4830 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975684 4830 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975697 4830 flags.go:64] FLAG: --memory-manager-policy="None" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975710 4830 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975771 4830 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975787 4830 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975800 4830 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975830 4830 flags.go:64] FLAG: --node-status-max-images="50" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975843 4830 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975856 4830 flags.go:64] FLAG: --oom-score-adj="-999" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975869 4830 flags.go:64] FLAG: --pod-cidr="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975881 4830 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975902 4830 flags.go:64] FLAG: --pod-manifest-path="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975914 4830 flags.go:64] FLAG: --pod-max-pids="-1" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975927 4830 flags.go:64] FLAG: --pods-per-core="0" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975939 4830 flags.go:64] FLAG: --port="10250" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975952 4830 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975967 4830 flags.go:64] FLAG: --provider-id="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975979 4830 flags.go:64] FLAG: --qos-reserved="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.975992 4830 flags.go:64] FLAG: --read-only-port="10255" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976005 4830 flags.go:64] FLAG: --register-node="true" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976017 4830 flags.go:64] FLAG: --register-schedulable="true" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976030 4830 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976052 4830 flags.go:64] FLAG: 
--registry-burst="10" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976064 4830 flags.go:64] FLAG: --registry-qps="5" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976076 4830 flags.go:64] FLAG: --reserved-cpus="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976088 4830 flags.go:64] FLAG: --reserved-memory="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976102 4830 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976113 4830 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976123 4830 flags.go:64] FLAG: --rotate-certificates="false" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976133 4830 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976142 4830 flags.go:64] FLAG: --runonce="false" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976152 4830 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976163 4830 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976174 4830 flags.go:64] FLAG: --seccomp-default="false" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976184 4830 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976193 4830 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976204 4830 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976214 4830 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976225 4830 flags.go:64] FLAG: --storage-driver-password="root" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976236 4830 flags.go:64] FLAG: --storage-driver-secure="false" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976246 4830 flags.go:64] FLAG: --storage-driver-table="stats" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976256 4830 flags.go:64] FLAG: --storage-driver-user="root" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976266 4830 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976306 4830 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976318 4830 flags.go:64] FLAG: --system-cgroups="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976327 4830 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976343 4830 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976355 4830 flags.go:64] FLAG: --tls-cert-file="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976365 4830 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976378 4830 flags.go:64] FLAG: --tls-min-version="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976388 4830 flags.go:64] FLAG: --tls-private-key-file="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976397 4830 flags.go:64] FLAG: --topology-manager-policy="none" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976407 4830 flags.go:64] FLAG: --topology-manager-policy-options="" 
Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976417 4830 flags.go:64] FLAG: --topology-manager-scope="container" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976428 4830 flags.go:64] FLAG: --v="2" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976441 4830 flags.go:64] FLAG: --version="false" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976457 4830 flags.go:64] FLAG: --vmodule="" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976473 4830 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.976487 4830 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.976926 4830 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.976978 4830 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.976995 4830 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977006 4830 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977018 4830 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977028 4830 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977044 4830 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977056 4830 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977069 4830 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977082 4830 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977094 4830 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977106 4830 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977117 4830 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977128 4830 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977143 4830 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
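The long run of flags.go:64 lines that ends just above is the kubelet dumping its entire flag set with effective values once parsing is done, which is why defaults (--containerd, --registry-qps) appear interleaved with the handful of flags the unit actually passes. The same pattern with the standard library's flag package; the kubelet itself sits on spf13/pflag, so this is an approximation:

package main

import (
	"flag"
	"log"
)

func main() {
	// Two sample flags standing in for the kubelet's full set.
	flag.Int("max-pods", 110, "maximum pods per node")
	flag.String("node-ip", "", "node IP address")
	flag.Parse()

	// Walk the whole flag set after parsing and print each effective
	// value; flag.VisitAll iterates in lexicographic order, which is
	// why the dump above comes out alphabetical.
	flag.VisitAll(func(f *flag.Flag) {
		log.Printf("FLAG: --%s=%q", f.Name, f.Value.String())
	})
}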
Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977156 4830 feature_gate.go:330] unrecognized feature gate: Example Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977167 4830 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977178 4830 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977189 4830 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977200 4830 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977211 4830 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977222 4830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977232 4830 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977243 4830 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977254 4830 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977266 4830 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977280 4830 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977291 4830 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977306 4830 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977320 4830 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977332 4830 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977343 4830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977355 4830 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977366 4830 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977379 4830 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977391 4830 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977402 4830 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977413 4830 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977426 4830 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977437 4830 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977447 4830 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977458 4830 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977469 4830 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977483 4830 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977497 4830 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977509 4830 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977523 4830 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977535 4830 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977546 4830 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977558 4830 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977570 4830 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977582 4830 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977593 4830 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977605 4830 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977618 4830 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977629 4830 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977640 4830 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977651 4830 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977662 4830 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977672 4830 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977683 4830 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977694 4830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977705 4830 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977715 4830 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977760 4830 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977773 4830 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977784 4830 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977795 4830 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977805 4830 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977816 4830 feature_gate.go:330] unrecognized 
feature gate: ClusterAPIInstallIBMCloud Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.977827 4830 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.977861 4830 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.991475 4830 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.991525 4830 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991618 4830 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991627 4830 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991632 4830 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991636 4830 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991640 4830 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991645 4830 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991649 4830 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991654 4830 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991658 4830 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991664 4830 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
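The feature_gate.go:386 summary above appears to hold only the explicitly-set gates the kubelet recognizes: fifteen survive here (ValidatingAdmissionPolicy:true, KMSv1:true, NodeSwap:false, ...), and every OpenShift-only name has already been filtered out by the warnings. A sketch of that filtering step under that reading, with a toy registry standing in for the real one:

package main

import (
	"fmt"
	"sort"
)

func main() {
	// Gates this kubelet build registers (illustrative subset, used as a set).
	known := map[string]struct{}{
		"CloudDualStackNodeIPs":                  {},
		"DisableKubeletCloudCredentialProviders": {},
		"KMSv1":                                  {},
		"NodeSwap":                               {},
		"ValidatingAdmissionPolicy":              {},
	}
	// Explicit settings handed down by the rendered OpenShift config;
	// the platform-only names are the ones warned about above.
	requested := map[string]bool{
		"CloudDualStackNodeIPs":     true,
		"GatewayAPI":                true, // unrecognized -> dropped
		"AdminNetworkPolicy":        true, // unrecognized -> dropped
		"KMSv1":                     true,
		"NodeSwap":                  false,
		"ValidatingAdmissionPolicy": true,
	}

	effective := map[string]bool{}
	for name, val := range requested {
		if _, ok := known[name]; ok {
			effective[name] = val
		}
	}

	// Print in sorted order, mirroring "feature gates: {map[...]}".
	names := make([]string, 0, len(effective))
	for name := range effective {
		names = append(names, name)
	}
	sort.Strings(names)
	fmt.Print("feature gates: {map[")
	for i, name := range names {
		if i > 0 {
			fmt.Print(" ")
		}
		fmt.Printf("%s:%t", name, effective[name])
	}
	fmt.Println("]}")
}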
Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991669 4830 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991674 4830 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991679 4830 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991683 4830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991686 4830 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991690 4830 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991694 4830 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991698 4830 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991703 4830 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991707 4830 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991710 4830 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991714 4830 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991718 4830 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991748 4830 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991753 4830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991756 4830 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991761 4830 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991764 4830 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991769 4830 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991775 4830 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991778 4830 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991782 4830 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991786 4830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991792 4830 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991796 4830 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991801 4830 feature_gate.go:330] unrecognized feature gate: 
AWSEFSDriverVolumeMetrics Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991805 4830 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991810 4830 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991814 4830 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991820 4830 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991824 4830 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991828 4830 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991833 4830 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991838 4830 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991843 4830 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991848 4830 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991851 4830 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991856 4830 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991859 4830 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991864 4830 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991867 4830 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991872 4830 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991875 4830 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991879 4830 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991882 4830 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991886 4830 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991889 4830 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991894 4830 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991898 4830 feature_gate.go:330] unrecognized feature gate: Example Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991902 4830 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991906 4830 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991910 4830 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991914 4830 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991918 4830 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991922 4830 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991929 4830 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991933 4830 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991937 4830 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991940 4830 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991944 4830 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.991947 4830 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.991955 4830 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992082 4830 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992087 4830 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992091 4830 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992095 4830 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992099 4830 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992104 4830 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992109 4830 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992113 4830 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992117 4830 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992122 4830 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992126 4830 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992130 4830 feature_gate.go:330] unrecognized feature gate: Example Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992134 4830 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992139 4830 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992143 4830 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992146 4830 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992150 4830 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992155 4830 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992159 4830 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992164 4830 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992167 4830 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992172 4830 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992176 4830 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992180 4830 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992184 4830 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992188 4830 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992192 4830 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992195 4830 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992199 4830 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992210 4830 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992213 4830 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 
09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992217 4830 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992221 4830 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992225 4830 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992229 4830 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992232 4830 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992236 4830 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992239 4830 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992243 4830 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992248 4830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992252 4830 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992256 4830 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992260 4830 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992291 4830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992296 4830 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992301 4830 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992306 4830 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992310 4830 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992314 4830 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992317 4830 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992321 4830 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992326 4830 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992330 4830 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992335 4830 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992339 4830 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992342 4830 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992346 4830 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992349 4830 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992353 4830 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992357 4830 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992361 4830 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992365 4830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992368 4830 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992372 4830 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992375 4830 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992380 4830 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992384 4830 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992388 4830 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992391 4830 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992395 4830 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 09:00:55 crc kubenswrapper[4830]: W0131 09:00:55.992399 4830 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.992405 4830 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.992565 4830 server.go:940] "Client rotation is on, will bootstrap in background" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.996938 4830 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.997055 4830 certificate_store.go:130] Loading 
cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.998545 4830 server.go:997] "Starting client certificate rotation" Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.998583 4830 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.998763 4830 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-18 02:47:59.670357153 +0000 UTC Jan 31 09:00:55 crc kubenswrapper[4830]: I0131 09:00:55.998857 4830 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.027934 4830 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.032878 4830 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 09:00:56 crc kubenswrapper[4830]: E0131 09:00:56.033603 4830 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.53:6443: connect: connection refused" logger="UnhandledError" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.052773 4830 log.go:25] "Validated CRI v1 runtime API" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.132768 4830 log.go:25] "Validated CRI v1 image API" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.135139 4830 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.140371 4830 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-31-08-56-31-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.140405 4830 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.156268 4830 manager.go:217] Machine: {Timestamp:2026-01-31 09:00:56.153095642 +0000 UTC m=+0.646458104 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654116352 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:c42072f0-7f1e-4cb8-a24e-882cf5477d0b BootID:09bf5dcf-c0f5-4874-a379-a4244cbfeb7d Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365408768 Type:vfs 
Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108168 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827056128 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:e3:99:da Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:e3:99:da Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ff:1a:e7 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:2f:04:13 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:81:2a:60 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:71:83:63 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:aa:4a:64:25:48:f0 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:a6:b7:59:b8:f2:26 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654116352 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 
Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.156512 4830 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.156643 4830 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.156977 4830 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.157166 4830 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.157207 4830 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.157443 4830 topology_manager.go:138] "Creating topology manager with none policy" 
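The container_manager_linux.go record above dumps the kubelet's entire NodeConfig as one JSON blob, including the SystemReserved allocations and the HardEvictionThresholds this node will enforce. As a minimal sketch (illustrative tooling, not kubelet code), the thresholds can be pulled out of a blob shaped like the logged one and printed in readable form:

    import json

    # Trimmed copy of the NodeConfig fields of interest from the record above.
    node_config = json.loads("""
    {"SystemReserved": {"cpu": "200m", "ephemeral-storage": "350Mi", "memory": "350Mi"},
     "HardEvictionThresholds": [
      {"Signal": "imagefs.inodesFree", "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.05}},
      {"Signal": "memory.available", "Operator": "LessThan", "Value": {"Quantity": "100Mi", "Percentage": 0}},
      {"Signal": "nodefs.available", "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.1}},
      {"Signal": "nodefs.inodesFree", "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.05}},
      {"Signal": "imagefs.available", "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.15}}]}
    """)

    for t in node_config["HardEvictionThresholds"]:
        v = t["Value"]
        # A threshold is either an absolute quantity ("100Mi") or a fraction (0.05 -> 5%).
        limit = v["Quantity"] if v["Quantity"] is not None else f"{v['Percentage']:.0%}"
        print(f"evict when {t['Signal']} {t['Operator']} {limit}")

Read against the record: this kubelet starts hard-evicting once memory.available drops below 100Mi, or once nodefs/imagefs free space and free inodes fall below the listed fractions.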
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.157457 4830 container_manager_linux.go:303] "Creating device plugin manager" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.158062 4830 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.158096 4830 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.160112 4830 state_mem.go:36] "Initialized new in-memory state store" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.160544 4830 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.167174 4830 kubelet.go:418] "Attempting to sync node with API server" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.167213 4830 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.167242 4830 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.167258 4830 kubelet.go:324] "Adding apiserver pod source" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.167273 4830 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.171763 4830 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.172663 4830 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
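The certificate_store.go and certificate_manager.go records in this log show client and serving certificates that expire 2026-02-24 but carry rotation deadlines in November 2025: client-go's certificate manager deliberately schedules rotation well before expiry, at a jittered point roughly 70 to 90 percent of the way through the certificate's validity, which is consistent with both logged deadlines. A minimal sketch for reading the same expiration dates out of the PEM files named above, assuming the third-party cryptography package is installed and that each file stores the certificate and key together (as the "Loading cert/key pair" records suggest), so the certificate block is sliced out first:

    from cryptography import x509

    # Paths taken from the certificate_store.go records in this log.
    for path in ("/var/lib/kubelet/pki/kubelet-client-current.pem",
                 "/var/lib/kubelet/pki/kubelet-server-current.pem"):
        pem = open(path, "rb").read()
        # Keep only the certificate block; the file may also hold the private key.
        begin = pem.index(b"-----BEGIN CERTIFICATE-----")
        end = pem.index(b"-----END CERTIFICATE-----") + len(b"-----END CERTIFICATE-----")
        cert = x509.load_pem_x509_certificate(pem[begin:end])
        print(path, "expires", cert.not_valid_after)

Run on the node, this should print the same 2026-02-24 expirations that certificate_manager.go reports for the kube-apiserver-client-kubelet and kubelet-serving certificates.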
Jan 31 09:00:56 crc kubenswrapper[4830]: W0131 09:00:56.174708 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.174796 4830 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 31 09:00:56 crc kubenswrapper[4830]: E0131 09:00:56.174829 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.53:6443: connect: connection refused" logger="UnhandledError" Jan 31 09:00:56 crc kubenswrapper[4830]: W0131 09:00:56.174845 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused Jan 31 09:00:56 crc kubenswrapper[4830]: E0131 09:00:56.174967 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.53:6443: connect: connection refused" logger="UnhandledError" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.176431 4830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.176459 4830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.176469 4830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.176479 4830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.176512 4830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.176522 4830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.176531 4830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.176547 4830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.176557 4830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.176567 4830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.176591 4830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.176601 4830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.179808 4830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 
09:00:56.180404 4830 server.go:1280] "Started kubelet" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.180743 4830 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 31 09:00:56 crc systemd[1]: Started Kubernetes Kubelet. Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.180981 4830 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.185657 4830 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.185851 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.188626 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.188661 4830 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.188993 4830 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.189031 4830 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 31 09:00:56 crc kubenswrapper[4830]: E0131 09:00:56.189061 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.189270 4830 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.189371 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 23:00:50.180004736 +0000 UTC Jan 31 09:00:56 crc kubenswrapper[4830]: W0131 09:00:56.189947 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused Jan 31 09:00:56 crc kubenswrapper[4830]: E0131 09:00:56.190361 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.53:6443: connect: connection refused" logger="UnhandledError" Jan 31 09:00:56 crc kubenswrapper[4830]: E0131 09:00:56.190089 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" interval="200ms" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.192234 4830 server.go:460] "Adding debug handlers to kubelet server" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.192497 4830 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 
09:00:56.192533 4830 factory.go:55] Registering systemd factory Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.192547 4830 factory.go:221] Registration of the systemd container factory successfully Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.197125 4830 factory.go:153] Registering CRI-O factory Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.197182 4830 factory.go:221] Registration of the crio container factory successfully Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.197226 4830 factory.go:103] Registering Raw factory Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.197294 4830 manager.go:1196] Started watching for new ooms in manager Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.199234 4830 manager.go:319] Starting recovery of all containers Jan 31 09:00:56 crc kubenswrapper[4830]: E0131 09:00:56.197429 4830 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.53:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188fc53fc055e8e1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 09:00:56.180361441 +0000 UTC m=+0.673723883,LastTimestamp:2026-01-31 09:00:56.180361441 +0000 UTC m=+0.673723883,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.202641 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.202692 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.202705 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.202716 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.202747 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.202760 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.202768 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.202777 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.202787 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.202796 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.202805 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.202814 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.202823 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.202836 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.202846 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.202859 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.202868 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.202879 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.202890 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.202902 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.202913 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.202926 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.202963 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.202990 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.203005 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.207289 4830 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.207355 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 
09:00:56.208245 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208311 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208329 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208358 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208374 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208403 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208416 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208445 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208457 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208471 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208501 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208513 4830 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208524 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208535 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208548 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208562 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208574 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208603 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208614 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208642 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208656 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208668 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208680 4830 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208709 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208737 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208754 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208772 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208784 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208797 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208843 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208859 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208901 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208918 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.208955 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.209010 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.209602 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.209693 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.209708 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.209742 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.209756 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.209776 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.209788 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.209849 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.209864 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.209876 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.209907 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.209920 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.209931 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.209949 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210083 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210099 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210112 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210151 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210168 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210182 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210194 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210226 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210247 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210258 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210270 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210297 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210310 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210320 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210337 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210349 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210393 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210405 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210417 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210450 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210461 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210475 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210497 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210509 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210537 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210548 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210564 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210576 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210586 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210625 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210639 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210653 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210679 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210690 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210701 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210711 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210742 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210756 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210766 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210777 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210789 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210799 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210826 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210838 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210850 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210861 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210871 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210897 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210909 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210919 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210930 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210940 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210951 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210977 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.210989 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211001 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211012 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211033 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211059 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211071 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211082 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211093 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211104 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211129 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211142 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211151 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211161 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211172 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211182 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211209 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211221 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211232 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211242 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211253 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211263 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211288 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211300 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211309 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211320 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211330 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211340 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211364 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211375 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211386 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211397 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211409 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211420 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211444 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211456 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211469 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211482 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211494 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211534 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211546 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211557 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211567 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211593 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211605 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211617 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211629 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211640 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211651 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211677 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211689 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211701 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211713 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211743 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211760 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211771 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211781 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211791 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211821 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211831 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211839 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211849 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211859 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211870 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211897 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211907 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211918 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211929 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211938 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211948 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211975 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211985 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.211994 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.212004 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.212014 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.212024 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.212034 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.212060 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.212070 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.212081 4830 reconstruct.go:97] "Volume reconstruction finished" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.212091 4830 reconciler.go:26] "Reconciler: start to sync state" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.228010 4830 manager.go:324] Recovery completed Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.239968 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.241947 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.241998 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.242012 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.243201 4830 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.243225 4830 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.243249 4830 state_mem.go:36] "Initialized new in-memory state store" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.247714 4830 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.249921 4830 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.250028 4830 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.250099 4830 kubelet.go:2335] "Starting kubelet main sync loop" Jan 31 09:00:56 crc kubenswrapper[4830]: E0131 09:00:56.250211 4830 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 31 09:00:56 crc kubenswrapper[4830]: W0131 09:00:56.250844 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused Jan 31 09:00:56 crc kubenswrapper[4830]: E0131 09:00:56.250923 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.53:6443: connect: connection refused" logger="UnhandledError" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.267174 4830 policy_none.go:49] "None policy: Start" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.268181 4830 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.268223 4830 state_mem.go:35] "Initializing new in-memory state store" Jan 31 09:00:56 crc kubenswrapper[4830]: E0131 09:00:56.290030 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.335193 4830 manager.go:334] "Starting Device Plugin manager" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.335252 4830 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.335266 4830 server.go:79] "Starting device plugin registration server" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.335798 4830 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.335817 4830 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.336068 4830 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.336289 4830 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.336307 4830 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 31 09:00:56 crc kubenswrapper[4830]: E0131 09:00:56.343695 4830 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.350312 4830 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 09:00:56 crc kubenswrapper[4830]: 
I0131 09:00:56.350380 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.353997 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.354031 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.354046 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.354209 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.354518 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.354583 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.355043 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.355082 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.355091 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.355180 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.355305 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.355361 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.356655 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.356675 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.356684 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.356983 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.357020 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.357033 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.357029 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.357154 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.357173 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.357211 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.357379 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.357453 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.358356 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.358387 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.358398 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.358560 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.358760 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.358814 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.358922 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.358964 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.358973 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.359229 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.359257 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.359269 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.359446 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.359477 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.359538 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.359559 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.359572 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.360111 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.360145 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.360190 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:00:56 crc kubenswrapper[4830]: E0131 09:00:56.391878 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" interval="400ms" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.413969 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.414022 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.414108 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.414130 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.414148 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.414166 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.414259 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.414311 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.414359 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.414386 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.414410 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.414969 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.415040 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.415079 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.415134 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.436547 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.437871 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.437912 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.437926 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.437956 4830 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 09:00:56 crc kubenswrapper[4830]: E0131 09:00:56.438699 4830 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.53:6443: connect: connection refused" node="crc" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.515994 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516068 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 09:00:56 crc kubenswrapper[4830]: 
I0131 09:00:56.516093 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516115 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516142 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516165 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516190 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516217 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516246 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516271 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516296 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516318 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516340 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516362 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516336 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516453 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516499 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516384 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516559 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516605 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516639 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516672 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516707 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516759 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516795 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516835 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516862 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516892 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516919 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.516952 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.638980 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.640857 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.640898 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.640908 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.640935 4830 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: E0131 09:00:56.641510 4830 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.53:6443: connect: connection refused" node="crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.692777 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.698634 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.716086 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.724168 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: W0131 09:00:56.740479 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-1e01ecf9a8bc72eaf2247139f48567239c65c81ba02980546673da0484003f43 WatchSource:0}: Error finding container 1e01ecf9a8bc72eaf2247139f48567239c65c81ba02980546673da0484003f43: Status 404 returned error can't find the container with id 1e01ecf9a8bc72eaf2247139f48567239c65c81ba02980546673da0484003f43
Jan 31 09:00:56 crc kubenswrapper[4830]: W0131 09:00:56.740913 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-e33e8c0cb3703bb1ef01433ba99cbd4cb8b46f2299a00a968001c21c13b78c20 WatchSource:0}: Error finding container e33e8c0cb3703bb1ef01433ba99cbd4cb8b46f2299a00a968001c21c13b78c20: Status 404 returned error can't find the container with id e33e8c0cb3703bb1ef01433ba99cbd4cb8b46f2299a00a968001c21c13b78c20
Jan 31 09:00:56 crc kubenswrapper[4830]: I0131 09:00:56.744569 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:00:56 crc kubenswrapper[4830]: W0131 09:00:56.746954 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-68655f5222b4b3428fe489c5224e1bc8ae524b0a55757f829de704f00726bf05 WatchSource:0}: Error finding container 68655f5222b4b3428fe489c5224e1bc8ae524b0a55757f829de704f00726bf05: Status 404 returned error can't find the container with id 68655f5222b4b3428fe489c5224e1bc8ae524b0a55757f829de704f00726bf05
Jan 31 09:00:56 crc kubenswrapper[4830]: W0131 09:00:56.752986 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-b97d2b5be76e4587e3580ee5215f3df6f7b70fccee9f22c7dc3bd68b74e11287 WatchSource:0}: Error finding container b97d2b5be76e4587e3580ee5215f3df6f7b70fccee9f22c7dc3bd68b74e11287: Status 404 returned error can't find the container with id b97d2b5be76e4587e3580ee5215f3df6f7b70fccee9f22c7dc3bd68b74e11287
Jan 31 09:00:56 crc kubenswrapper[4830]: W0131 09:00:56.754256 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-8248629293c3fde0147b601a30a3a24d54f22ced0f70035c260668208096903a WatchSource:0}: Error finding container 8248629293c3fde0147b601a30a3a24d54f22ced0f70035c260668208096903a: Status 404 returned error can't find the container with id 8248629293c3fde0147b601a30a3a24d54f22ced0f70035c260668208096903a
Jan 31 09:00:56 crc kubenswrapper[4830]: E0131 09:00:56.793570 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" interval="800ms"
Jan 31 09:00:57 crc kubenswrapper[4830]: I0131 09:00:57.041942 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:00:57 crc kubenswrapper[4830]: I0131 09:00:57.043134 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:00:57 crc kubenswrapper[4830]: I0131 09:00:57.043173 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:00:57 crc kubenswrapper[4830]: I0131 09:00:57.043181 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:00:57 crc kubenswrapper[4830]: I0131 09:00:57.043209 4830 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 31 09:00:57 crc kubenswrapper[4830]: E0131 09:00:57.043839 4830 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.53:6443: connect: connection refused" node="crc"
Jan 31 09:00:57 crc kubenswrapper[4830]: I0131 09:00:57.187219 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused
Jan 31 09:00:57 crc kubenswrapper[4830]: I0131 09:00:57.190219 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 01:09:05.499510878 +0000 UTC
Jan 31 09:00:57 crc kubenswrapper[4830]: I0131 09:00:57.254603 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8248629293c3fde0147b601a30a3a24d54f22ced0f70035c260668208096903a"}
Jan 31 09:00:57 crc kubenswrapper[4830]: I0131 09:00:57.256584 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b97d2b5be76e4587e3580ee5215f3df6f7b70fccee9f22c7dc3bd68b74e11287"}
Jan 31 09:00:57 crc kubenswrapper[4830]: I0131 09:00:57.257434 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"68655f5222b4b3428fe489c5224e1bc8ae524b0a55757f829de704f00726bf05"}
Jan 31 09:00:57 crc kubenswrapper[4830]: I0131 09:00:57.258399 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e33e8c0cb3703bb1ef01433ba99cbd4cb8b46f2299a00a968001c21c13b78c20"}
Jan 31 09:00:57 crc kubenswrapper[4830]: I0131 09:00:57.259339 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1e01ecf9a8bc72eaf2247139f48567239c65c81ba02980546673da0484003f43"}
Jan 31 09:00:57 crc kubenswrapper[4830]: W0131 09:00:57.276949 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused
Jan 31 09:00:57 crc kubenswrapper[4830]: E0131 09:00:57.277032 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.53:6443: connect: connection refused" logger="UnhandledError"
Jan 31 09:00:57 crc kubenswrapper[4830]: W0131 09:00:57.334376 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused
Jan 31 09:00:57 crc kubenswrapper[4830]: E0131 09:00:57.334691 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.53:6443: connect: connection refused" logger="UnhandledError"
Jan 31 09:00:57 crc kubenswrapper[4830]: E0131 09:00:57.595150 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" interval="1.6s"
Jan 31 09:00:57 crc kubenswrapper[4830]: W0131 09:00:57.716195 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused
Jan 31 09:00:57 crc kubenswrapper[4830]: E0131 09:00:57.716305 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.53:6443: connect: connection refused" logger="UnhandledError"
Jan 31 09:00:57 crc kubenswrapper[4830]: W0131 09:00:57.728399 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused
Jan 31 09:00:57 crc kubenswrapper[4830]: E0131 09:00:57.728504 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.53:6443: connect: connection refused" logger="UnhandledError"
Jan 31 09:00:57 crc kubenswrapper[4830]: I0131 09:00:57.844365 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:00:57 crc kubenswrapper[4830]: I0131 09:00:57.846857 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:00:57 crc kubenswrapper[4830]: I0131 09:00:57.846904 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:00:57 crc kubenswrapper[4830]: I0131 09:00:57.846916 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:00:57 crc kubenswrapper[4830]: I0131 09:00:57.846980 4830 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 31 09:00:57 crc kubenswrapper[4830]: E0131 09:00:57.847636 4830 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.53:6443: connect: connection refused" node="crc"
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.043998 4830 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 31 09:00:58 crc kubenswrapper[4830]: E0131 09:00:58.045283 4830 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.53:6443: connect: connection refused" logger="UnhandledError"
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.186979 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.191175 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 09:52:07.361526626 +0000 UTC
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.264420 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa"}
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.264462 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.264476 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd"}
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.264489 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d"}
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.264499 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426"}
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.265429 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.265467 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.265480 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.267605 4830 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3" exitCode=0
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.267668 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3"}
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.267717 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.268686 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.268716 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.268751 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.269553 4830 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125" exitCode=0
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.269581 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125"}
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.269672 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.270747 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.270773 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.270782 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.271975 4830 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2096510a04ddaabd5c882f0d0913df7d2be58b1bece01c9d9952aa0ef70fdbb6" exitCode=0
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.272141 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2096510a04ddaabd5c882f0d0913df7d2be58b1bece01c9d9952aa0ef70fdbb6"}
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.272158 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.272241 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.273308 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.273381 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.273438 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.273340 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.273608 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.273624 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.275019 4830 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="c9a5ad1758f6e487cb246cd0b326198c357b07fa83729681d0e68a5a358c811f" exitCode=0
Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.275058 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"c9a5ad1758f6e487cb246cd0b326198c357b07fa83729681d0e68a5a358c811f"}
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"c9a5ad1758f6e487cb246cd0b326198c357b07fa83729681d0e68a5a358c811f"} Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.275223 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.275927 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.275952 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.275967 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.395388 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.396192 4830 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": dial tcp 192.168.126.11:10357: connect: connection refused" start-of-body= Jan 31 09:00:58 crc kubenswrapper[4830]: I0131 09:00:58.396284 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": dial tcp 192.168.126.11:10357: connect: connection refused" Jan 31 09:00:59 crc kubenswrapper[4830]: W0131 09:00:59.035331 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused Jan 31 09:00:59 crc kubenswrapper[4830]: E0131 09:00:59.035425 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.53:6443: connect: connection refused" logger="UnhandledError" Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.187359 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.191665 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 18:06:16.126463518 +0000 UTC Jan 31 09:00:59 crc kubenswrapper[4830]: E0131 09:00:59.196513 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" interval="3.2s" Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.281167 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d"} Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.281337 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607"} Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.281353 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75"} Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.281369 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b"} Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.283571 4830 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="48e6efdc83e36583b849fc3d7e0e36091b0b3586073ae15546cd3bfa9764fb81" exitCode=0 Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.283753 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"48e6efdc83e36583b849fc3d7e0e36091b0b3586073ae15546cd3bfa9764fb81"} Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.283829 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.284693 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.284715 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.284807 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.287833 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.288011 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"66ad13b4b3b7a21a296839b27f9730dcfd25d38b53430aa75e642c6bf04cb365"} Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.288697 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.288742 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.288754 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.292681 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 
09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.292739 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4c457892625099d1b14d857643ba5c70e76cfe582ee31c1b8736f4e278557ab1"} Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.292793 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"49f1cea3266a97316fb0737cb770f6da2abfd58b016987b92c19aa20a9366129"} Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.292920 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1dc96f3d1e085f925a6a1b73ef1312bd85072065059f20eb6c11f7d044635f8b"} Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.292938 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.293548 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.293584 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.293594 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.294387 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.294455 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.294465 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.448744 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.450007 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.450041 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.450054 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.450079 4830 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 09:00:59 crc kubenswrapper[4830]: E0131 09:00:59.450525 4830 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.53:6443: connect: connection refused" node="crc" Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.562855 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 09:00:59 crc kubenswrapper[4830]: I0131 09:00:59.571550 4830 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 09:00:59 crc kubenswrapper[4830]: W0131 09:00:59.578792 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused Jan 31 09:00:59 crc kubenswrapper[4830]: E0131 09:00:59.578901 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.53:6443: connect: connection refused" logger="UnhandledError" Jan 31 09:00:59 crc kubenswrapper[4830]: W0131 09:00:59.686440 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused Jan 31 09:00:59 crc kubenswrapper[4830]: E0131 09:00:59.686546 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.53:6443: connect: connection refused" logger="UnhandledError" Jan 31 09:01:00 crc kubenswrapper[4830]: I0131 09:01:00.192080 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 21:29:56.390314085 +0000 UTC Jan 31 09:01:00 crc kubenswrapper[4830]: I0131 09:01:00.236344 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 09:01:00 crc kubenswrapper[4830]: I0131 09:01:00.296922 4830 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8e86e6c091d7dbff392d0040ce519065173e2ccc0813d9fc5d172442a53e261f" exitCode=0 Jan 31 09:01:00 crc kubenswrapper[4830]: I0131 09:01:00.297018 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8e86e6c091d7dbff392d0040ce519065173e2ccc0813d9fc5d172442a53e261f"} Jan 31 09:01:00 crc kubenswrapper[4830]: I0131 09:01:00.297037 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:00 crc kubenswrapper[4830]: I0131 09:01:00.298078 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:00 crc kubenswrapper[4830]: I0131 09:01:00.298114 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:00 crc kubenswrapper[4830]: I0131 09:01:00.298128 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:00 crc kubenswrapper[4830]: I0131 09:01:00.299812 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"623a3927e0057e80fd78da0a015ae87c5b9eb95715ca1a6d40af90257e77a0e2"} Jan 31 09:01:00 
crc kubenswrapper[4830]: I0131 09:01:00.299824 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:00 crc kubenswrapper[4830]: I0131 09:01:00.299887 4830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 09:01:00 crc kubenswrapper[4830]: I0131 09:01:00.299908 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:00 crc kubenswrapper[4830]: I0131 09:01:00.299928 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:00 crc kubenswrapper[4830]: I0131 09:01:00.299928 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:00 crc kubenswrapper[4830]: I0131 09:01:00.300535 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:00 crc kubenswrapper[4830]: I0131 09:01:00.300558 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:00 crc kubenswrapper[4830]: I0131 09:01:00.300567 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:00 crc kubenswrapper[4830]: I0131 09:01:00.300884 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:00 crc kubenswrapper[4830]: I0131 09:01:00.300925 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:00 crc kubenswrapper[4830]: I0131 09:01:00.300946 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:00 crc kubenswrapper[4830]: I0131 09:01:00.301051 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:00 crc kubenswrapper[4830]: I0131 09:01:00.301081 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:00 crc kubenswrapper[4830]: I0131 09:01:00.301093 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:00 crc kubenswrapper[4830]: I0131 09:01:00.301060 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:00 crc kubenswrapper[4830]: I0131 09:01:00.301138 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:00 crc kubenswrapper[4830]: I0131 09:01:00.301149 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:00 crc kubenswrapper[4830]: I0131 09:01:00.468637 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 09:01:01 crc kubenswrapper[4830]: I0131 09:01:01.193953 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 05:44:46.432362936 +0000 UTC Jan 31 09:01:01 crc kubenswrapper[4830]: I0131 09:01:01.307324 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"726b5f788f66263de9451cf0f037d42d2dbc8b008923aa807dfd2020558c9ec8"} Jan 31 09:01:01 crc kubenswrapper[4830]: I0131 09:01:01.307372 4830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 09:01:01 crc kubenswrapper[4830]: I0131 09:01:01.307451 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:01 crc kubenswrapper[4830]: I0131 09:01:01.307469 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:01 crc kubenswrapper[4830]: I0131 09:01:01.307372 4830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 09:01:01 crc kubenswrapper[4830]: I0131 09:01:01.307549 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:01 crc kubenswrapper[4830]: I0131 09:01:01.307383 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9c3a31116226244e63ee914eda9ab1ff5eea97e5a6bea459cb43d11863386c7d"} Jan 31 09:01:01 crc kubenswrapper[4830]: I0131 09:01:01.307684 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9f4774e14cd30528af22073868dbbe43ebe8427a1843caa8c8e01226fd63b755"} Jan 31 09:01:01 crc kubenswrapper[4830]: I0131 09:01:01.307708 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d9d13fc8d32c706bbb52086b93137196db2708e789ca1b4f5a53656f1cec21e3"} Jan 31 09:01:01 crc kubenswrapper[4830]: I0131 09:01:01.307740 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b5502540b07b2fa92d36fff1afe0f6d48fae1f9d4a54d50ebb1c373546a61a7d"} Jan 31 09:01:01 crc kubenswrapper[4830]: I0131 09:01:01.308038 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:01 crc kubenswrapper[4830]: I0131 09:01:01.308807 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:01 crc kubenswrapper[4830]: I0131 09:01:01.308839 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:01 crc kubenswrapper[4830]: I0131 09:01:01.308848 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:01 crc kubenswrapper[4830]: I0131 09:01:01.309194 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:01 crc kubenswrapper[4830]: I0131 09:01:01.309231 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:01 crc kubenswrapper[4830]: I0131 09:01:01.309245 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:01 crc kubenswrapper[4830]: I0131 09:01:01.309194 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:01 crc kubenswrapper[4830]: I0131 09:01:01.309315 4830 
Jan 31 09:01:01 crc kubenswrapper[4830]: I0131 09:01:01.309337 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:01 crc kubenswrapper[4830]: I0131 09:01:01.310435 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:01 crc kubenswrapper[4830]: I0131 09:01:01.310482 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:01 crc kubenswrapper[4830]: I0131 09:01:01.310496 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:01 crc kubenswrapper[4830]: I0131 09:01:01.641640 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 31 09:01:02 crc kubenswrapper[4830]: I0131 09:01:02.194456 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 00:34:42.833181582 +0000 UTC
Jan 31 09:01:02 crc kubenswrapper[4830]: I0131 09:01:02.272146 4830 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 31 09:01:02 crc kubenswrapper[4830]: I0131 09:01:02.311446 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:01:02 crc kubenswrapper[4830]: I0131 09:01:02.314120 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:02 crc kubenswrapper[4830]: I0131 09:01:02.314166 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:02 crc kubenswrapper[4830]: I0131 09:01:02.314175 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:02 crc kubenswrapper[4830]: I0131 09:01:02.651090 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:01:02 crc kubenswrapper[4830]: I0131 09:01:02.652481 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:02 crc kubenswrapper[4830]: I0131 09:01:02.652528 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:02 crc kubenswrapper[4830]: I0131 09:01:02.652541 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:02 crc kubenswrapper[4830]: I0131 09:01:02.652568 4830 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 31 09:01:02 crc kubenswrapper[4830]: I0131 09:01:02.717699 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:01:02 crc kubenswrapper[4830]: I0131 09:01:02.718022 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:01:02 crc kubenswrapper[4830]: I0131 09:01:02.719373 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:02 crc kubenswrapper[4830]: I0131 09:01:02.719453 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:02 crc kubenswrapper[4830]: I0131 09:01:02.719465 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:03 crc kubenswrapper[4830]: I0131 09:01:03.068923 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:01:03 crc kubenswrapper[4830]: I0131 09:01:03.195170 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 06:52:25.758554332 +0000 UTC
Jan 31 09:01:03 crc kubenswrapper[4830]: I0131 09:01:03.315282 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:01:03 crc kubenswrapper[4830]: I0131 09:01:03.315281 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:01:03 crc kubenswrapper[4830]: I0131 09:01:03.316557 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:03 crc kubenswrapper[4830]: I0131 09:01:03.316590 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:03 crc kubenswrapper[4830]: I0131 09:01:03.316600 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:03 crc kubenswrapper[4830]: I0131 09:01:03.316820 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:03 crc kubenswrapper[4830]: I0131 09:01:03.316866 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:03 crc kubenswrapper[4830]: I0131 09:01:03.316875 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:03 crc kubenswrapper[4830]: I0131 09:01:03.503776 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:01:04 crc kubenswrapper[4830]: I0131 09:01:04.195706 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 05:23:43.223454265 +0000 UTC
Jan 31 09:01:04 crc kubenswrapper[4830]: I0131 09:01:04.317455 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:01:04 crc kubenswrapper[4830]: I0131 09:01:04.318750 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:04 crc kubenswrapper[4830]: I0131 09:01:04.318791 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:04 crc kubenswrapper[4830]: I0131 09:01:04.318804 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:05 crc kubenswrapper[4830]: I0131 09:01:05.126457 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 09:01:05 crc kubenswrapper[4830]: I0131 09:01:05.126694 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:01:05 crc kubenswrapper[4830]: I0131 09:01:05.128050 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:05 crc kubenswrapper[4830]: I0131 09:01:05.128110 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:05 crc kubenswrapper[4830]: I0131 09:01:05.128122 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:05 crc kubenswrapper[4830]: I0131 09:01:05.196609 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 21:24:04.587402101 +0000 UTC
Jan 31 09:01:06 crc kubenswrapper[4830]: I0131 09:01:06.197770 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 08:03:22.003921771 +0000 UTC
Jan 31 09:01:06 crc kubenswrapper[4830]: E0131 09:01:06.343837 4830 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 31 09:01:07 crc kubenswrapper[4830]: I0131 09:01:07.198537 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 16:03:13.107681218 +0000 UTC
Jan 31 09:01:08 crc kubenswrapper[4830]: I0131 09:01:08.199554 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 15:54:18.159456938 +0000 UTC
Jan 31 09:01:09 crc kubenswrapper[4830]: I0131 09:01:09.200326 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 00:33:34.211190651 +0000 UTC
Jan 31 09:01:10 crc kubenswrapper[4830]: W0131 09:01:10.162386 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 31 09:01:10 crc kubenswrapper[4830]: I0131 09:01:10.162503 4830 trace.go:236] Trace[262573862]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 09:01:00.160) (total time: 10002ms):
Jan 31 09:01:10 crc kubenswrapper[4830]: Trace[262573862]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:01:10.162)
Jan 31 09:01:10 crc kubenswrapper[4830]: Trace[262573862]: [10.002023s] [10.002023s] END
Jan 31 09:01:10 crc kubenswrapper[4830]: E0131 09:01:10.162528 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 31 09:01:10 crc kubenswrapper[4830]: I0131 09:01:10.188372 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 31 09:01:10 crc kubenswrapper[4830]: I0131 09:01:10.200759 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 17:23:11.543235683 +0000 UTC
Jan 31 09:01:10 crc kubenswrapper[4830]: I0131 09:01:10.267423 4830 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 31 09:01:10 crc kubenswrapper[4830]: I0131 09:01:10.267487 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 31 09:01:10 crc kubenswrapper[4830]: I0131 09:01:10.278837 4830 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 31 09:01:10 crc kubenswrapper[4830]: I0131 09:01:10.278934 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 31 09:01:10 crc kubenswrapper[4830]: I0131 09:01:10.333378 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 31 09:01:10 crc kubenswrapper[4830]: I0131 09:01:10.335510 4830 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="623a3927e0057e80fd78da0a015ae87c5b9eb95715ca1a6d40af90257e77a0e2" exitCode=255
Jan 31 09:01:10 crc kubenswrapper[4830]: I0131 09:01:10.335579 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"623a3927e0057e80fd78da0a015ae87c5b9eb95715ca1a6d40af90257e77a0e2"}
Jan 31 09:01:10 crc kubenswrapper[4830]: I0131 09:01:10.335869 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:01:10 crc kubenswrapper[4830]: I0131 09:01:10.337086 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:10 crc kubenswrapper[4830]: I0131 09:01:10.337126 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:10 crc kubenswrapper[4830]: I0131 09:01:10.337140 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:10 crc kubenswrapper[4830]: I0131 09:01:10.337871 4830 scope.go:117] "RemoveContainer" containerID="623a3927e0057e80fd78da0a015ae87c5b9eb95715ca1a6d40af90257e77a0e2"
Jan 31 09:01:10 crc kubenswrapper[4830]: I0131 09:01:10.366577 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 31 09:01:10 crc kubenswrapper[4830]: I0131 09:01:10.366804 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:01:10 crc kubenswrapper[4830]: I0131 09:01:10.369830 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:10 crc kubenswrapper[4830]: I0131 09:01:10.369894 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:10 crc kubenswrapper[4830]: I0131 09:01:10.369909 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:10 crc kubenswrapper[4830]: I0131 09:01:10.411844 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 31 09:01:11 crc kubenswrapper[4830]: I0131 09:01:11.201456 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 01:54:04.948292173 +0000 UTC
Jan 31 09:01:11 crc kubenswrapper[4830]: I0131 09:01:11.340875 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 31 09:01:11 crc kubenswrapper[4830]: I0131 09:01:11.342648 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9"}
Jan 31 09:01:11 crc kubenswrapper[4830]: I0131 09:01:11.342756 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:01:11 crc kubenswrapper[4830]: I0131 09:01:11.342876 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:01:11 crc kubenswrapper[4830]: I0131 09:01:11.343744 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:11 crc kubenswrapper[4830]: I0131 09:01:11.343771 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:11 crc kubenswrapper[4830]: I0131 09:01:11.343781 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:11 crc kubenswrapper[4830]: I0131 09:01:11.343951 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:11 crc kubenswrapper[4830]: I0131 09:01:11.343989 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:11 crc kubenswrapper[4830]: I0131 09:01:11.344006 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:11 crc kubenswrapper[4830]: I0131 09:01:11.365192 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 31 09:01:11 crc kubenswrapper[4830]: I0131 09:01:11.396881 4830 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 09:01:11 crc kubenswrapper[4830]: I0131 09:01:11.396988 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 31 09:01:12 crc kubenswrapper[4830]: I0131 09:01:12.201963 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 23:50:47.877840275 +0000 UTC
Jan 31 09:01:12 crc kubenswrapper[4830]: I0131 09:01:12.346523 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:01:12 crc kubenswrapper[4830]: I0131 09:01:12.347877 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:12 crc kubenswrapper[4830]: I0131 09:01:12.347980 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:12 crc kubenswrapper[4830]: I0131 09:01:12.348057 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:12 crc kubenswrapper[4830]: I0131 09:01:12.718082 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:01:12 crc kubenswrapper[4830]: I0131 09:01:12.718508 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:01:12 crc kubenswrapper[4830]: I0131 09:01:12.719887 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:12 crc kubenswrapper[4830]: I0131 09:01:12.719926 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:12 crc kubenswrapper[4830]: I0131 09:01:12.719937 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:13 crc kubenswrapper[4830]: I0131 09:01:13.075875 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:01:13 crc kubenswrapper[4830]: I0131 09:01:13.203601 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 11:07:11.580530071 +0000 UTC
Jan 31 09:01:13 crc kubenswrapper[4830]: I0131 09:01:13.349597 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:01:13 crc kubenswrapper[4830]: I0131 09:01:13.351149 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:13 crc kubenswrapper[4830]: I0131 09:01:13.351201 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:13 crc kubenswrapper[4830]: I0131 09:01:13.351212 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:13 crc kubenswrapper[4830]: I0131 09:01:13.354996 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:01:14 crc
kubenswrapper[4830]: I0131 09:01:14.204458 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 06:59:41.913354564 +0000 UTC Jan 31 09:01:14 crc kubenswrapper[4830]: I0131 09:01:14.351231 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:14 crc kubenswrapper[4830]: I0131 09:01:14.352502 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:14 crc kubenswrapper[4830]: I0131 09:01:14.352545 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:14 crc kubenswrapper[4830]: I0131 09:01:14.352582 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:15 crc kubenswrapper[4830]: I0131 09:01:15.130634 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 09:01:15 crc kubenswrapper[4830]: I0131 09:01:15.131325 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:15 crc kubenswrapper[4830]: I0131 09:01:15.132736 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:15 crc kubenswrapper[4830]: I0131 09:01:15.132774 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:15 crc kubenswrapper[4830]: I0131 09:01:15.132785 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:15 crc kubenswrapper[4830]: I0131 09:01:15.206222 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 14:01:03.596809552 +0000 UTC Jan 31 09:01:15 crc kubenswrapper[4830]: I0131 09:01:15.273144 4830 trace.go:236] Trace[1305094649]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 09:01:03.820) (total time: 11452ms): Jan 31 09:01:15 crc kubenswrapper[4830]: Trace[1305094649]: ---"Objects listed" error: 11452ms (09:01:15.273) Jan 31 09:01:15 crc kubenswrapper[4830]: Trace[1305094649]: [11.452930921s] [11.452930921s] END Jan 31 09:01:15 crc kubenswrapper[4830]: I0131 09:01:15.273191 4830 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 09:01:15 crc kubenswrapper[4830]: E0131 09:01:15.273329 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 31 09:01:15 crc kubenswrapper[4830]: I0131 09:01:15.274816 4830 trace.go:236] Trace[1156577499]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 09:01:03.900) (total time: 11374ms): Jan 31 09:01:15 crc kubenswrapper[4830]: Trace[1156577499]: ---"Objects listed" error: 11374ms (09:01:15.274) Jan 31 09:01:15 crc kubenswrapper[4830]: Trace[1156577499]: [11.37422131s] [11.37422131s] END Jan 31 09:01:15 crc kubenswrapper[4830]: I0131 09:01:15.274865 4830 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 31 09:01:15 crc 
kubenswrapper[4830]: E0131 09:01:15.276457 4830 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 31 09:01:15 crc kubenswrapper[4830]: I0131 09:01:15.277028 4830 trace.go:236] Trace[295025249]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 09:01:02.931) (total time: 12345ms): Jan 31 09:01:15 crc kubenswrapper[4830]: Trace[295025249]: ---"Objects listed" error: 12345ms (09:01:15.276) Jan 31 09:01:15 crc kubenswrapper[4830]: Trace[295025249]: [12.345834189s] [12.345834189s] END Jan 31 09:01:15 crc kubenswrapper[4830]: I0131 09:01:15.277215 4830 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 31 09:01:15 crc kubenswrapper[4830]: I0131 09:01:15.277083 4830 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 31 09:01:15 crc kubenswrapper[4830]: I0131 09:01:15.285672 4830 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 31 09:01:15 crc kubenswrapper[4830]: I0131 09:01:15.706389 4830 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.192436 4830 apiserver.go:52] "Watching apiserver" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.198164 4830 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.198507 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.198863 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.198906 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.198979 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.199053 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.199306 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 09:01:16 crc kubenswrapper[4830]: E0131 09:01:16.199273 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:16 crc kubenswrapper[4830]: E0131 09:01:16.199304 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.199501 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:16 crc kubenswrapper[4830]: E0131 09:01:16.199614 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.201286 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.201331 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.201465 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.201665 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.201779 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.201304 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.202552 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.203693 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.206958 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 00:39:53.90159126 +0000 UTC Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.208976 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.227820 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.241245 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.257592 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.275640 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.287877 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.290541 4830 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.300525 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.310840 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.324337 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.336662 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.355528 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.366571 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.379168 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.388714 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.388792 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.388812 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.388827 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.388846 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.388871 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.388893 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.388914 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" 
(UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.388931 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.388947 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.388963 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.388980 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389001 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389020 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389039 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389090 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389110 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389145 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389163 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389178 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389195 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389214 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389248 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389267 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389265 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389288 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389329 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389350 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389371 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389393 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389414 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389626 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389646 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389666 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389684 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389707 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389760 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389786 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389817 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389964 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.389984 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390132 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390144 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390152 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390236 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390401 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390486 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390512 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390521 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390537 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390560 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390582 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390601 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390634 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390653 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390674 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390681 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390695 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390708 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390717 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390781 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390814 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390866 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390910 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390929 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390949 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390967 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: 
\"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390988 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391005 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391024 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391042 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391151 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391183 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391203 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391225 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391249 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391274 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391328 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391365 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391391 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391419 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391447 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391477 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391498 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391535 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391563 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391581 4830 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391600 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391616 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391633 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391649 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391668 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391688 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391704 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391738 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391758 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391776 4830 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391792 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391807 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391823 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391838 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391857 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391873 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391898 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391913 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391927 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391987 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392006 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392023 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392042 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392059 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392077 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392098 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392118 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392138 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392154 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 
31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392169 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392187 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392204 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392221 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392236 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392254 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392272 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392288 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392305 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392321 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod 
\"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392337 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392352 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392368 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392384 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392400 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392416 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392447 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392467 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392485 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392503 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" 
(UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392520 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392536 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392552 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392570 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392587 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392603 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392620 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392638 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392656 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392673 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392690 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392710 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392749 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392769 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392795 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392814 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392830 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392854 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392872 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392889 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392906 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392923 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392940 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392958 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392973 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392989 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393007 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393023 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393040 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393057 4830 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393074 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393093 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393112 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393128 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393146 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393163 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393179 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393195 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393212 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 09:01:16 crc 
kubenswrapper[4830]: I0131 09:01:16.393235 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393251 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393269 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393286 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393304 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393321 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393338 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393360 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393378 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393395 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 09:01:16 crc 
kubenswrapper[4830]: I0131 09:01:16.393413 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393431 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393458 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393477 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393494 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393511 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393529 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393547 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393565 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393582 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 
09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393598 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393615 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393635 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393652 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393670 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393686 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393705 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393934 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393964 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393990 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.394015 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.394034 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.394052 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.394071 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.394091 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.394110 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.394129 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.394180 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: 
\"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.394199 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.394218 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.394236 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.394290 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.394305 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.394316 4830 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.394326 4830 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.394338 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.394349 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.394359 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.394922 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod 
\"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.395509 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.390764 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391031 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391289 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391345 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391592 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.391587 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392042 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392081 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392553 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.392873 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393056 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393286 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393311 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393537 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393595 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393742 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.393970 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.394066 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.394377 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.394636 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.394975 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.395163 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.395196 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.404661 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.395619 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.395873 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.395955 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.396171 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.396320 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.396456 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.396653 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.396744 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.396780 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.397013 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.397565 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.397955 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). 
InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.398034 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.398229 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.398366 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.398510 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.398580 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.398985 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.399048 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.399091 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.399282 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.399359 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.399227 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.399390 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.399488 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.399529 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.399664 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.399830 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.400248 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.400330 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.400346 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.400359 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.400593 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.400606 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.400786 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.400796 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.400886 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.401031 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.401079 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.401213 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.401294 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.401301 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.401414 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.401582 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.401679 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.401689 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.401848 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.402009 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.402041 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.402118 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.402234 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.402252 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.402491 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.402645 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.402669 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.402696 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.403244 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.403408 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.403461 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.403494 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.403598 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.403671 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: E0131 09:01:16.403802 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:01:16.90377311 +0000 UTC m=+21.397135742 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.403949 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.404030 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.404228 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.404244 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.404289 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.404346 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.405311 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.405423 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.405719 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.405919 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.405954 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.406154 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.406224 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.406883 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.407288 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.407858 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.407947 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.408098 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.408162 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.408354 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.408417 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.408564 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.408937 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.409064 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.409074 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.409485 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.409415 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.409923 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.409949 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.410024 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.410794 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.410991 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.411369 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.411967 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.412410 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.412510 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: E0131 09:01:16.412513 4830 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:16 crc kubenswrapper[4830]: E0131 09:01:16.412631 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:16.912604038 +0000 UTC m=+21.405966480 (durationBeforeRetry 500ms). 
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.412912 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.413677 4830 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.414075 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: E0131 09:01:16.414097 4830 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 31 09:01:16 crc kubenswrapper[4830]: E0131 09:01:16.414193 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:16.914166722 +0000 UTC m=+21.407529374 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.414620 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.414838 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.414840 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.414864 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.415075 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.415568 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.416071 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.416454 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.416863 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.417053 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.417065 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.417304 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.417191 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.417324 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.417549 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.418058 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.418330 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.418672 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.418748 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.418871 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.418950 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.416781 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.419562 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.419745 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.419955 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.420045 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.420274 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.420500 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.421282 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.421406 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.422082 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.422486 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.422635 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.423021 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.423133 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.423658 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.423962 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.424196 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.424231 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.424399 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.425194 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.425248 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.425461 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.425963 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.426070 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.426221 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.426370 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.426694 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.426782 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.427042 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.428199 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.428480 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.430745 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.434449 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 09:01:16 crc kubenswrapper[4830]: E0131 09:01:16.434491 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:16 crc kubenswrapper[4830]: E0131 09:01:16.434514 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:16 crc kubenswrapper[4830]: E0131 09:01:16.434528 4830 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.434661 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: E0131 09:01:16.434686 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:16.934665818 +0000 UTC m=+21.428028260 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.437103 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 09:01:16 crc kubenswrapper[4830]: E0131 09:01:16.439018 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:16 crc kubenswrapper[4830]: E0131 09:01:16.439054 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:16 crc kubenswrapper[4830]: E0131 09:01:16.439073 4830 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:16 crc kubenswrapper[4830]: E0131 09:01:16.439143 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:16.939118604 +0000 UTC m=+21.432481046 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.441089 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.441215 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
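The kube-api-access-* volumes failing here are projected volumes: each bundles the pod's service-account token with the kube-root-ca.crt ConfigMap (plus openshift-service-ca.crt on OpenShift), and every source must resolve before SetUp can succeed, so a single unregistered ConfigMap fails the whole mount for network-check-source and network-check-target. A simplified all-or-nothing projection in Go (object names taken from the log; the logic is illustrative, not kubelet source):

    package main

    import "fmt"

    func main() {
        // Objects the kubelet can currently resolve; both are still
        // missing while the node catches up after the restart.
        registered := map[string]bool{}

        // Sources of one projected kube-api-access volume.
        sources := []string{
            `object "openshift-network-diagnostics"/"kube-root-ca.crt"`,
            `object "openshift-network-diagnostics"/"openshift-service-ca.crt"`,
        }

        var errs []string
        for _, s := range sources {
            if !registered[s] {
                errs = append(errs, s+" not registered")
            }
        }
        if len(errs) > 0 {
            // One missing source fails the entire projected volume.
            fmt.Printf("MountVolume.SetUp failed for volume %q: %v\n",
                "kube-api-access-cqllr", errs)
        }
    }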
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.442267 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.443381 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.444375 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.448081 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.450426 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.452372 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.452754 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.453138 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.454869 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.460994 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.471200 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.475269 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.495658 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.495714 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.495797 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.495812 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.495826 4830 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.495840 4830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.495850 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.495859 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.495868 4830 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.495877 4830 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.495885 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.495894 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node 
\"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.495904 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.495902 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.495964 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.495913 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496005 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496033 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496044 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496054 4830 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496064 4830 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496077 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496088 4830 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496115 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" 
Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496126 4830 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496136 4830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496146 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496156 4830 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496166 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496174 4830 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496189 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496214 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496223 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496233 4830 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496243 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496254 4830 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496264 4830 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496274 
4830 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496284 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496294 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496305 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496315 4830 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496324 4830 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496360 4830 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496369 4830 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496379 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496390 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496403 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496414 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496425 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496440 4830 reconciler_common.go:293] 
"Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496451 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496463 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496473 4830 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496485 4830 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496495 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496508 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496521 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496533 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496544 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496575 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496596 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496608 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496626 4830 reconciler_common.go:293] "Volume detached for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496638 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496650 4830 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496664 4830 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496675 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496685 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496696 4830 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496707 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496718 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496765 4830 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496777 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496789 4830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496800 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496812 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496823 4830 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496835 4830 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496845 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496856 4830 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496866 4830 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496876 4830 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496885 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496894 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496905 4830 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496917 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496930 4830 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496963 4830 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496975 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496985 4830 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.496995 4830 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497006 4830 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497018 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497029 4830 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497042 4830 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497053 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497068 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497080 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497092 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497104 4830 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497117 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497129 4830 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497140 4830 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497151 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497163 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497175 4830 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497185 4830 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497196 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497207 4830 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497218 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497230 4830 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497243 4830 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497260 4830 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497272 4830 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497284 4830 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497296 4830 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497308 4830 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497319 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497330 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497342 4830 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497354 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497365 4830 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497377 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497389 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497400 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497409 4830 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497419 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497430 
4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497441 4830 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497453 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497464 4830 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497475 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497486 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497496 4830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497512 4830 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497522 4830 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497536 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497547 4830 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497559 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497573 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497584 4830 reconciler_common.go:293] "Volume 
detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497595 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497605 4830 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497617 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497630 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497657 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497666 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497676 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497685 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497694 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497704 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497714 4830 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497739 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497747 4830 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497757 4830 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497766 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497811 4830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497820 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497829 4830 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497838 4830 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497847 4830 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497859 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497867 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497877 4830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497887 4830 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497896 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497905 4830 reconciler_common.go:293] "Volume detached 
for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497914 4830 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497924 4830 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497933 4830 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497942 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497951 4830 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497960 4830 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497970 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497981 4830 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.497991 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.498000 4830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.498009 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.498018 4830 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.498027 4830 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" 
(UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.498036 4830 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.498045 4830 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.498053 4830 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.498062 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.498071 4830 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.498080 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.498088 4830 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.498097 4830 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.498105 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.498114 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.513282 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.523974 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 09:01:16 crc kubenswrapper[4830]: I0131 09:01:16.528542 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 09:01:16 crc kubenswrapper[4830]: W0131 09:01:16.550404 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-54fb4dabe05fd21354cc4fd45044bf7de5df3e30350cbeafc73b9c4edac61377 WatchSource:0}: Error finding container 54fb4dabe05fd21354cc4fd45044bf7de5df3e30350cbeafc73b9c4edac61377: Status 404 returned error can't find the container with id 54fb4dabe05fd21354cc4fd45044bf7de5df3e30350cbeafc73b9c4edac61377 Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.001813 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:17 crc kubenswrapper[4830]: E0131 09:01:17.002033 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:01:18.001997375 +0000 UTC m=+22.495359817 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.002301 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.002335 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.002360 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.002390 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:17 crc kubenswrapper[4830]: E0131 09:01:17.002416 4830 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:17 crc kubenswrapper[4830]: E0131 09:01:17.002579 4830 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:01:17 crc kubenswrapper[4830]: E0131 09:01:17.002525 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:17 crc kubenswrapper[4830]: E0131 09:01:17.002615 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:18.002607122 +0000 UTC m=+22.495969564 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:17 crc kubenswrapper[4830]: E0131 09:01:17.002692 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:17 crc kubenswrapper[4830]: E0131 09:01:17.002710 4830 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:17 crc kubenswrapper[4830]: E0131 09:01:17.002546 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:17 crc kubenswrapper[4830]: E0131 09:01:17.002717 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:18.002698315 +0000 UTC m=+22.496060747 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:01:17 crc kubenswrapper[4830]: E0131 09:01:17.002784 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:17 crc kubenswrapper[4830]: E0131 09:01:17.002792 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:18.002773187 +0000 UTC m=+22.496135809 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:17 crc kubenswrapper[4830]: E0131 09:01:17.002806 4830 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:17 crc kubenswrapper[4830]: E0131 09:01:17.002879 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:18.002849859 +0000 UTC m=+22.496212491 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.207608 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 04:39:24.650545407 +0000 UTC Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.361563 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea"} Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.361622 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02"} Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.361635 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8d5ded97bd8193ef6d07e59458af1d04647481209b5aa1bc317ff66db6148f72"} Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.363762 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5"} Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.363793 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"54fb4dabe05fd21354cc4fd45044bf7de5df3e30350cbeafc73b9c4edac61377"} Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.365297 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.365889 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.367677 4830 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9" exitCode=255 Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.367742 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9"} Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.367782 4830 scope.go:117] "RemoveContainer" 
containerID="623a3927e0057e80fd78da0a015ae87c5b9eb95715ca1a6d40af90257e77a0e2" Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.368819 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7c0f28354b15b9871ad3c960c6fbcd03815f0fd1ede563e7267a90ee5e3897dc"} Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.380691 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.380884 4830 scope.go:117] "RemoveContainer" containerID="8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9" Jan 31 09:01:17 crc kubenswrapper[4830]: E0131 09:01:17.381127 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.383134 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.396247 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.410206 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.422990 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.441221 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.455236 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.469401 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.481534 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.494466 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://623a3927e0057e80fd78da0a015ae87c5b9eb95715ca1a6d40af90257e77a0e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:10Z\\\",\\\"message\\\":\\\"W0131 09:00:59.398779 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 09:00:59.399161 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769850059 cert, and key in /tmp/serving-cert-4054365141/serving-signer.crt, /tmp/serving-cert-4054365141/serving-signer.key\\\\nI0131 09:00:59.732801 1 observer_polling.go:159] Starting file observer\\\\nW0131 09:00:59.735830 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 09:00:59.736066 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:00:59.738624 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4054365141/tls.crt::/tmp/serving-cert-4054365141/tls.key\\\\\\\"\\\\nF0131 09:01:10.168020 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 
builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.505643 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.518447 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.531037 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.543279 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:17 crc kubenswrapper[4830]: I0131 09:01:17.780548 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.011792 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.011874 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.011899 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.011923 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.011942 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:18 crc kubenswrapper[4830]: E0131 09:01:18.012072 4830 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:18 crc kubenswrapper[4830]: E0131 09:01:18.012077 4830 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:01:18 crc kubenswrapper[4830]: E0131 09:01:18.012108 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:01:20.012072286 +0000 UTC m=+24.505434738 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:01:18 crc kubenswrapper[4830]: E0131 09:01:18.012134 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:18 crc kubenswrapper[4830]: E0131 09:01:18.012160 4830 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:18 crc kubenswrapper[4830]: E0131 09:01:18.012174 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:18 crc kubenswrapper[4830]: E0131 09:01:18.012180 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:20.012159038 +0000 UTC m=+24.505521470 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:01:18 crc kubenswrapper[4830]: E0131 09:01:18.012191 4830 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:18 crc kubenswrapper[4830]: E0131 09:01:18.012223 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:20.01220799 +0000 UTC m=+24.505570432 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:18 crc kubenswrapper[4830]: E0131 09:01:18.012094 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:18 crc kubenswrapper[4830]: E0131 09:01:18.012244 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:20.01223292 +0000 UTC m=+24.505595582 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:18 crc kubenswrapper[4830]: E0131 09:01:18.012256 4830 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:18 crc kubenswrapper[4830]: E0131 09:01:18.012314 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:20.012301632 +0000 UTC m=+24.505664284 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.208547 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 09:19:32.857092044 +0000 UTC Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.251362 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.251409 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.251362 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:18 crc kubenswrapper[4830]: E0131 09:01:18.251520 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:18 crc kubenswrapper[4830]: E0131 09:01:18.251589 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:18 crc kubenswrapper[4830]: E0131 09:01:18.251663 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.255228 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.255983 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.257246 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.257997 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.259450 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.260203 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.260844 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.261843 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 31 09:01:18 crc 
kubenswrapper[4830]: I0131 09:01:18.262423 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.263371 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.263887 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.265029 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.265532 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.266044 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.267001 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.267490 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.268404 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.268852 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.269413 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.270348 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.270887 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.272158 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.272680 4830 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.274003 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.274500 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.275350 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.276655 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.277241 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.278240 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.278946 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.279935 4830 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.280075 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.282029 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.283039 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.283445 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.285556 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.286186 4830 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.287125 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.287715 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.288916 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.289404 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.290524 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.291151 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.292296 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.292762 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.293830 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.294354 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.295522 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.296054 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.296844 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.297313 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.298404 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.299109 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.299832 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.373611 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.376024 4830 scope.go:117] "RemoveContainer" containerID="8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9" Jan 31 09:01:18 crc kubenswrapper[4830]: E0131 09:01:18.376192 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.388804 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:18Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.399848 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.404551 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:18Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.405165 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.413714 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.420706 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:18Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.436176 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:18Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.452177 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:18Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.484453 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:18Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.500274 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:18Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.514949 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:18Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.534345 4830 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:18Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.553667 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:18Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.568957 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:18Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.583184 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:18Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.594962 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:18Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.607973 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:18Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:18 crc kubenswrapper[4830]: I0131 09:01:18.622554 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:18Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:19 crc kubenswrapper[4830]: I0131 09:01:19.209244 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 04:33:30.890221453 +0000 UTC Jan 31 09:01:19 crc kubenswrapper[4830]: I0131 09:01:19.379607 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8"} Jan 31 09:01:19 crc kubenswrapper[4830]: I0131 09:01:19.380275 4830 scope.go:117] "RemoveContainer" containerID="8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9" Jan 31 09:01:19 crc kubenswrapper[4830]: E0131 09:01:19.380444 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 31 09:01:19 crc kubenswrapper[4830]: E0131 09:01:19.387533 4830 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 09:01:19 crc kubenswrapper[4830]: I0131 09:01:19.398118 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:19Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:19 crc kubenswrapper[4830]: I0131 09:01:19.414173 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:19Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:19 crc kubenswrapper[4830]: I0131 09:01:19.431475 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:19Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:19 crc kubenswrapper[4830]: I0131 09:01:19.449613 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:19Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:19 crc kubenswrapper[4830]: I0131 09:01:19.465581 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:19Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:19 crc kubenswrapper[4830]: I0131 09:01:19.480997 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:19Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:19 crc kubenswrapper[4830]: I0131 09:01:19.496087 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:19Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:19 crc kubenswrapper[4830]: I0131 09:01:19.510019 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:19Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:20 crc kubenswrapper[4830]: I0131 09:01:20.029350 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:20 crc kubenswrapper[4830]: I0131 09:01:20.029462 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:20 crc kubenswrapper[4830]: E0131 09:01:20.029476 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:01:24.029451929 +0000 UTC m=+28.522814381 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:01:20 crc kubenswrapper[4830]: I0131 09:01:20.029499 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:20 crc kubenswrapper[4830]: I0131 09:01:20.029534 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:20 crc kubenswrapper[4830]: I0131 09:01:20.029562 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:20 crc kubenswrapper[4830]: E0131 09:01:20.029661 4830 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:20 crc kubenswrapper[4830]: E0131 09:01:20.029674 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:20 crc kubenswrapper[4830]: E0131 09:01:20.029676 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:20 crc kubenswrapper[4830]: E0131 09:01:20.029707 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:24.029696916 +0000 UTC m=+28.523059358 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:20 crc kubenswrapper[4830]: E0131 09:01:20.029710 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:20 crc kubenswrapper[4830]: E0131 09:01:20.029749 4830 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:20 crc kubenswrapper[4830]: E0131 09:01:20.029783 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:24.029772918 +0000 UTC m=+28.523135370 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:20 crc kubenswrapper[4830]: E0131 09:01:20.029692 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:20 crc kubenswrapper[4830]: E0131 09:01:20.029801 4830 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:20 crc kubenswrapper[4830]: E0131 09:01:20.029838 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:24.029815639 +0000 UTC m=+28.523178081 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:20 crc kubenswrapper[4830]: E0131 09:01:20.029860 4830 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:01:20 crc kubenswrapper[4830]: E0131 09:01:20.030023 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:24.029989314 +0000 UTC m=+28.523351796 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:01:20 crc kubenswrapper[4830]: I0131 09:01:20.210167 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 13:28:44.555224006 +0000 UTC Jan 31 09:01:20 crc kubenswrapper[4830]: I0131 09:01:20.251848 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:20 crc kubenswrapper[4830]: I0131 09:01:20.251895 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:20 crc kubenswrapper[4830]: I0131 09:01:20.251932 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:20 crc kubenswrapper[4830]: E0131 09:01:20.252147 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:20 crc kubenswrapper[4830]: E0131 09:01:20.253085 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:20 crc kubenswrapper[4830]: E0131 09:01:20.253134 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.211189 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 05:01:07.699243161 +0000 UTC Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.676803 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.683099 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.683163 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.683178 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.683271 4830 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.700140 4830 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.700461 4830 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.701671 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.701772 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.701793 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.701818 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.701834 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:21Z","lastTransitionTime":"2026-01-31T09:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:21 crc kubenswrapper[4830]: E0131 09:01:21.734834 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:21Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.739752 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.739804 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.739825 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.739848 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.739861 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:21Z","lastTransitionTime":"2026-01-31T09:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:21 crc kubenswrapper[4830]: E0131 09:01:21.753174 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:21Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.758099 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.758140 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.758149 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.758169 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.758180 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:21Z","lastTransitionTime":"2026-01-31T09:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:21 crc kubenswrapper[4830]: E0131 09:01:21.773751 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:21Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.777897 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.777938 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
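Every retry in this run dies at the same admission webhook, so the huge status payload is beside the point: the kubelet cannot patch its own Node object while node.network-node-identity.openshift.io at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24, months before the node's clock reads 2026-01-31. A minimal Go sketch of confirming the expiry from the node itself (a hypothetical probe, not kubelet code; InsecureSkipVerify is deliberate so the handshake completes and the expired peer certificate can be inspected):

    // certprobe.go: hypothetical probe for the webhook's serving certificate.
    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        // Skip chain verification on purpose: we want to read the expired
        // certificate, not trust it.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatalf("handshake with webhook failed: %v", err)
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Printf("subject:   %s\n", cert.Subject)
        fmt.Printf("notBefore: %s\n", cert.NotBefore.UTC())
        fmt.Printf("notAfter:  %s\n", cert.NotAfter.UTC())
        if time.Now().After(cert.NotAfter) {
            fmt.Println("certificate is expired, matching the x509 error in the kubelet log")
        }
    }

Run against this node it should report a notAfter of 2025-08-24 17:21:41 UTC, the same instant quoted in the x509 error above.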
event="NodeHasNoDiskPressure" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.777947 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.777962 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.777971 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:21Z","lastTransitionTime":"2026-01-31T09:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:21 crc kubenswrapper[4830]: E0131 09:01:21.790950 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:21Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.796980 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.797029 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
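The NodeNotReady churn between retries has a separate, self-described cause: there is no CNI configuration file in /etc/kubernetes/cni/net.d/, so the runtime reports NetworkReady=false until the network plugin (OVN-Kubernetes, judging by the ovnkube-identity references later in this log) writes one. A small sketch, assuming read access to that directory on the node (hypothetical helper, not part of the kubelet; the extensions are the ones libcni scans for):

    // cnicheck.go: hypothetical check for the CNI config the kubelet is
    // waiting on, using the directory named in the log message.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d" // path taken from the log message
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Printf("cannot read %s: %v\n", dir, err)
            return
        }
        found := false
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("found CNI config:", filepath.Join(dir, e.Name()))
                found = true
            }
        }
        if !found {
            fmt.Println("no CNI configuration file yet; NetworkReady stays false")
        }
    }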
event="NodeHasNoDiskPressure" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.797041 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.797064 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.797078 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:21Z","lastTransitionTime":"2026-01-31T09:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:21 crc kubenswrapper[4830]: E0131 09:01:21.810074 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:21Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:21 crc kubenswrapper[4830]: E0131 09:01:21.810251 4830 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.812363 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.812414 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.812424 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.812441 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.812454 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:21Z","lastTransitionTime":"2026-01-31T09:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.915328 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.915363 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.915374 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.915389 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:21 crc kubenswrapper[4830]: I0131 09:01:21.915398 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:21Z","lastTransitionTime":"2026-01-31T09:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.018064 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.018109 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.018122 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.018138 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.018149 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:22Z","lastTransitionTime":"2026-01-31T09:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.080707 4830 csr.go:261] certificate signing request csr-qghn8 is approved, waiting to be issued Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.120498 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.120534 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.120546 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.120563 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.120573 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:22Z","lastTransitionTime":"2026-01-31T09:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.134370 4830 csr.go:257] certificate signing request csr-qghn8 is issued Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.203315 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-pmbpr"] Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.203804 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-pmbpr" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.212116 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 03:04:51.747076316 +0000 UTC Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.213002 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.213469 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.214450 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.222637 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.222679 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.222688 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.222705 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.222715 4830 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:22Z","lastTransitionTime":"2026-01-31T09:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.230236 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.246229 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.248538 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ca325f50-edf0-4f3d-ab92-17f40a73d274-hosts-file\") pod \"node-resolver-pmbpr\" (UID: \"ca325f50-edf0-4f3d-ab92-17f40a73d274\") " pod="openshift-dns/node-resolver-pmbpr" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.248567 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p56d\" (UniqueName: \"kubernetes.io/projected/ca325f50-edf0-4f3d-ab92-17f40a73d274-kube-api-access-7p56d\") pod \"node-resolver-pmbpr\" (UID: \"ca325f50-edf0-4f3d-ab92-17f40a73d274\") " pod="openshift-dns/node-resolver-pmbpr" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.251355 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.251377 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.251470 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:22 crc kubenswrapper[4830]: E0131 09:01:22.251581 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:22 crc kubenswrapper[4830]: E0131 09:01:22.251694 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:22 crc kubenswrapper[4830]: E0131 09:01:22.251841 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.265596 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.280357 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.296087 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.322117 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.325102 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.325132 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.325142 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.325158 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.325168 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:22Z","lastTransitionTime":"2026-01-31T09:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.348963 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ca325f50-edf0-4f3d-ab92-17f40a73d274-hosts-file\") pod \"node-resolver-pmbpr\" (UID: \"ca325f50-edf0-4f3d-ab92-17f40a73d274\") " pod="openshift-dns/node-resolver-pmbpr" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.349009 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p56d\" (UniqueName: \"kubernetes.io/projected/ca325f50-edf0-4f3d-ab92-17f40a73d274-kube-api-access-7p56d\") pod \"node-resolver-pmbpr\" (UID: \"ca325f50-edf0-4f3d-ab92-17f40a73d274\") " pod="openshift-dns/node-resolver-pmbpr" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.349066 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ca325f50-edf0-4f3d-ab92-17f40a73d274-hosts-file\") pod \"node-resolver-pmbpr\" (UID: \"ca325f50-edf0-4f3d-ab92-17f40a73d274\") " pod="openshift-dns/node-resolver-pmbpr" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.360463 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.369533 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p56d\" (UniqueName: \"kubernetes.io/projected/ca325f50-edf0-4f3d-ab92-17f40a73d274-kube-api-access-7p56d\") pod \"node-resolver-pmbpr\" (UID: \"ca325f50-edf0-4f3d-ab92-17f40a73d274\") " pod="openshift-dns/node-resolver-pmbpr" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.385686 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.427889 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.427929 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.427939 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.427954 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.427965 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:22Z","lastTransitionTime":"2026-01-31T09:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.440077 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.517823 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-pmbpr" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.543383 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.543425 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.543439 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.543458 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.543476 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:22Z","lastTransitionTime":"2026-01-31T09:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.646348 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.646807 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.646821 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.646840 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.646853 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:22Z","lastTransitionTime":"2026-01-31T09:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.749302 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.749348 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.749357 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.749375 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.749386 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:22Z","lastTransitionTime":"2026-01-31T09:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.852321 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.852369 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.852380 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.852401 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.852415 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:22Z","lastTransitionTime":"2026-01-31T09:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.954405 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.954452 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.954466 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.954484 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:22 crc kubenswrapper[4830]: I0131 09:01:22.954496 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:22Z","lastTransitionTime":"2026-01-31T09:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.057198 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.057237 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.057247 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.057263 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.057272 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:23Z","lastTransitionTime":"2026-01-31T09:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.116446 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-gt7kd"] Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.116990 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.117412 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-cjqbn"] Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.117879 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.119022 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-x27jw"] Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.119872 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-x27jw" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.120596 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.120686 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.120801 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.120817 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.121175 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.121177 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.121287 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.121950 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.122322 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.122565 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.122705 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.122832 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.135522 4830 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-31 08:56:22 +0000 UTC, rotation deadline is 2026-11-29 05:44:20.873512975 +0000 UTC Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.135594 4830 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7244h42m57.737922531s for next certificate rotation Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.146418 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.154404 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-multus-socket-dir-parent\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.154461 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b7e133cc-19e8-4770-9146-88dac53a6531-multus-daemon-config\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.154490 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/227117cb-01d3-4e44-9da3-b1d577fb3ee2-cnibin\") pod \"multus-additional-cni-plugins-x27jw\" (UID: \"227117cb-01d3-4e44-9da3-b1d577fb3ee2\") " pod="openshift-multus/multus-additional-cni-plugins-x27jw" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.154512 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/227117cb-01d3-4e44-9da3-b1d577fb3ee2-cni-binary-copy\") pod \"multus-additional-cni-plugins-x27jw\" (UID: \"227117cb-01d3-4e44-9da3-b1d577fb3ee2\") " pod="openshift-multus/multus-additional-cni-plugins-x27jw" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.154536 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/227117cb-01d3-4e44-9da3-b1d577fb3ee2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x27jw\" (UID: \"227117cb-01d3-4e44-9da3-b1d577fb3ee2\") " pod="openshift-multus/multus-additional-cni-plugins-x27jw" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.154576 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-cnibin\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.154603 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-host-run-netns\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.154655 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-host-var-lib-kubelet\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.154677 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-etc-kubernetes\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.154697 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wndtp\" (UniqueName: \"kubernetes.io/projected/227117cb-01d3-4e44-9da3-b1d577fb3ee2-kube-api-access-wndtp\") pod \"multus-additional-cni-plugins-x27jw\" (UID: \"227117cb-01d3-4e44-9da3-b1d577fb3ee2\") " pod="openshift-multus/multus-additional-cni-plugins-x27jw" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.154793 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqp59\" (UniqueName: \"kubernetes.io/projected/158dbfda-9b0a-4809-9946-3c6ee2d082dc-kube-api-access-vqp59\") pod \"machine-config-daemon-gt7kd\" (UID: \"158dbfda-9b0a-4809-9946-3c6ee2d082dc\") " pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.154815 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-host-run-multus-certs\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.154832 4830 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-multus-cni-dir\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.154847 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b7e133cc-19e8-4770-9146-88dac53a6531-cni-binary-copy\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.154882 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/227117cb-01d3-4e44-9da3-b1d577fb3ee2-system-cni-dir\") pod \"multus-additional-cni-plugins-x27jw\" (UID: \"227117cb-01d3-4e44-9da3-b1d577fb3ee2\") " pod="openshift-multus/multus-additional-cni-plugins-x27jw" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.154932 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/158dbfda-9b0a-4809-9946-3c6ee2d082dc-rootfs\") pod \"machine-config-daemon-gt7kd\" (UID: \"158dbfda-9b0a-4809-9946-3c6ee2d082dc\") " pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.154961 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-host-var-lib-cni-bin\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.155062 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/227117cb-01d3-4e44-9da3-b1d577fb3ee2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x27jw\" (UID: \"227117cb-01d3-4e44-9da3-b1d577fb3ee2\") " pod="openshift-multus/multus-additional-cni-plugins-x27jw" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.155104 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-hostroot\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.155143 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/158dbfda-9b0a-4809-9946-3c6ee2d082dc-proxy-tls\") pod \"machine-config-daemon-gt7kd\" (UID: \"158dbfda-9b0a-4809-9946-3c6ee2d082dc\") " pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.155205 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/227117cb-01d3-4e44-9da3-b1d577fb3ee2-os-release\") pod \"multus-additional-cni-plugins-x27jw\" (UID: \"227117cb-01d3-4e44-9da3-b1d577fb3ee2\") " 
pod="openshift-multus/multus-additional-cni-plugins-x27jw" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.155250 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-os-release\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.155283 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-multus-conf-dir\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.155320 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-host-var-lib-cni-multus\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.155348 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msp6r\" (UniqueName: \"kubernetes.io/projected/b7e133cc-19e8-4770-9146-88dac53a6531-kube-api-access-msp6r\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.155377 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/158dbfda-9b0a-4809-9946-3c6ee2d082dc-mcd-auth-proxy-config\") pod \"machine-config-daemon-gt7kd\" (UID: \"158dbfda-9b0a-4809-9946-3c6ee2d082dc\") " pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.155396 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-system-cni-dir\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.155417 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-host-run-k8s-cni-cncf-io\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.160555 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.160596 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.160608 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.160628 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:23 crc 
kubenswrapper[4830]: I0131 09:01:23.160641 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:23Z","lastTransitionTime":"2026-01-31T09:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.161750 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.179963 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.196979 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.213027 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 15:53:10.570254744 +0000 UTC Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.221659 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.241511 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.255717 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-multus-conf-dir\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.255783 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/158dbfda-9b0a-4809-9946-3c6ee2d082dc-mcd-auth-proxy-config\") pod \"machine-config-daemon-gt7kd\" (UID: \"158dbfda-9b0a-4809-9946-3c6ee2d082dc\") " pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.255809 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-system-cni-dir\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.255830 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-host-run-k8s-cni-cncf-io\") pod \"multus-cjqbn\" (UID: 
\"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.255848 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-host-var-lib-cni-multus\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.255864 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msp6r\" (UniqueName: \"kubernetes.io/projected/b7e133cc-19e8-4770-9146-88dac53a6531-kube-api-access-msp6r\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.255887 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-multus-socket-dir-parent\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.255909 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b7e133cc-19e8-4770-9146-88dac53a6531-multus-daemon-config\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.255930 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/227117cb-01d3-4e44-9da3-b1d577fb3ee2-cnibin\") pod \"multus-additional-cni-plugins-x27jw\" (UID: \"227117cb-01d3-4e44-9da3-b1d577fb3ee2\") " pod="openshift-multus/multus-additional-cni-plugins-x27jw" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.255945 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/227117cb-01d3-4e44-9da3-b1d577fb3ee2-cni-binary-copy\") pod \"multus-additional-cni-plugins-x27jw\" (UID: \"227117cb-01d3-4e44-9da3-b1d577fb3ee2\") " pod="openshift-multus/multus-additional-cni-plugins-x27jw" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.255971 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/227117cb-01d3-4e44-9da3-b1d577fb3ee2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x27jw\" (UID: \"227117cb-01d3-4e44-9da3-b1d577fb3ee2\") " pod="openshift-multus/multus-additional-cni-plugins-x27jw" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.255992 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-cnibin\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.256011 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-host-run-netns\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc 
kubenswrapper[4830]: I0131 09:01:23.256032 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-host-var-lib-kubelet\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.256057 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqp59\" (UniqueName: \"kubernetes.io/projected/158dbfda-9b0a-4809-9946-3c6ee2d082dc-kube-api-access-vqp59\") pod \"machine-config-daemon-gt7kd\" (UID: \"158dbfda-9b0a-4809-9946-3c6ee2d082dc\") " pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.256074 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-host-run-multus-certs\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.256097 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-etc-kubernetes\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.256318 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wndtp\" (UniqueName: \"kubernetes.io/projected/227117cb-01d3-4e44-9da3-b1d577fb3ee2-kube-api-access-wndtp\") pod \"multus-additional-cni-plugins-x27jw\" (UID: \"227117cb-01d3-4e44-9da3-b1d577fb3ee2\") " pod="openshift-multus/multus-additional-cni-plugins-x27jw" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.256538 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/158dbfda-9b0a-4809-9946-3c6ee2d082dc-rootfs\") pod \"machine-config-daemon-gt7kd\" (UID: \"158dbfda-9b0a-4809-9946-3c6ee2d082dc\") " pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.256554 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-multus-cni-dir\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.256576 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b7e133cc-19e8-4770-9146-88dac53a6531-cni-binary-copy\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.262260 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/227117cb-01d3-4e44-9da3-b1d577fb3ee2-system-cni-dir\") pod \"multus-additional-cni-plugins-x27jw\" (UID: \"227117cb-01d3-4e44-9da3-b1d577fb3ee2\") " pod="openshift-multus/multus-additional-cni-plugins-x27jw" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.262315 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-multus-conf-dir\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.262357 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-host-var-lib-cni-bin\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.262380 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/227117cb-01d3-4e44-9da3-b1d577fb3ee2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x27jw\" (UID: \"227117cb-01d3-4e44-9da3-b1d577fb3ee2\") " pod="openshift-multus/multus-additional-cni-plugins-x27jw" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.262403 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-hostroot\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.262424 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/158dbfda-9b0a-4809-9946-3c6ee2d082dc-proxy-tls\") pod \"machine-config-daemon-gt7kd\" (UID: \"158dbfda-9b0a-4809-9946-3c6ee2d082dc\") " pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.262444 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/227117cb-01d3-4e44-9da3-b1d577fb3ee2-os-release\") pod \"multus-additional-cni-plugins-x27jw\" (UID: \"227117cb-01d3-4e44-9da3-b1d577fb3ee2\") " pod="openshift-multus/multus-additional-cni-plugins-x27jw" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.262466 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-os-release\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.262560 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-os-release\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.263192 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b7e133cc-19e8-4770-9146-88dac53a6531-cni-binary-copy\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.263226 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/158dbfda-9b0a-4809-9946-3c6ee2d082dc-mcd-auth-proxy-config\") pod \"machine-config-daemon-gt7kd\" (UID: \"158dbfda-9b0a-4809-9946-3c6ee2d082dc\") " pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.263243 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/227117cb-01d3-4e44-9da3-b1d577fb3ee2-system-cni-dir\") pod \"multus-additional-cni-plugins-x27jw\" (UID: \"227117cb-01d3-4e44-9da3-b1d577fb3ee2\") " pod="openshift-multus/multus-additional-cni-plugins-x27jw" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.263272 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-host-var-lib-cni-bin\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.263302 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-system-cni-dir\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.263345 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-host-run-k8s-cni-cncf-io\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.263384 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-host-var-lib-cni-multus\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.263718 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/227117cb-01d3-4e44-9da3-b1d577fb3ee2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x27jw\" (UID: \"227117cb-01d3-4e44-9da3-b1d577fb3ee2\") " pod="openshift-multus/multus-additional-cni-plugins-x27jw" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.263783 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-hostroot\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.263907 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-multus-socket-dir-parent\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.264317 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/227117cb-01d3-4e44-9da3-b1d577fb3ee2-cnibin\") pod \"multus-additional-cni-plugins-x27jw\" (UID: 
\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\") " pod="openshift-multus/multus-additional-cni-plugins-x27jw" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.264462 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b7e133cc-19e8-4770-9146-88dac53a6531-multus-daemon-config\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.264533 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-host-run-multus-certs\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.264584 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-etc-kubernetes\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.264587 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-host-var-lib-kubelet\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.264764 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/158dbfda-9b0a-4809-9946-3c6ee2d082dc-rootfs\") pod \"machine-config-daemon-gt7kd\" (UID: \"158dbfda-9b0a-4809-9946-3c6ee2d082dc\") " pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.264831 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/227117cb-01d3-4e44-9da3-b1d577fb3ee2-cni-binary-copy\") pod \"multus-additional-cni-plugins-x27jw\" (UID: \"227117cb-01d3-4e44-9da3-b1d577fb3ee2\") " pod="openshift-multus/multus-additional-cni-plugins-x27jw" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.265044 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/227117cb-01d3-4e44-9da3-b1d577fb3ee2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x27jw\" (UID: \"227117cb-01d3-4e44-9da3-b1d577fb3ee2\") " pod="openshift-multus/multus-additional-cni-plugins-x27jw" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.265079 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/227117cb-01d3-4e44-9da3-b1d577fb3ee2-os-release\") pod \"multus-additional-cni-plugins-x27jw\" (UID: \"227117cb-01d3-4e44-9da3-b1d577fb3ee2\") " pod="openshift-multus/multus-additional-cni-plugins-x27jw" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.265084 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-cnibin\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 
09:01:23.265220 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-multus-cni-dir\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.265350 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.265469 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b7e133cc-19e8-4770-9146-88dac53a6531-host-run-netns\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.270371 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/158dbfda-9b0a-4809-9946-3c6ee2d082dc-proxy-tls\") pod \"machine-config-daemon-gt7kd\" (UID: \"158dbfda-9b0a-4809-9946-3c6ee2d082dc\") " pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.275437 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.275489 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.275499 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.275519 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.275530 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:23Z","lastTransitionTime":"2026-01-31T09:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.287482 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wndtp\" (UniqueName: \"kubernetes.io/projected/227117cb-01d3-4e44-9da3-b1d577fb3ee2-kube-api-access-wndtp\") pod \"multus-additional-cni-plugins-x27jw\" (UID: \"227117cb-01d3-4e44-9da3-b1d577fb3ee2\") " pod="openshift-multus/multus-additional-cni-plugins-x27jw" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.288359 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.289203 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msp6r\" (UniqueName: \"kubernetes.io/projected/b7e133cc-19e8-4770-9146-88dac53a6531-kube-api-access-msp6r\") pod \"multus-cjqbn\" (UID: \"b7e133cc-19e8-4770-9146-88dac53a6531\") " pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.290148 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqp59\" (UniqueName: \"kubernetes.io/projected/158dbfda-9b0a-4809-9946-3c6ee2d082dc-kube-api-access-vqp59\") pod \"machine-config-daemon-gt7kd\" (UID: \"158dbfda-9b0a-4809-9946-3c6ee2d082dc\") " pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.304641 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.315157 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.327578 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.341862 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.359360 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.372674 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.378355 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.378406 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.378417 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.378437 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.378452 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:23Z","lastTransitionTime":"2026-01-31T09:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.385163 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: 
I0131 09:01:23.393397 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pmbpr" event={"ID":"ca325f50-edf0-4f3d-ab92-17f40a73d274","Type":"ContainerStarted","Data":"1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303"} Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.393468 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pmbpr" event={"ID":"ca325f50-edf0-4f3d-ab92-17f40a73d274","Type":"ContainerStarted","Data":"ee0e054342c3f92f25a46642789e4c0ef57db6ac7547c0343de482d5a48efc0d"} Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.401037 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.415706 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.428396 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.429503 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.437033 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-cjqbn" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.444943 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-x27jw" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.449846 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.461918 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.480955 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.480995 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.481006 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.481025 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.481037 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:23Z","lastTransitionTime":"2026-01-31T09:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.485995 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-r8pc4"] Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.487957 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.489543 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.491952 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.492803 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.492984 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.494712 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.494913 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.495109 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.506192 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.547497 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.565628 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-cni-netd\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 
09:01:23.565670 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-run-ovn\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.565692 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/159b9801-57e3-4cf0-9b81-10aacb5eef83-env-overrides\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.565710 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-run-systemd\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.565778 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/159b9801-57e3-4cf0-9b81-10aacb5eef83-ovnkube-script-lib\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.565805 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.565887 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-etc-openvswitch\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.565924 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-run-ovn-kubernetes\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.565944 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/159b9801-57e3-4cf0-9b81-10aacb5eef83-ovnkube-config\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.565980 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-slash\") pod \"ovnkube-node-r8pc4\" (UID: 
\"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.566010 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-log-socket\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.566030 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-node-log\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.566062 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-kubelet\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.566100 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-cni-bin\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.566118 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-var-lib-openvswitch\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.566149 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-run-openvswitch\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.566166 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/159b9801-57e3-4cf0-9b81-10aacb5eef83-ovn-node-metrics-cert\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.566184 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-systemd-units\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.566214 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nvq5\" (UniqueName: 
\"kubernetes.io/projected/159b9801-57e3-4cf0-9b81-10aacb5eef83-kube-api-access-8nvq5\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.566232 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-run-netns\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.573883 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.585252 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.585295 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.585304 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.585320 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.585330 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:23Z","lastTransitionTime":"2026-01-31T09:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.611446 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.636924 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.651861 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.667942 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/159b9801-57e3-4cf0-9b81-10aacb5eef83-ovn-node-metrics-cert\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.667997 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-systemd-units\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.668022 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nvq5\" (UniqueName: \"kubernetes.io/projected/159b9801-57e3-4cf0-9b81-10aacb5eef83-kube-api-access-8nvq5\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.668050 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-run-netns\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.668076 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-cni-netd\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: 
I0131 09:01:23.668100 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-run-ovn\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.668081 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.668129 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/159b9801-57e3-4cf0-9b81-10aacb5eef83-env-overrides\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.668155 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-run-systemd\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.668176 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/159b9801-57e3-4cf0-9b81-10aacb5eef83-ovnkube-script-lib\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.668202 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-etc-openvswitch\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.668223 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-run-ovn-kubernetes\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.668318 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-run-ovn\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.668573 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-run-netns\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.668247 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.669110 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-cni-netd\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.669149 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-systemd-units\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.669171 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-run-systemd\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.669189 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/159b9801-57e3-4cf0-9b81-10aacb5eef83-ovnkube-config\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.669202 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.669204 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-run-ovn-kubernetes\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.669247 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-slash\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.669281 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-log-socket\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.669318 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-node-log\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.669343 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-kubelet\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.669395 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-cni-bin\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.669416 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-var-lib-openvswitch\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.669432 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-run-openvswitch\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.669436 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-etc-openvswitch\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.669492 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/159b9801-57e3-4cf0-9b81-10aacb5eef83-ovnkube-script-lib\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.669518 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-run-openvswitch\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.669565 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-node-log\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.669571 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-slash\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.669594 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-var-lib-openvswitch\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.669621 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-log-socket\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.669626 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-kubelet\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.669622 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-cni-bin\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.669994 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/159b9801-57e3-4cf0-9b81-10aacb5eef83-ovnkube-config\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.670155 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/159b9801-57e3-4cf0-9b81-10aacb5eef83-env-overrides\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.674850 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/159b9801-57e3-4cf0-9b81-10aacb5eef83-ovn-node-metrics-cert\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.681844 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.687220 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nvq5\" (UniqueName: \"kubernetes.io/projected/159b9801-57e3-4cf0-9b81-10aacb5eef83-kube-api-access-8nvq5\") pod \"ovnkube-node-r8pc4\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.692252 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.692292 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.692304 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.692319 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.692331 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:23Z","lastTransitionTime":"2026-01-31T09:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.696109 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.713933 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.729818 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.745367 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.760915 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.774427 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.786402 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:23Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.795044 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.795082 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.795091 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.795108 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.795122 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:23Z","lastTransitionTime":"2026-01-31T09:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.836632 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:23 crc kubenswrapper[4830]: W0131 09:01:23.849103 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod159b9801_57e3_4cf0_9b81_10aacb5eef83.slice/crio-a62084ce1ecb569b06c0f5e5d4ebedf6167c26b47f68c88eac425a8407c28db9 WatchSource:0}: Error finding container a62084ce1ecb569b06c0f5e5d4ebedf6167c26b47f68c88eac425a8407c28db9: Status 404 returned error can't find the container with id a62084ce1ecb569b06c0f5e5d4ebedf6167c26b47f68c88eac425a8407c28db9 Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.898213 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.898252 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.898260 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.898275 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:23 crc kubenswrapper[4830]: I0131 09:01:23.898315 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:23Z","lastTransitionTime":"2026-01-31T09:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.001365 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.001411 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.001428 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.001447 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.001461 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:24Z","lastTransitionTime":"2026-01-31T09:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.073978 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.074154 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.074182 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:24 crc kubenswrapper[4830]: E0131 09:01:24.074252 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:01:32.074186476 +0000 UTC m=+36.567548918 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:01:24 crc kubenswrapper[4830]: E0131 09:01:24.074308 4830 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:01:24 crc kubenswrapper[4830]: E0131 09:01:24.074370 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:24 crc kubenswrapper[4830]: E0131 09:01:24.074391 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:24 crc kubenswrapper[4830]: E0131 09:01:24.074404 4830 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:24 crc kubenswrapper[4830]: E0131 09:01:24.074407 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-31 09:01:32.074387602 +0000 UTC m=+36.567750044 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:01:24 crc kubenswrapper[4830]: E0131 09:01:24.074440 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:32.074426533 +0000 UTC m=+36.567788975 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.074318 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:24 crc kubenswrapper[4830]: E0131 09:01:24.074455 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:24 crc kubenswrapper[4830]: E0131 09:01:24.074474 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:24 crc kubenswrapper[4830]: E0131 09:01:24.074489 4830 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.074493 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:24 crc kubenswrapper[4830]: E0131 09:01:24.074532 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:32.074525676 +0000 UTC m=+36.567888118 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:24 crc kubenswrapper[4830]: E0131 09:01:24.074614 4830 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:24 crc kubenswrapper[4830]: E0131 09:01:24.074647 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:32.074639479 +0000 UTC m=+36.568001921 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.104256 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.104301 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.104310 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.104332 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.104342 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:24Z","lastTransitionTime":"2026-01-31T09:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.207654 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.207766 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.207827 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.207862 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.207887 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:24Z","lastTransitionTime":"2026-01-31T09:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.213936 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 10:13:23.080058579 +0000 UTC Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.251316 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.251368 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.251461 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:24 crc kubenswrapper[4830]: E0131 09:01:24.251476 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:24 crc kubenswrapper[4830]: E0131 09:01:24.251601 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:24 crc kubenswrapper[4830]: E0131 09:01:24.251891 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.310823 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.310879 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.310887 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.310910 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.310921 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:24Z","lastTransitionTime":"2026-01-31T09:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.398683 4830 generic.go:334] "Generic (PLEG): container finished" podID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerID="ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4" exitCode=0 Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.398793 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerDied","Data":"ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4"} Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.398874 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerStarted","Data":"a62084ce1ecb569b06c0f5e5d4ebedf6167c26b47f68c88eac425a8407c28db9"} Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.400924 4830 generic.go:334] "Generic (PLEG): container finished" podID="227117cb-01d3-4e44-9da3-b1d577fb3ee2" containerID="0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7" exitCode=0 Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.401027 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" event={"ID":"227117cb-01d3-4e44-9da3-b1d577fb3ee2","Type":"ContainerDied","Data":"0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7"} Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.401081 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" event={"ID":"227117cb-01d3-4e44-9da3-b1d577fb3ee2","Type":"ContainerStarted","Data":"5c95cab96651045cb2c8b7ddbffe5d4dd7effe90d1775026e6dd2c5930b78c78"} Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.403105 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cjqbn" event={"ID":"b7e133cc-19e8-4770-9146-88dac53a6531","Type":"ContainerStarted","Data":"4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70"} Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.403156 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-cjqbn" event={"ID":"b7e133cc-19e8-4770-9146-88dac53a6531","Type":"ContainerStarted","Data":"eb0ac968b7b3619687296493508ba1c87e12a2570b87fb71ee922d105f9cc153"} Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.406132 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerStarted","Data":"e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2"} Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.406206 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerStarted","Data":"7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c"} Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.406219 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerStarted","Data":"a2a28d572a13aed79318719c6ad01a2deadd6a97bd75b98af491ac98beaf8e2c"} Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.414348 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.414406 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.414421 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.414442 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.414453 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:24Z","lastTransitionTime":"2026-01-31T09:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.417484 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.435673 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.457313 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.469885 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.485403 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.501981 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.519033 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.519472 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.519483 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.519046 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.519499 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.519661 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:24Z","lastTransitionTime":"2026-01-31T09:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.535287 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.554695 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.574019 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.602153 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z 
is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.617813 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.622657 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.622689 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.622698 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.622717 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.622772 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:24Z","lastTransitionTime":"2026-01-31T09:01:24Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.634110 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc2
76e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.652402 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.672863 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.688741 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.703225 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.718101 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.726292 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.726354 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.726369 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.726392 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.726406 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:24Z","lastTransitionTime":"2026-01-31T09:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.731873 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.747136 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.765344 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.779480 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.795361 4830 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4
ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.810917 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.823168 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.828918 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.828989 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.829004 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.829026 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.829046 4830 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:24Z","lastTransitionTime":"2026-01-31T09:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.838479 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.931766 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.932394 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.932465 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.932539 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:24 crc kubenswrapper[4830]: I0131 09:01:24.932598 4830 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:24Z","lastTransitionTime":"2026-01-31T09:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.036815 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.036869 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.036883 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.036904 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.036916 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:25Z","lastTransitionTime":"2026-01-31T09:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.139042 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.139330 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.139398 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.139461 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.139526 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:25Z","lastTransitionTime":"2026-01-31T09:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.215172 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 18:56:04.038351312 +0000 UTC Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.241915 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.241961 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.241970 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.241985 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.241994 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:25Z","lastTransitionTime":"2026-01-31T09:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.346152 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.346224 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.346237 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.346256 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.346269 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:25Z","lastTransitionTime":"2026-01-31T09:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.412923 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerStarted","Data":"3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561"} Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.413401 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerStarted","Data":"ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163"} Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.413415 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerStarted","Data":"320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179"} Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.413445 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerStarted","Data":"27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6"} Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.413459 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerStarted","Data":"0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82"} Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.417492 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" event={"ID":"227117cb-01d3-4e44-9da3-b1d577fb3ee2","Type":"ContainerStarted","Data":"e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d"} Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.433918 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.448068 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.449253 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.449299 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.449312 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.449333 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.449348 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:25Z","lastTransitionTime":"2026-01-31T09:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.462932 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.482104 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.501181 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.516169 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.529747 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.544619 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.551871 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.551930 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.551944 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.551968 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.551983 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:25Z","lastTransitionTime":"2026-01-31T09:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.571230 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.603374 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.634563 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.654157 4830 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.655348 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.655389 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.655401 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.655423 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.655438 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:25Z","lastTransitionTime":"2026-01-31T09:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.668787 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.758765 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 
09:01:25.758816 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.758828 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.758850 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.758865 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:25Z","lastTransitionTime":"2026-01-31T09:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.767189 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-zt78q"] Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.767676 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-zt78q" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.770056 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.770055 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.770265 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.773200 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.787998 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.798278 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f8a0ccd-540b-4151-a34d-438e433cb141-host\") pod \"node-ca-zt78q\" (UID: \"1f8a0ccd-540b-4151-a34d-438e433cb141\") " pod="openshift-image-registry/node-ca-zt78q" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.798344 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1f8a0ccd-540b-4151-a34d-438e433cb141-serviceca\") pod \"node-ca-zt78q\" (UID: \"1f8a0ccd-540b-4151-a34d-438e433cb141\") " pod="openshift-image-registry/node-ca-zt78q" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.798363 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6zlx\" (UniqueName: \"kubernetes.io/projected/1f8a0ccd-540b-4151-a34d-438e433cb141-kube-api-access-z6zlx\") pod \"node-ca-zt78q\" (UID: \"1f8a0ccd-540b-4151-a34d-438e433cb141\") " pod="openshift-image-registry/node-ca-zt78q" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.803171 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.819985 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.832056 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.849276 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.861465 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.861513 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.861525 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.861542 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.861557 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:25Z","lastTransitionTime":"2026-01-31T09:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.862767 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.876075 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.887040 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.899363 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1f8a0ccd-540b-4151-a34d-438e433cb141-serviceca\") pod \"node-ca-zt78q\" (UID: \"1f8a0ccd-540b-4151-a34d-438e433cb141\") " pod="openshift-image-registry/node-ca-zt78q" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.899398 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6zlx\" (UniqueName: \"kubernetes.io/projected/1f8a0ccd-540b-4151-a34d-438e433cb141-kube-api-access-z6zlx\") pod \"node-ca-zt78q\" (UID: \"1f8a0ccd-540b-4151-a34d-438e433cb141\") " pod="openshift-image-registry/node-ca-zt78q" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.899449 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f8a0ccd-540b-4151-a34d-438e433cb141-host\") pod \"node-ca-zt78q\" (UID: \"1f8a0ccd-540b-4151-a34d-438e433cb141\") " pod="openshift-image-registry/node-ca-zt78q" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.899515 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f8a0ccd-540b-4151-a34d-438e433cb141-host\") pod \"node-ca-zt78q\" (UID: \"1f8a0ccd-540b-4151-a34d-438e433cb141\") " pod="openshift-image-registry/node-ca-zt78q" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.900464 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1f8a0ccd-540b-4151-a34d-438e433cb141-serviceca\") pod \"node-ca-zt78q\" (UID: \"1f8a0ccd-540b-4151-a34d-438e433cb141\") " pod="openshift-image-registry/node-ca-zt78q" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.905260 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.917803 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6zlx\" (UniqueName: \"kubernetes.io/projected/1f8a0ccd-540b-4151-a34d-438e433cb141-kube-api-access-z6zlx\") pod \"node-ca-zt78q\" (UID: \"1f8a0ccd-540b-4151-a34d-438e433cb141\") " pod="openshift-image-registry/node-ca-zt78q" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.920102 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.939168 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.952054 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.963638 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.963670 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.963678 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.963693 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.963704 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:25Z","lastTransitionTime":"2026-01-31T09:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.967806 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.991647 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4830]: I0131 09:01:25.999742 4830 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 31 09:01:26 crc kubenswrapper[4830]: W0131 09:01:26.000016 4830 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received Jan 31 09:01:26 crc kubenswrapper[4830]: W0131 09:01:26.000542 4830 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 31 09:01:26 crc kubenswrapper[4830]: W0131 09:01:26.001705 4830 reflector.go:484] 
object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received Jan 31 09:01:26 crc kubenswrapper[4830]: W0131 09:01:26.001848 4830 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.066963 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.067035 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.067046 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.067066 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.067085 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:26Z","lastTransitionTime":"2026-01-31T09:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.082222 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-zt78q" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.170141 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.170199 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.170214 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.170237 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.170251 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:26Z","lastTransitionTime":"2026-01-31T09:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.216157 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 12:24:38.711640289 +0000 UTC Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.251033 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:26 crc kubenswrapper[4830]: E0131 09:01:26.251666 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.251785 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.251796 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:26 crc kubenswrapper[4830]: E0131 09:01:26.251885 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:26 crc kubenswrapper[4830]: E0131 09:01:26.251961 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.267102 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.274718 4830 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.274774 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.274786 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.274814 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.274826 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:26Z","lastTransitionTime":"2026-01-31T09:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.291315 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z 
is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.309970 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.327294 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.339240 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.357943 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.375063 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.376863 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.376916 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.376926 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.376945 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.376956 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:26Z","lastTransitionTime":"2026-01-31T09:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.391004 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":
\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.413078 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.423193 4830 generic.go:334] "Generic (PLEG): container finished" podID="227117cb-01d3-4e44-9da3-b1d577fb3ee2" containerID="e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d" exitCode=0 Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.423256 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" event={"ID":"227117cb-01d3-4e44-9da3-b1d577fb3ee2","Type":"ContainerDied","Data":"e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d"} Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.428292 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerStarted","Data":"351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358"} Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.433172 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.434715 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-zt78q" event={"ID":"1f8a0ccd-540b-4151-a34d-438e433cb141","Type":"ContainerStarted","Data":"362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8"} Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.434819 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-zt78q" event={"ID":"1f8a0ccd-540b-4151-a34d-438e433cb141","Type":"ContainerStarted","Data":"a7bf03452deed4ea37c7998187125e8fbe7a71a1a60b44d2bd093cbb8577b23d"} Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.448379 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.461296 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.475980 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b827994
88ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.484031 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.484068 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.484078 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.484101 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.484112 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:26Z","lastTransitionTime":"2026-01-31T09:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.494281 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.509853 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.523031 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.540037 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.555809 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.573185 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin
\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.587245 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.588224 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.588266 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.588275 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.588292 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.588303 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:26Z","lastTransitionTime":"2026-01-31T09:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.597758 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.615525 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.633081 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.647565 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.659198 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.674427 4830 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4
ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.688113 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.692467 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.692611 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.692671 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.692795 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.692874 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:26Z","lastTransitionTime":"2026-01-31T09:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.709040 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b
02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.795740 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.796062 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.796155 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.796239 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.796312 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:26Z","lastTransitionTime":"2026-01-31T09:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.819966 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.899708 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.900032 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.900148 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.900232 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:26 crc kubenswrapper[4830]: I0131 09:01:26.900308 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:26Z","lastTransitionTime":"2026-01-31T09:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.003556 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.003624 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.003635 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.003656 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.003670 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:27Z","lastTransitionTime":"2026-01-31T09:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.106335 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.106397 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.106407 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.106425 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.106437 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:27Z","lastTransitionTime":"2026-01-31T09:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.124258 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.209373 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.209419 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.209432 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.209451 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.209463 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:27Z","lastTransitionTime":"2026-01-31T09:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.216793 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 09:18:05.763044726 +0000 UTC Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.255754 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.312306 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.312361 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.312382 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.312402 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.312415 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:27Z","lastTransitionTime":"2026-01-31T09:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.404112 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.414521 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.414559 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.414568 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.414587 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.414598 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:27Z","lastTransitionTime":"2026-01-31T09:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.441605 4830 generic.go:334] "Generic (PLEG): container finished" podID="227117cb-01d3-4e44-9da3-b1d577fb3ee2" containerID="10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417" exitCode=0 Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.441662 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" event={"ID":"227117cb-01d3-4e44-9da3-b1d577fb3ee2","Type":"ContainerDied","Data":"10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417"} Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.462689 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.490858 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z 
is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.507899 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.516882 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.516915 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.516923 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.516940 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.516949 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:27Z","lastTransitionTime":"2026-01-31T09:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.522203 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.532955 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.548238 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.563831 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.580409 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\"
:\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.597682 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.611685 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.620445 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.620501 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.620516 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.620557 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.620570 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:27Z","lastTransitionTime":"2026-01-31T09:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.627816 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.643544 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.656048 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.669754 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.723853 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.724235 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.724372 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.724464 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.724569 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:27Z","lastTransitionTime":"2026-01-31T09:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.827967 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.828358 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.828469 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.828609 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.828775 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:27Z","lastTransitionTime":"2026-01-31T09:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
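
Every "Failed to update status for pod" record above fails the same way: the kubelet's status PATCH is intercepted by the pod.network-node-identity.openshift.io webhook on 127.0.0.1:9743, and that webhook serves a certificate whose NotAfter (2025-08-24T17:21:41Z) is months behind the node clock (2026-01-31), which is consistent with a cluster image whose certificates aged out while the VM was powered off. The "certificate has expired or is not yet valid" text is ordinary x509 validity-window checking; here is a minimal Go sketch of that check. The tls.crt filename is an assumption; the log only shows the webhook container mounting a webhook-cert volume at /etc/webhook-cert/.

```go
// Minimal sketch (not from the source) of the validity check behind the
// "x509: certificate has expired or is not yet valid" errors above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path: the webhook container mounts "webhook-cert" at
	// /etc/webhook-cert/; the exact filename is assumed.
	data, err := os.ReadFile("/etc/webhook-cert/tls.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now().UTC()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	case now.After(cert.NotAfter):
		// This is the condition the kubelet keeps reporting above.
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}
```
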
Has your network provider started?"} Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.931883 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.932236 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.932302 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.932369 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:27 crc kubenswrapper[4830]: I0131 09:01:27.932467 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:27Z","lastTransitionTime":"2026-01-31T09:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.034567 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.034907 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.034994 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.035063 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.035123 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:28Z","lastTransitionTime":"2026-01-31T09:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.138093 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.138446 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.138454 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.138472 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.138481 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:28Z","lastTransitionTime":"2026-01-31T09:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.217648 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 02:27:51.439433819 +0000 UTC Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.241242 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.241317 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.241342 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.241368 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.241384 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:28Z","lastTransitionTime":"2026-01-31T09:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.250943 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:28 crc kubenswrapper[4830]: E0131 09:01:28.251303 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.251065 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:28 crc kubenswrapper[4830]: E0131 09:01:28.252002 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.251002 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:28 crc kubenswrapper[4830]: E0131 09:01:28.252190 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.344318 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.344367 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.344376 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.344395 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.344406 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:28Z","lastTransitionTime":"2026-01-31T09:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.446560 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.446601 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.446615 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.446631 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.446641 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:28Z","lastTransitionTime":"2026-01-31T09:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.450318 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerStarted","Data":"a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09"} Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.453607 4830 generic.go:334] "Generic (PLEG): container finished" podID="227117cb-01d3-4e44-9da3-b1d577fb3ee2" containerID="c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54" exitCode=0 Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.453659 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" event={"ID":"227117cb-01d3-4e44-9da3-b1d577fb3ee2","Type":"ContainerDied","Data":"c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54"} Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.478984 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\
\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.495503 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.514907 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.526827 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.538281 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.550920 4830 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.550990 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.551004 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.551029 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.551041 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:28Z","lastTransitionTime":"2026-01-31T09:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.552274 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.576125 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z 
is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.593856 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.608815 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.620392 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.633974 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.648528 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.653571 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.653613 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.653629 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.653649 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.653662 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:28Z","lastTransitionTime":"2026-01-31T09:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.663767 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":
\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.682428 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.759319 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.759380 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.759391 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.759937 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.760034 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:28Z","lastTransitionTime":"2026-01-31T09:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.863205 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.863266 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.863278 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.863299 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.863315 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:28Z","lastTransitionTime":"2026-01-31T09:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.966253 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.966323 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.966346 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.966379 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:28 crc kubenswrapper[4830]: I0131 09:01:28.966402 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:28Z","lastTransitionTime":"2026-01-31T09:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.069998 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.070054 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.070067 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.070092 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.070115 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:29Z","lastTransitionTime":"2026-01-31T09:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.172530 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.172583 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.172595 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.172619 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.172633 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:29Z","lastTransitionTime":"2026-01-31T09:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.218405 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 23:57:50.622216864 +0000 UTC Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.275832 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.275889 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.275907 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.275939 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.275961 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:29Z","lastTransitionTime":"2026-01-31T09:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.379380 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.379445 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.379462 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.379488 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.379518 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:29Z","lastTransitionTime":"2026-01-31T09:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.460157 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" event={"ID":"227117cb-01d3-4e44-9da3-b1d577fb3ee2","Type":"ContainerStarted","Data":"443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a"} Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.475062 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/k
ubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.482705 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.482761 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.482780 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.482800 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.482815 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:29Z","lastTransitionTime":"2026-01-31T09:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.490625 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.507280 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.522097 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.536250 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.548952 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.567581 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.587314 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.587380 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.587404 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.587426 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.587440 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:29Z","lastTransitionTime":"2026-01-31T09:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.590474 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.606144 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.617923 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.632071 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.642482 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.654365 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.671111 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.689988 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.690018 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.690030 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.690051 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.690070 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:29Z","lastTransitionTime":"2026-01-31T09:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.793232 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.793272 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.793281 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.793296 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.793305 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:29Z","lastTransitionTime":"2026-01-31T09:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.896056 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.896113 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.896129 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.896154 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.896168 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:29Z","lastTransitionTime":"2026-01-31T09:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.999373 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.999429 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.999442 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.999463 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:29 crc kubenswrapper[4830]: I0131 09:01:29.999477 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:29Z","lastTransitionTime":"2026-01-31T09:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.101872 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.101953 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.101962 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.101979 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.101990 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:30Z","lastTransitionTime":"2026-01-31T09:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.204405 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.204461 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.204471 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.204490 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.204502 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:30Z","lastTransitionTime":"2026-01-31T09:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.218811 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 22:19:13.431268361 +0000 UTC
Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.273805 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:01:30 crc kubenswrapper[4830]: E0131 09:01:30.273981 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.275014 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.275136 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:01:30 crc kubenswrapper[4830]: E0131 09:01:30.275200 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 09:01:30 crc kubenswrapper[4830]: E0131 09:01:30.275406 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.276216 4830 scope.go:117] "RemoveContainer" containerID="8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9"
Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.307236 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.307750 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.307767 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.307785 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.307796 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:30Z","lastTransitionTime":"2026-01-31T09:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.410851 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.410887 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.410901 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.410917 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.410929 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:30Z","lastTransitionTime":"2026-01-31T09:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.471484 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerStarted","Data":"81cb2855bf27ee35b843c98bb352ecabf420ed858ebfc1459adaac6a9fd55407"} Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.471934 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.475766 4830 generic.go:334] "Generic (PLEG): container finished" podID="227117cb-01d3-4e44-9da3-b1d577fb3ee2" containerID="443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a" exitCode=0 Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.475813 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" event={"ID":"227117cb-01d3-4e44-9da3-b1d577fb3ee2","Type":"ContainerDied","Data":"443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a"} Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.503627 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.518428 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.518479 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.518490 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.518512 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.518526 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:30Z","lastTransitionTime":"2026-01-31T09:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.523174 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.529587 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.551168 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.576832 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.590440 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.603787 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\"
:\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.617554 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.621813 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.621979 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.622081 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.622179 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.622261 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:30Z","lastTransitionTime":"2026-01-31T09:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.632122 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.644857 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.658313 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.671929 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.686593 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.703211 4830 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.725143 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.725188 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:30 
crc kubenswrapper[4830]: I0131 09:01:30.725200 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.725221 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.725233 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:30Z","lastTransitionTime":"2026-01-31T09:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.728571 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81cb2855bf27ee35b843c98bb352ecabf420ed85
8ebfc1459adaac6a9fd55407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.754417 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.769638 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.786473 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.800414 4830 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4
ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.814822 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.827979 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.828033 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.828044 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.828062 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.828075 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:30Z","lastTransitionTime":"2026-01-31T09:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.831095 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.858618 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81cb2855bf27ee35b843c98bb352ecabf420ed85
8ebfc1459adaac6a9fd55407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.874560 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.889997 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.902103 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.918554 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.931499 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.932839 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.932875 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.932885 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.932904 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.932916 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:30Z","lastTransitionTime":"2026-01-31T09:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.945134 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":
\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4830]: I0131 09:01:30.961204 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.035324 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.035353 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.035363 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.035378 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.035389 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:31Z","lastTransitionTime":"2026-01-31T09:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.144976 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.145022 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.145034 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.145050 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.145064 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:31Z","lastTransitionTime":"2026-01-31T09:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.219419 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:37:24.926012907 +0000 UTC Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.247859 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.247910 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.247921 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.247937 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.247948 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:31Z","lastTransitionTime":"2026-01-31T09:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.350609 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.350645 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.350654 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.350671 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.350680 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:31Z","lastTransitionTime":"2026-01-31T09:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.453263 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.453313 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.453323 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.453340 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.453354 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:31Z","lastTransitionTime":"2026-01-31T09:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.484326 4830 generic.go:334] "Generic (PLEG): container finished" podID="227117cb-01d3-4e44-9da3-b1d577fb3ee2" containerID="e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766" exitCode=0 Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.484421 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" event={"ID":"227117cb-01d3-4e44-9da3-b1d577fb3ee2","Type":"ContainerDied","Data":"e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766"} Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.486365 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.488900 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79"} Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.489125 4830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.489767 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.489829 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.505323 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.523168 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.527633 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.548278 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.555962 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.556007 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.556021 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.556043 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.556058 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:31Z","lastTransitionTime":"2026-01-31T09:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.564922 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.581833 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.609389 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.629522 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81cb2855bf27ee35b843c98bb352ecabf420ed858ebfc1459adaac6a9fd55407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.642208 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.664465 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.664500 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.664510 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.664529 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.664540 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:31Z","lastTransitionTime":"2026-01-31T09:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.666059 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.686895 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.703708 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.724791 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.736812 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.750927 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\
\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.768194 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.768261 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.768278 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.768309 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.768325 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:31Z","lastTransitionTime":"2026-01-31T09:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.770248 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.790466 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d7
4d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.806660 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.817240 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.828106 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.843229 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.854636 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.868046 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.871431 4830 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.871474 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.871486 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.871506 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.871520 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:31Z","lastTransitionTime":"2026-01-31T09:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.884135 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.898453 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.918882 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81cb2855bf27ee35b843c98bb352ecabf420ed858ebfc1459adaac6a9fd55407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.930716 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.936619 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.937074 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.937136 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.937163 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.937511 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:31Z","lastTransitionTime":"2026-01-31T09:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.947405 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: E0131 09:01:31.952946 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient 
memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\
\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\
":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.961320 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.961358 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.961370 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.961389 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.961404 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:31Z","lastTransitionTime":"2026-01-31T09:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.968245 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: E0131 09:01:31.982652 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.987176 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.987218 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.987232 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.987250 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:31 crc kubenswrapper[4830]: I0131 09:01:31.987265 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:31Z","lastTransitionTime":"2026-01-31T09:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:32 crc kubenswrapper[4830]: E0131 09:01:32.006857 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.011204 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.011252 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.011271 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.011299 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.011322 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:32Z","lastTransitionTime":"2026-01-31T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:32 crc kubenswrapper[4830]: E0131 09:01:32.027301 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.031623 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.031667 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.031683 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.031741 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.031760 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:32Z","lastTransitionTime":"2026-01-31T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:32 crc kubenswrapper[4830]: E0131 09:01:32.045523 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4830]: E0131 09:01:32.045648 4830 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.047983 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.048046 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.048062 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.048085 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.048099 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:32Z","lastTransitionTime":"2026-01-31T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.090655 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.090858 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.090899 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.090919 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.090956 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:32 crc kubenswrapper[4830]: E0131 09:01:32.091113 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:32 crc kubenswrapper[4830]: E0131 09:01:32.091132 4830 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:32 crc kubenswrapper[4830]: E0131 09:01:32.091147 4830 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:32 crc kubenswrapper[4830]: E0131 09:01:32.091204 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:48.091187935 +0000 UTC m=+52.584550377 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:32 crc kubenswrapper[4830]: E0131 09:01:32.091698 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:01:48.091689319 +0000 UTC m=+52.585051761 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:01:32 crc kubenswrapper[4830]: E0131 09:01:32.091797 4830 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:32 crc kubenswrapper[4830]: E0131 09:01:32.091835 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:48.091827673 +0000 UTC m=+52.585190115 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:32 crc kubenswrapper[4830]: E0131 09:01:32.091869 4830 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:01:32 crc kubenswrapper[4830]: E0131 09:01:32.091890 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:48.091884384 +0000 UTC m=+52.585246826 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:01:32 crc kubenswrapper[4830]: E0131 09:01:32.091937 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:32 crc kubenswrapper[4830]: E0131 09:01:32.091948 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:32 crc kubenswrapper[4830]: E0131 09:01:32.091956 4830 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:32 crc kubenswrapper[4830]: E0131 09:01:32.091977 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:48.091971787 +0000 UTC m=+52.585334229 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.151204 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.151236 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.151244 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.151259 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.151268 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:32Z","lastTransitionTime":"2026-01-31T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.220336 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 04:22:22.460008398 +0000 UTC Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.253145 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:32 crc kubenswrapper[4830]: E0131 09:01:32.253289 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.253352 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:32 crc kubenswrapper[4830]: E0131 09:01:32.253401 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.253437 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:32 crc kubenswrapper[4830]: E0131 09:01:32.253478 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.259693 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.259964 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.259997 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.260063 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.260092 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:32Z","lastTransitionTime":"2026-01-31T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.364120 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.364163 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.364179 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.364200 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.364215 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:32Z","lastTransitionTime":"2026-01-31T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.467040 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.467074 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.467083 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.467099 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.467108 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:32Z","lastTransitionTime":"2026-01-31T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.497281 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" event={"ID":"227117cb-01d3-4e44-9da3-b1d577fb3ee2","Type":"ContainerStarted","Data":"d59f81b73056481d4e6eb23c2a98c3c088b5255b82cd28e0cad0ac2a9b271cfe"} Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.497368 4830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.513369 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.533157 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.545027 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.560156 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.569654 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.569693 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.569702 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.569749 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.569764 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:32Z","lastTransitionTime":"2026-01-31T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.580757 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.598068 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.612522 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59f81b73056481d4e6eb23c2a98c3c088b5255b82cd28e0cad0ac2a9b271cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.624615 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.637947 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.653782 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.668558 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.672516 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.672550 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.672562 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.672579 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.672590 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:32Z","lastTransitionTime":"2026-01-31T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.682994 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.699157 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.721528 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81cb2855bf27ee35b843c98bb352ecabf420ed858ebfc1459adaac6a9fd55407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.776150 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.776215 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.776229 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.776253 4830 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.776269 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:32Z","lastTransitionTime":"2026-01-31T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.880624 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.881076 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.881091 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.881111 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.881124 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:32Z","lastTransitionTime":"2026-01-31T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.984456 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.984516 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.984528 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.984548 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:32 crc kubenswrapper[4830]: I0131 09:01:32.984564 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:32Z","lastTransitionTime":"2026-01-31T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.088111 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.088192 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.088210 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.088239 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.088255 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:33Z","lastTransitionTime":"2026-01-31T09:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.192229 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.192290 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.192308 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.192333 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.192347 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:33Z","lastTransitionTime":"2026-01-31T09:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.220957 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 01:02:18.338731627 +0000 UTC Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.295286 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.295346 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.295358 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.295379 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.295397 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:33Z","lastTransitionTime":"2026-01-31T09:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.398460 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.398534 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.398554 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.398575 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.398590 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:33Z","lastTransitionTime":"2026-01-31T09:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.500466 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.500511 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.500520 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.500535 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.500546 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:33Z","lastTransitionTime":"2026-01-31T09:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.503828 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r8pc4_159b9801-57e3-4cf0-9b81-10aacb5eef83/ovnkube-controller/0.log" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.506906 4830 generic.go:334] "Generic (PLEG): container finished" podID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerID="81cb2855bf27ee35b843c98bb352ecabf420ed858ebfc1459adaac6a9fd55407" exitCode=1 Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.506943 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerDied","Data":"81cb2855bf27ee35b843c98bb352ecabf420ed858ebfc1459adaac6a9fd55407"} Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.507769 4830 scope.go:117] "RemoveContainer" containerID="81cb2855bf27ee35b843c98bb352ecabf420ed858ebfc1459adaac6a9fd55407" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.525912 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.546129 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.562712 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.582984 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.600387 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.605557 4830 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.605586 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.605596 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.605614 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.605625 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:33Z","lastTransitionTime":"2026-01-31T09:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.613848 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.635040 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81cb2855bf27ee35b843c98bb352ecabf420ed85
8ebfc1459adaac6a9fd55407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81cb2855bf27ee35b843c98bb352ecabf420ed858ebfc1459adaac6a9fd55407\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:32.948026 6097 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 09:01:32.948086 6097 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 09:01:32.948101 6097 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 09:01:32.948106 6097 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 09:01:32.948118 6097 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 09:01:32.948123 6097 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 09:01:32.948149 6097 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 09:01:32.948187 6097 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 09:01:32.948397 6097 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 09:01:32.948411 6097 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 09:01:32.948422 6097 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 09:01:32.948427 6097 factory.go:656] Stopping watch factory\\\\nI0131 09:01:32.948429 6097 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 09:01:32.948453 6097 ovnkube.go:599] Stopped ovnkube\\\\nI0131 09:01:32.948455 6097 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 
09:01:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.654407 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.669889 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.685771 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.700881 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.709306 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.709356 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.709366 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.709385 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.709397 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:33Z","lastTransitionTime":"2026-01-31T09:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.715462 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.730295 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.743428 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59f81b73056481d4e6eb23c2a98c3c088b5255b82cd28e0cad0ac2a9b271cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.812275 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.812325 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.812334 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.812351 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.812361 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:33Z","lastTransitionTime":"2026-01-31T09:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.915163 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.915224 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.915235 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.915253 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:33 crc kubenswrapper[4830]: I0131 09:01:33.915265 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:33Z","lastTransitionTime":"2026-01-31T09:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.018455 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.018516 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.018531 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.018552 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.018571 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:34Z","lastTransitionTime":"2026-01-31T09:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.121341 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.121421 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.121435 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.121482 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.121494 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:34Z","lastTransitionTime":"2026-01-31T09:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.221654 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 02:17:24.024402182 +0000 UTC Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.224022 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.224068 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.224082 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.224102 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.224113 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:34Z","lastTransitionTime":"2026-01-31T09:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.251467 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.251583 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.251660 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:34 crc kubenswrapper[4830]: E0131 09:01:34.251673 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:34 crc kubenswrapper[4830]: E0131 09:01:34.251826 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:34 crc kubenswrapper[4830]: E0131 09:01:34.252087 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.326620 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.326657 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.326665 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.326680 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.326691 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:34Z","lastTransitionTime":"2026-01-31T09:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.429923 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.429972 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.429980 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.429998 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.430009 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:34Z","lastTransitionTime":"2026-01-31T09:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.512604 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r8pc4_159b9801-57e3-4cf0-9b81-10aacb5eef83/ovnkube-controller/0.log" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.515332 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerStarted","Data":"ed906c96861bf8d2af425b975d67a4b298930cd75ababc2d621541adaa7b2ba2"} Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.515444 4830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.532927 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.533026 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.533051 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.533083 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.533101 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:34Z","lastTransitionTime":"2026-01-31T09:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.534593 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.551838 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59f81b73056481d4e6eb23c2a98c3c088b5255b82cd28e0cad0ac2a9b271cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.567335 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.580572 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.598408 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.611877 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.624982 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.636312 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.636369 4830 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.636383 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.636405 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.636421 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:34Z","lastTransitionTime":"2026-01-31T09:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.637829 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.656034 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.672442 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.691657 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed906c96861bf8d2af425b975d67a4b298930cd75ababc2d621541adaa7b2ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81cb2855bf27ee35b843c98bb352ecabf420ed858ebfc1459adaac6a9fd55407\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:32.948026 6097 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 09:01:32.948086 6097 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 09:01:32.948101 6097 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 09:01:32.948106 6097 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 09:01:32.948118 6097 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 09:01:32.948123 6097 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 09:01:32.948149 6097 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 09:01:32.948187 6097 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 09:01:32.948397 6097 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 09:01:32.948411 6097 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 09:01:32.948422 6097 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 09:01:32.948427 6097 factory.go:656] Stopping watch factory\\\\nI0131 09:01:32.948429 6097 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 09:01:32.948453 6097 ovnkube.go:599] Stopped ovnkube\\\\nI0131 09:01:32.948455 6097 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 
09:01:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.702655 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.716435 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.731601 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.739869 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.739933 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.739944 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.739968 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.739982 4830 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:34Z","lastTransitionTime":"2026-01-31T09:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.843374 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.843428 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.843438 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.843459 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.843472 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:34Z","lastTransitionTime":"2026-01-31T09:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.946480 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.946555 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.946575 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.946604 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:34 crc kubenswrapper[4830]: I0131 09:01:34.946625 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:34Z","lastTransitionTime":"2026-01-31T09:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.049987 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.050041 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.050054 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.050072 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.050085 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:35Z","lastTransitionTime":"2026-01-31T09:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.153599 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.153663 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.153675 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.153694 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.153705 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:35Z","lastTransitionTime":"2026-01-31T09:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.222204 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 03:05:14.435323629 +0000 UTC Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.257023 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.257064 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.257079 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.257098 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.257112 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:35Z","lastTransitionTime":"2026-01-31T09:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.359964 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.360026 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.360039 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.360063 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.360078 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:35Z","lastTransitionTime":"2026-01-31T09:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.463426 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.463463 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.463474 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.463490 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.463482 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99"] Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.463501 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:35Z","lastTransitionTime":"2026-01-31T09:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.464022 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.466867 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.467804 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.481586 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.494484 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.525588 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r8pc4_159b9801-57e3-4cf0-9b81-10aacb5eef83/ovnkube-controller/1.log" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.526460 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r8pc4_159b9801-57e3-4cf0-9b81-10aacb5eef83/ovnkube-controller/0.log" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.528886 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/44acb8ed-5840-46fa-9ba1-1b89653e1478-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7vq99\" (UID: \"44acb8ed-5840-46fa-9ba1-1b89653e1478\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.528936 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9w5d\" (UniqueName: \"kubernetes.io/projected/44acb8ed-5840-46fa-9ba1-1b89653e1478-kube-api-access-v9w5d\") pod \"ovnkube-control-plane-749d76644c-7vq99\" (UID: \"44acb8ed-5840-46fa-9ba1-1b89653e1478\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.528979 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/44acb8ed-5840-46fa-9ba1-1b89653e1478-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7vq99\" (UID: \"44acb8ed-5840-46fa-9ba1-1b89653e1478\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.529048 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/44acb8ed-5840-46fa-9ba1-1b89653e1478-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7vq99\" (UID: \"44acb8ed-5840-46fa-9ba1-1b89653e1478\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.529197 4830 generic.go:334] "Generic (PLEG): container finished" podID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerID="ed906c96861bf8d2af425b975d67a4b298930cd75ababc2d621541adaa7b2ba2" exitCode=1 Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.529263 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerDied","Data":"ed906c96861bf8d2af425b975d67a4b298930cd75ababc2d621541adaa7b2ba2"} Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.529314 4830 scope.go:117] "RemoveContainer" containerID="81cb2855bf27ee35b843c98bb352ecabf420ed858ebfc1459adaac6a9fd55407" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.530078 4830 scope.go:117] "RemoveContainer" containerID="ed906c96861bf8d2af425b975d67a4b298930cd75ababc2d621541adaa7b2ba2" Jan 31 09:01:35 crc kubenswrapper[4830]: E0131 09:01:35.530286 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-r8pc4_openshift-ovn-kubernetes(159b9801-57e3-4cf0-9b81-10aacb5eef83)\"" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.543056 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\
"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.569191 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.569238 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.569250 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.569266 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.569277 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:35Z","lastTransitionTime":"2026-01-31T09:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.576216 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59f81b73056481d4e6eb23c2a98c3c088b5255b82cd28e0cad0ac2a9b271cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.589521 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.603349 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.618130 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.630203 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/44acb8ed-5840-46fa-9ba1-1b89653e1478-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7vq99\" (UID: \"44acb8ed-5840-46fa-9ba1-1b89653e1478\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.630263 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9w5d\" (UniqueName: \"kubernetes.io/projected/44acb8ed-5840-46fa-9ba1-1b89653e1478-kube-api-access-v9w5d\") pod \"ovnkube-control-plane-749d76644c-7vq99\" (UID: \"44acb8ed-5840-46fa-9ba1-1b89653e1478\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.630294 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/44acb8ed-5840-46fa-9ba1-1b89653e1478-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7vq99\" (UID: \"44acb8ed-5840-46fa-9ba1-1b89653e1478\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.630348 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/44acb8ed-5840-46fa-9ba1-1b89653e1478-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7vq99\" (UID: \"44acb8ed-5840-46fa-9ba1-1b89653e1478\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.631121 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/44acb8ed-5840-46fa-9ba1-1b89653e1478-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7vq99\" (UID: \"44acb8ed-5840-46fa-9ba1-1b89653e1478\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.631249 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/44acb8ed-5840-46fa-9ba1-1b89653e1478-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7vq99\" (UID: \"44acb8ed-5840-46fa-9ba1-1b89653e1478\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.631268 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.643315 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/44acb8ed-5840-46fa-9ba1-1b89653e1478-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7vq99\" (UID: \"44acb8ed-5840-46fa-9ba1-1b89653e1478\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.646364 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.652359 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9w5d\" (UniqueName: \"kubernetes.io/projected/44acb8ed-5840-46fa-9ba1-1b89653e1478-kube-api-access-v9w5d\") pod \"ovnkube-control-plane-749d76644c-7vq99\" (UID: \"44acb8ed-5840-46fa-9ba1-1b89653e1478\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.661743 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.673447 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.673481 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.673492 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.673508 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.673518 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:35Z","lastTransitionTime":"2026-01-31T09:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.691637 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed906c96861bf8d2af425b975d67a4b298930cd75ababc2d621541adaa7b2ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81cb2855bf27ee35b843c98bb352ecabf420ed858ebfc1459adaac6a9fd55407\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:32.948026 6097 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 09:01:32.948086 6097 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 09:01:32.948101 6097 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 09:01:32.948106 6097 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 09:01:32.948118 6097 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 09:01:32.948123 6097 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 09:01:32.948149 6097 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 09:01:32.948187 6097 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 09:01:32.948397 6097 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 09:01:32.948411 6097 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 09:01:32.948422 6097 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 09:01:32.948427 6097 factory.go:656] Stopping watch factory\\\\nI0131 09:01:32.948429 6097 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 09:01:32.948453 6097 ovnkube.go:599] Stopped ovnkube\\\\nI0131 09:01:32.948455 6097 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 
09:01:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.704598 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44acb8ed-5840-46fa-9ba1-1b89653e1478\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7vq99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.728095 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.745711 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.758140 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.772946 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.776313 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.776402 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.776426 4830 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.776459 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.776487 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:35Z","lastTransitionTime":"2026-01-31T09:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.786119 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.800523 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed906c96861bf8d2af425b975d67a4b298930cd7
5ababc2d621541adaa7b2ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81cb2855bf27ee35b843c98bb352ecabf420ed858ebfc1459adaac6a9fd55407\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:32.948026 6097 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 09:01:32.948086 6097 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 09:01:32.948101 6097 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 09:01:32.948106 6097 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 09:01:32.948118 6097 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 09:01:32.948123 6097 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 09:01:32.948149 6097 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 09:01:32.948187 6097 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 09:01:32.948397 6097 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 09:01:32.948411 6097 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 09:01:32.948422 6097 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 09:01:32.948427 6097 factory.go:656] Stopping watch factory\\\\nI0131 09:01:32.948429 6097 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 09:01:32.948453 6097 ovnkube.go:599] Stopped ovnkube\\\\nI0131 09:01:32.948455 6097 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 09:01:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed906c96861bf8d2af425b975d67a4b298930cd75ababc2d621541adaa7b2ba2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"message\\\":\\\"objects: [openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-zt78q openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-network-node-identity/network-node-identity-vrzqb openshift-dns/node-resolver-pmbpr openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-machine-config-operator/machine-config-daemon-gt7kd openshift-multus/multus-additional-cni-plugins-x27jw openshift-multus/multus-cjqbn]\\\\nI0131 09:01:34.674011 6272 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0131 09:01:34.674042 6272 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-cjqbn\\\\nI0131 09:01:34.674056 6272 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-cjqbn\\\\nF0131 09:01:34.674068 6272 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer 
during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stoppe\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1
952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: W0131 09:01:35.806429 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44acb8ed_5840_46fa_9ba1_1b89653e1478.slice/crio-1592bffd18ed95adda8051b2f0558ea43be3d22059a9e7be02959666d85e4dca WatchSource:0}: Error finding container 1592bffd18ed95adda8051b2f0558ea43be3d22059a9e7be02959666d85e4dca: Status 404 returned error can't find the container with id 1592bffd18ed95adda8051b2f0558ea43be3d22059a9e7be02959666d85e4dca Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.823259 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44acb8ed-5840-46fa-9ba1-1b89653e1478\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7vq99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.846631 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.868199 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.880529 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.880590 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.880605 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.880664 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.880678 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:35Z","lastTransitionTime":"2026-01-31T09:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.881959 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.896701 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.911949 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.928929 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.948397 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59f81b73056481d4e6eb23c2a98c3c088b5255b82cd28e0cad0ac2a9b271cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.965750 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.984750 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.984831 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.985011 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.985031 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.985053 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:35 crc kubenswrapper[4830]: I0131 09:01:35.985119 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:35Z","lastTransitionTime":"2026-01-31T09:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:35.999592 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.016607 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.035513 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.088623 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.088673 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.088687 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.088706 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.088718 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:36Z","lastTransitionTime":"2026-01-31T09:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.190701 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.190764 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.190776 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.190794 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.190806 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:36Z","lastTransitionTime":"2026-01-31T09:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.223278 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 12:47:31.71231959 +0000 UTC Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.251004 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.251034 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.251183 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:36 crc kubenswrapper[4830]: E0131 09:01:36.251233 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:36 crc kubenswrapper[4830]: E0131 09:01:36.251454 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:36 crc kubenswrapper[4830]: E0131 09:01:36.251669 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.268079 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/
run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.284601 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59f81b73056481d4e6eb23c2a98c3c088b5255b82cd28e0cad0ac2a9b271cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.293864 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.293941 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:36 crc 
kubenswrapper[4830]: I0131 09:01:36.293955 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.293979 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.293996 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:36Z","lastTransitionTime":"2026-01-31T09:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.299597 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.312231 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.327785 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.343409 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.358840 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.376618 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.396887 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.398090 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.398140 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.398150 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.398168 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.398179 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:36Z","lastTransitionTime":"2026-01-31T09:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.414055 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44acb8ed-5840-46fa-9ba1-1b89653e1478\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7vq99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.435917 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.464228 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed906c96861bf8d2af425b975d67a4b298930cd75ababc2d621541adaa7b2ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81cb2855bf27ee35b843c98bb352ecabf420ed858ebfc1459adaac6a9fd55407\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:32.948026 6097 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 09:01:32.948086 6097 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 09:01:32.948101 6097 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 09:01:32.948106 6097 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 09:01:32.948118 6097 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 09:01:32.948123 6097 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 09:01:32.948149 6097 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 09:01:32.948187 6097 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 09:01:32.948397 6097 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 09:01:32.948411 6097 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 09:01:32.948422 6097 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 09:01:32.948427 6097 factory.go:656] Stopping watch factory\\\\nI0131 09:01:32.948429 6097 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 09:01:32.948453 6097 ovnkube.go:599] Stopped ovnkube\\\\nI0131 09:01:32.948455 6097 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 09:01:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed906c96861bf8d2af425b975d67a4b298930cd75ababc2d621541adaa7b2ba2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"message\\\":\\\"objects: [openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-network-operator/iptables-alerter-4ln5h 
openshift-image-registry/node-ca-zt78q openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-network-node-identity/network-node-identity-vrzqb openshift-dns/node-resolver-pmbpr openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-machine-config-operator/machine-config-daemon-gt7kd openshift-multus/multus-additional-cni-plugins-x27jw openshift-multus/multus-cjqbn]\\\\nI0131 09:01:34.674011 6272 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0131 09:01:34.674042 6272 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-cjqbn\\\\nI0131 09:01:34.674056 6272 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-cjqbn\\\\nF0131 09:01:34.674068 6272 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stoppe\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.481892 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.498624 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.500476 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.500535 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.500547 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.500569 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.500581 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:36Z","lastTransitionTime":"2026-01-31T09:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.515185 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.534970 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r8pc4_159b9801-57e3-4cf0-9b81-10aacb5eef83/ovnkube-controller/1.log" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.541066 4830 scope.go:117] "RemoveContainer" containerID="ed906c96861bf8d2af425b975d67a4b298930cd75ababc2d621541adaa7b2ba2" Jan 31 09:01:36 crc kubenswrapper[4830]: E0131 09:01:36.541410 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-r8pc4_openshift-ovn-kubernetes(159b9801-57e3-4cf0-9b81-10aacb5eef83)\"" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" 
podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.542109 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" event={"ID":"44acb8ed-5840-46fa-9ba1-1b89653e1478","Type":"ContainerStarted","Data":"86ac3b3a214c6bca20d7fdc92a49647dfdaf8de4391f331890f74900ab7eca11"} Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.542162 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" event={"ID":"44acb8ed-5840-46fa-9ba1-1b89653e1478","Type":"ContainerStarted","Data":"07cae4ce61629c9f8e48863d0775cf4fed46422db85ba8b29477e098b697fb1d"} Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.542176 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" event={"ID":"44acb8ed-5840-46fa-9ba1-1b89653e1478","Type":"ContainerStarted","Data":"1592bffd18ed95adda8051b2f0558ea43be3d22059a9e7be02959666d85e4dca"} Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.557518 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.572107 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.587973 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.605046 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.605090 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.605100 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.605120 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.605133 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:36Z","lastTransitionTime":"2026-01-31T09:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.605974 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59f81b73056481d4e6eb23c2a98c3c088b5255b82cd28e0cad0ac2a9b271cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.626770 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.642482 4830 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4
ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.661324 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.674928 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.687925 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.701460 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.707163 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.707229 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.707242 4830 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.707262 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.707274 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:36Z","lastTransitionTime":"2026-01-31T09:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.728752 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed906c96861bf8d2af425b975d67a4b298930cd7
5ababc2d621541adaa7b2ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed906c96861bf8d2af425b975d67a4b298930cd75ababc2d621541adaa7b2ba2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"message\\\":\\\"objects: [openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-zt78q openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-network-node-identity/network-node-identity-vrzqb openshift-dns/node-resolver-pmbpr openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-machine-config-operator/machine-config-daemon-gt7kd openshift-multus/multus-additional-cni-plugins-x27jw openshift-multus/multus-cjqbn]\\\\nI0131 09:01:34.674011 6272 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0131 09:01:34.674042 6272 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-cjqbn\\\\nI0131 09:01:34.674056 6272 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-cjqbn\\\\nF0131 09:01:34.674068 6272 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stoppe\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-r8pc4_openshift-ovn-kubernetes(159b9801-57e3-4cf0-9b81-10aacb5eef83)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.741646 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44acb8ed-5840-46fa-9ba1-1b89653e1478\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7vq99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.757747 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.770248 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.782554 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.799240 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.810984 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.811031 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.811042 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.811060 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.811070 4830 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:36Z","lastTransitionTime":"2026-01-31T09:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.814282 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.825147 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.838949 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.852409 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.867909 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.885920 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59f81b73056481d4e6eb23c2a98c3c088b5255b82cd28e0cad0ac2a9b271cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.899395 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.913909 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.913958 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.913968 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.913983 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.913993 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:36Z","lastTransitionTime":"2026-01-31T09:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.915417 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.930187 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.944379 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.959073 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.974170 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.995487 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-5kl8z"] Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.996231 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:01:36 crc kubenswrapper[4830]: E0131 09:01:36.996313 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:01:36 crc kubenswrapper[4830]: I0131 09:01:36.997425 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed906c96861bf8d2af425b975d67a4b298930cd7
5ababc2d621541adaa7b2ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed906c96861bf8d2af425b975d67a4b298930cd75ababc2d621541adaa7b2ba2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"message\\\":\\\"objects: [openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-zt78q openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-network-node-identity/network-node-identity-vrzqb openshift-dns/node-resolver-pmbpr openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-machine-config-operator/machine-config-daemon-gt7kd openshift-multus/multus-additional-cni-plugins-x27jw openshift-multus/multus-cjqbn]\\\\nI0131 09:01:34.674011 6272 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0131 09:01:34.674042 6272 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-cjqbn\\\\nI0131 09:01:34.674056 6272 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-cjqbn\\\\nF0131 09:01:34.674068 6272 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stoppe\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-r8pc4_openshift-ovn-kubernetes(159b9801-57e3-4cf0-9b81-10aacb5eef83)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.010850 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44acb8ed-5840-46fa-9ba1-1b89653e1478\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07cae4ce61629c9f8e48863d0775cf4fed46422db85ba8b29477e098b697fb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ac3b3a214c6bca20d7fdc92a49647dfdaf8de4391f331890f74900ab7eca11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7vq99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.016466 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.016489 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.016499 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.016517 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.016531 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:37Z","lastTransitionTime":"2026-01-31T09:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.024175 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44acb8ed-5840-46fa-9ba1-1b89653e1478\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07cae4ce61629c9f8e48863d0775cf4fed46422db85ba8b29477e098b697fb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ac3b3a214c6bca20d7fdc92a49647dfdaf8de4391f331890f74900ab7eca11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7vq99\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.036004 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 
2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.048673 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgvfn\" (UniqueName: \"kubernetes.io/projected/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-kube-api-access-jgvfn\") pod \"network-metrics-daemon-5kl8z\" (UID: \"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\") " pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.048804 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs\") pod \"network-metrics-daemon-5kl8z\" (UID: \"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\") " pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.053847 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed906c96861bf8d2af425b975d67a4b298930cd7
5ababc2d621541adaa7b2ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed906c96861bf8d2af425b975d67a4b298930cd75ababc2d621541adaa7b2ba2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"message\\\":\\\"objects: [openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-zt78q openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-network-node-identity/network-node-identity-vrzqb openshift-dns/node-resolver-pmbpr openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-machine-config-operator/machine-config-daemon-gt7kd openshift-multus/multus-additional-cni-plugins-x27jw openshift-multus/multus-cjqbn]\\\\nI0131 09:01:34.674011 6272 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0131 09:01:34.674042 6272 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-cjqbn\\\\nI0131 09:01:34.674056 6272 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-cjqbn\\\\nF0131 09:01:34.674068 6272 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stoppe\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-r8pc4_openshift-ovn-kubernetes(159b9801-57e3-4cf0-9b81-10aacb5eef83)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.066403 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.076998 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.090437 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5kl8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5kl8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc 
kubenswrapper[4830]: I0131 09:01:37.111931 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.119452 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.119497 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.119517 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.119546 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.119572 4830 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:37Z","lastTransitionTime":"2026-01-31T09:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.133454 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.150536 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgvfn\" (UniqueName: \"kubernetes.io/projected/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-kube-api-access-jgvfn\") pod \"network-metrics-daemon-5kl8z\" (UID: \"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\") " pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.150607 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs\") pod \"network-metrics-daemon-5kl8z\" (UID: \"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\") " pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:01:37 crc kubenswrapper[4830]: E0131 09:01:37.150823 4830 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:01:37 crc kubenswrapper[4830]: E0131 09:01:37.150922 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs podName:c1fa30e4-0c03-43ab-9c37-f7ec86153b27 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:37.650897739 +0000 UTC m=+42.144260181 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs") pod "network-metrics-daemon-5kl8z" (UID: "c1fa30e4-0c03-43ab-9c37-f7ec86153b27") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.151583 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59f81b73056481d4e6eb23c2a98c3c088b5255b82cd28e0cad0ac2a9b271cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.165606 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.174246 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgvfn\" (UniqueName: \"kubernetes.io/projected/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-kube-api-access-jgvfn\") pod \"network-metrics-daemon-5kl8z\" (UID: \"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\") " pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.182810 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.197548 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.209699 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.222293 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.222333 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.222343 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.222360 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.222371 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:37Z","lastTransitionTime":"2026-01-31T09:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.224877 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.225064 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 01:01:13.878174919 +0000 UTC Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.237101 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.247954 4830 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4
ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z"
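[editor's note: every "Failed to update status for pod" record above fails the same way: the kubelet's status patch is rejected because the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-31. One way to confirm what the webhook is actually serving is to dial it and print the certificate's validity window; the Go sketch below is illustrative only (it assumes the listener is reachable from the node) and is not part of kubelet or the webhook:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"time"
    )

    func main() {
    	// Dial the webhook endpoint named in the Post error above and inspect
    	// the serving certificate it presents. InsecureSkipVerify is deliberate:
    	// we want to read an expired certificate, not establish trust.
    	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
    	if err != nil {
    		fmt.Println("dial failed:", err)
    		return
    	}
    	defer conn.Close()

    	certs := conn.ConnectionState().PeerCertificates
    	if len(certs) == 0 {
    		fmt.Println("no peer certificate presented")
    		return
    	}
    	cert := certs[0]
    	fmt.Println("subject:  ", cert.Subject)
    	fmt.Println("notBefore:", cert.NotBefore)
    	fmt.Println("notAfter: ", cert.NotAfter)
    	if time.Now().After(cert.NotAfter) {
    		// Matches the kubelet error: "certificate has expired or is not yet valid".
    		fmt.Println("serving certificate is expired")
    	}
    }

Until that certificate is rotated (or the node clock corrected), these patch failures can be expected to recur for every pod on the node.]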
[... repeated node-status events omitted: the NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady events and the matching "Node became not ready" conditions recur, identical apart from timestamps, at 09:01:37.325856-09:01:37.325955, 09:01:37.428738-09:01:37.428845, 09:01:37.532114-09:01:37.532211, and 09:01:37.634864-09:01:37.634957 ...]

Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.634973 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:37Z","lastTransitionTime":"2026-01-31T09:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.655300 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs\") pod \"network-metrics-daemon-5kl8z\" (UID: \"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\") " pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:01:37 crc kubenswrapper[4830]: E0131 09:01:37.655573 4830 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:01:37 crc kubenswrapper[4830]: E0131 09:01:37.655718 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs podName:c1fa30e4-0c03-43ab-9c37-f7ec86153b27 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:38.655680667 +0000 UTC m=+43.149043149 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs") pod "network-metrics-daemon-5kl8z" (UID: "c1fa30e4-0c03-43ab-9c37-f7ec86153b27") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.742789 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.742849 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.742862 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.742886 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:37 crc kubenswrapper[4830]: I0131 09:01:37.742901 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:37Z","lastTransitionTime":"2026-01-31T09:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}

[... repeated node-status events omitted: identical NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady events and "Node became not ready" conditions at 09:01:37.847044-09:01:37.847168, 09:01:37.950683-09:01:37.950784, and 09:01:38.054534-09:01:38.054606 ...]

Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.054616 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:38Z","lastTransitionTime":"2026-01-31T09:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}

[... repeated node-status events omitted: identical sequence at 09:01:38.158039-09:01:38.158152 ...]

Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.225461 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 06:11:18.697298223 +0000 UTC Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.251145 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.251274 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.251333 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:38 crc kubenswrapper[4830]: E0131 09:01:38.251435 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:38 crc kubenswrapper[4830]: E0131 09:01:38.251561 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
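[editor's note: the "Error syncing pod" records above all reduce to the condition setters.go keeps reporting: the CRI runtime finds no CNI configuration under /etc/kubernetes/cni/net.d/, so no pod sandbox can be created. A quick way to see whether any network config is present is to list that directory for the extensions libcni loads (.conf, .conflist, .json); this Go sketch is a hypothetical standalone check, not CRI-O's actual config loader:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	// Directory the container runtime is checking, per the log message.
    	dir := "/etc/kubernetes/cni/net.d"
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Println("cannot read", dir, "->", err) // e.g. directory missing entirely
    		return
    	}
    	found := false
    	for _, e := range entries {
    		ext := strings.ToLower(filepath.Ext(e.Name()))
    		// libcni (used by CRI-O) accepts .conf, .conflist and .json configs.
    		if ext == ".conf" || ext == ".conflist" || ext == ".json" {
    			fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
    			found = true
    		}
    	}
    	if !found {
    		fmt.Println("no CNI configuration file in", dir, "- matches the NetworkPluginNotReady error")
    	}
    }

An empty directory here is expected while the network operator's pods are themselves blocked, which is why the node stays NotReady until the network provider writes its config.]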
Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.251566 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:38 crc kubenswrapper[4830]: E0131 09:01:38.251720 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:01:38 crc kubenswrapper[4830]: E0131 09:01:38.251842 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"

[... repeated node-status events omitted: identical sequences at 09:01:38.260855-09:01:38.260919 and 09:01:38.363668-09:01:38.363757 ...]

Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.363771 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:38Z","lastTransitionTime":"2026-01-31T09:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.467062 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.467138 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.467179 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.467211 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.467233 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:38Z","lastTransitionTime":"2026-01-31T09:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.571287 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.571344 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.571385 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.571406 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.571421 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:38Z","lastTransitionTime":"2026-01-31T09:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.665671 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs\") pod \"network-metrics-daemon-5kl8z\" (UID: \"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\") " pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:01:38 crc kubenswrapper[4830]: E0131 09:01:38.665915 4830 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:01:38 crc kubenswrapper[4830]: E0131 09:01:38.666017 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs podName:c1fa30e4-0c03-43ab-9c37-f7ec86153b27 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:40.665991034 +0000 UTC m=+45.159353516 (durationBeforeRetry 2s). 
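[Annotation] The condition repeated throughout the entries above — NetworkReady=false with "no CNI configuration file in /etc/kubernetes/cni/net.d/" — comes from the runtime's network-readiness probe: until a CNI plugin (on OpenShift, written by the cluster network operator) drops a config file into that directory, the kubelet keeps the node NotReady and skips syncing every pod that needs pod networking, which is exactly the "Error syncing pod, skipping" pattern seen for the four pods above. A minimal sketch of such a directory check follows; the glob patterns and behavior are assumptions based on common CNI conventions, not the kubelet's actual implementation.

    package main

    import (
        "fmt"
        "path/filepath"
    )

    // cniConfigPresent reports whether any CNI config file exists in dir.
    // The *.conf/*.conflist/*.json patterns are an assumption based on
    // common CNI naming conventions.
    func cniConfigPresent(dir string) (bool, error) {
        for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
            matches, err := filepath.Glob(filepath.Join(dir, pat))
            if err != nil {
                return false, err
            }
            if len(matches) > 0 {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
        if err != nil {
            fmt.Println("check failed:", err)
            return
        }
        if !ok {
            fmt.Println("NetworkReady=false: no CNI configuration file")
        }
    }

Once the network provider writes its config, a check like this flips to ready and the node condition clears on the next status sync.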
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs") pod "network-metrics-daemon-5kl8z" (UID: "c1fa30e4-0c03-43ab-9c37-f7ec86153b27") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.674763 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.674860 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.674878 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.674902 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.674916 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:38Z","lastTransitionTime":"2026-01-31T09:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.777632 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.777697 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.777716 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.777772 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.777793 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:38Z","lastTransitionTime":"2026-01-31T09:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.881392 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.881528 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.881549 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.881578 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.881595 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:38Z","lastTransitionTime":"2026-01-31T09:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.985449 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.985516 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.985533 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.985562 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:38 crc kubenswrapper[4830]: I0131 09:01:38.985581 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:38Z","lastTransitionTime":"2026-01-31T09:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.089624 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.089696 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.089716 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.089797 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.089827 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:39Z","lastTransitionTime":"2026-01-31T09:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.193541 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.193616 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.193637 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.193667 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.193688 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:39Z","lastTransitionTime":"2026-01-31T09:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.226095 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 04:38:34.658063714 +0000 UTC Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.297409 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.297457 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.297471 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.297491 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.297506 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:39Z","lastTransitionTime":"2026-01-31T09:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.400111 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.400168 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.400179 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.400197 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.400209 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:39Z","lastTransitionTime":"2026-01-31T09:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.504245 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.504292 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.504302 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.504319 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.504328 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:39Z","lastTransitionTime":"2026-01-31T09:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.607278 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.607345 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.607357 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.607381 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.607392 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:39Z","lastTransitionTime":"2026-01-31T09:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.710319 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.710377 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.710392 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.710414 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.710432 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:39Z","lastTransitionTime":"2026-01-31T09:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.814718 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.814848 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.814867 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.814897 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.814917 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:39Z","lastTransitionTime":"2026-01-31T09:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.918523 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.918649 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.918675 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.918711 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:39 crc kubenswrapper[4830]: I0131 09:01:39.918809 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:39Z","lastTransitionTime":"2026-01-31T09:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.022652 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.022790 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.022809 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.022835 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.022855 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:40Z","lastTransitionTime":"2026-01-31T09:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.126005 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.126091 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.126104 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.126130 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.126144 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:40Z","lastTransitionTime":"2026-01-31T09:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.226827 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 13:24:55.998620456 +0000 UTC Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.229237 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.229300 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.229324 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.229357 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.229381 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:40Z","lastTransitionTime":"2026-01-31T09:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.250755 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.250906 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.250988 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.251005 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:40 crc kubenswrapper[4830]: E0131 09:01:40.250919 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:40 crc kubenswrapper[4830]: E0131 09:01:40.251153 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:01:40 crc kubenswrapper[4830]: E0131 09:01:40.251435 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:40 crc kubenswrapper[4830]: E0131 09:01:40.252113 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.333151 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.333216 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.333230 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.333250 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.333268 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:40Z","lastTransitionTime":"2026-01-31T09:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.436671 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.436768 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.436786 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.436815 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.436832 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:40Z","lastTransitionTime":"2026-01-31T09:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.539381 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.539421 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.539431 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.539446 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.539459 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:40Z","lastTransitionTime":"2026-01-31T09:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.642959 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.643014 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.643030 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.643053 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.643069 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:40Z","lastTransitionTime":"2026-01-31T09:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.689339 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs\") pod \"network-metrics-daemon-5kl8z\" (UID: \"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\") " pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:01:40 crc kubenswrapper[4830]: E0131 09:01:40.689539 4830 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:01:40 crc kubenswrapper[4830]: E0131 09:01:40.689629 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs podName:c1fa30e4-0c03-43ab-9c37-f7ec86153b27 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:44.689611592 +0000 UTC m=+49.182974034 (durationBeforeRetry 4s). 
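[Annotation] The metrics-certs mount for network-metrics-daemon-5kl8z keeps failing because the secret "openshift-multus"/"metrics-daemon-secret" is not yet registered with the kubelet's volume manager, and the retry interval doubles on each failure: durationBeforeRetry was 2s at 09:01:38 and is 4s in the entry just above — capped exponential backoff. A sketch of that doubling, assuming a 2s base and an illustrative cap (the real cap value is not visible in this log):

    package main

    import (
        "fmt"
        "time"
    )

    // nextBackoff doubles the retry delay after each failure, up to limit.
    // The base and limit values here are assumptions for illustration; the
    // log shows the real sequence starting 2s, 4s, ...
    func nextBackoff(prev, base, limit time.Duration) time.Duration {
        if prev == 0 {
            return base
        }
        next := prev * 2
        if next > limit {
            return limit
        }
        return next
    }

    func main() {
        var d time.Duration
        for i := 0; i < 5; i++ {
            d = nextBackoff(d, 2*time.Second, 2*time.Minute)
            fmt.Println("durationBeforeRetry", d)
        }
        // Prints 2s, 4s, 8s, 16s, 32s -- the doubling visible in the log.
    }

The backoff explains the widening gaps between MountVolume.SetUp attempts for the same volume while the underlying secret remains unavailable.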
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs") pod "network-metrics-daemon-5kl8z" (UID: "c1fa30e4-0c03-43ab-9c37-f7ec86153b27") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.745470 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.745513 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.745521 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.745536 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.745546 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:40Z","lastTransitionTime":"2026-01-31T09:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.848078 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.848138 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.848147 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.848165 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.848181 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:40Z","lastTransitionTime":"2026-01-31T09:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.950605 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.950656 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.950666 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.950686 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:40 crc kubenswrapper[4830]: I0131 09:01:40.950699 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:40Z","lastTransitionTime":"2026-01-31T09:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.053541 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.053602 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.053612 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.053631 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.053643 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:41Z","lastTransitionTime":"2026-01-31T09:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.157046 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.157101 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.157116 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.157136 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.157148 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:41Z","lastTransitionTime":"2026-01-31T09:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.227367 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 06:41:19.387479831 +0000 UTC Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.260422 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.260490 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.260503 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.260524 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.260539 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:41Z","lastTransitionTime":"2026-01-31T09:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.362813 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.362859 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.362869 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.362886 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.362897 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:41Z","lastTransitionTime":"2026-01-31T09:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.465787 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.465837 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.465849 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.465876 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.465889 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:41Z","lastTransitionTime":"2026-01-31T09:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.568354 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.568409 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.568424 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.568441 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.568453 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:41Z","lastTransitionTime":"2026-01-31T09:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.672096 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.672149 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.672165 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.672182 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.672193 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:41Z","lastTransitionTime":"2026-01-31T09:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.774926 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.774979 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.774988 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.775008 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.775021 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:41Z","lastTransitionTime":"2026-01-31T09:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.877778 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.877857 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.877867 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.877888 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.877905 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:41Z","lastTransitionTime":"2026-01-31T09:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.980550 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.980641 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.980655 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.980677 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:41 crc kubenswrapper[4830]: I0131 09:01:41.980690 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:41Z","lastTransitionTime":"2026-01-31T09:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.083813 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.083867 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.083878 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.083896 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.083907 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:42Z","lastTransitionTime":"2026-01-31T09:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.187184 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.187242 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.187259 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.187279 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.187292 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:42Z","lastTransitionTime":"2026-01-31T09:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.228007 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 12:08:57.296831345 +0000 UTC Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.251385 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.251546 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:01:42 crc kubenswrapper[4830]: E0131 09:01:42.251660 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.251734 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:42 crc kubenswrapper[4830]: E0131 09:01:42.251868 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:01:42 crc kubenswrapper[4830]: E0131 09:01:42.252052 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.251508 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:42 crc kubenswrapper[4830]: E0131 09:01:42.252188 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.290255 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.290302 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.290314 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.290334 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.290348 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:42Z","lastTransitionTime":"2026-01-31T09:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.323589 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.323629 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.323638 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.323654 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.323664 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:42Z","lastTransitionTime":"2026-01-31T09:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:42 crc kubenswrapper[4830]: E0131 09:01:42.336692 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.341351 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.341410 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
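The status patch above is rejected before it is ever evaluated: the API server must consult the node.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743/node, and that endpoint's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-31T09:01:42Z, so the TLS handshake fails. A small probe one could run on the node to confirm the certificate window, sketched in Go under the assumption that the endpoint is reachable locally (a diagnostic sketch, not OpenShift tooling):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Skip chain verification on purpose: we want to inspect the expired
	// certificate rather than fail the handshake the way the apiserver did.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		// Matches the log: "current time ... is after 2025-08-24T17:21:41Z".
		fmt.Println("certificate has expired relative to the local clock")
	}
}
```

InsecureSkipVerify is what lets the probe read NotAfter from a certificate the default verifier would reject outright; it is appropriate only for inspection, never for real traffic.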
event="NodeHasNoDiskPressure" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.341424 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.341447 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.341460 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:42Z","lastTransitionTime":"2026-01-31T09:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:42 crc kubenswrapper[4830]: E0131 09:01:42.355239 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.359124 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.359168 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.359176 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.359192 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.359207 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:42Z","lastTransitionTime":"2026-01-31T09:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:42 crc kubenswrapper[4830]: E0131 09:01:42.372866 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.377557 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.377643 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.377653 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.377673 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.377687 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:42Z","lastTransitionTime":"2026-01-31T09:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:42 crc kubenswrapper[4830]: E0131 09:01:42.391676 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.395564 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.395601 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.395612 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.395630 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.395642 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:42Z","lastTransitionTime":"2026-01-31T09:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:42 crc kubenswrapper[4830]: E0131 09:01:42.407013 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4830]: E0131 09:01:42.407190 4830 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.409695 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.409786 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.409800 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.409823 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.409842 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:42Z","lastTransitionTime":"2026-01-31T09:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.512519 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.512563 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.512573 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.512590 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.512609 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:42Z","lastTransitionTime":"2026-01-31T09:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.615070 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.615133 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.615145 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.615166 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.615179 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:42Z","lastTransitionTime":"2026-01-31T09:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.717519 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.717560 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.717570 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.717587 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.717600 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:42Z","lastTransitionTime":"2026-01-31T09:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.722536 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.736393 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.746386 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.758258 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5kl8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5kl8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc 
kubenswrapper[4830]: I0131 09:01:42.775776 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.790894 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.809777 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59f81b73056481d4e6eb23c2a98c3c088b5255b82cd28e0cad0ac2a9b271cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.820153 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.820225 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.820238 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.820259 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.820273 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:42Z","lastTransitionTime":"2026-01-31T09:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.824298 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.836450 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.849889 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.865053 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.877353 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.887717 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.902710 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.902832 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.903794 4830 scope.go:117] "RemoveContainer" containerID="ed906c96861bf8d2af425b975d67a4b298930cd75ababc2d621541adaa7b2ba2" Jan 31 09:01:42 crc kubenswrapper[4830]: E0131 09:01:42.903956 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-r8pc4_openshift-ovn-kubernetes(159b9801-57e3-4cf0-9b81-10aacb5eef83)\"" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.914377 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44acb8ed-5840-46fa-9ba1-1b89653e1478\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07cae4ce61629c9f8e48863d0775cf4fed46422db85ba8b29477e098b697fb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ac3b3a214c6bca20d7fdc92a49647dfdaf8de4391f331890f74900ab7eca11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7vq99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 
09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.923450 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.923520 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.923532 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.923580 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.923604 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:42Z","lastTransitionTime":"2026-01-31T09:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.927333 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4830]: I0131 09:01:42.944179 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed906c96861bf8d2af425b975d67a4b298930cd7
5ababc2d621541adaa7b2ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed906c96861bf8d2af425b975d67a4b298930cd75ababc2d621541adaa7b2ba2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"message\\\":\\\"objects: [openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-zt78q openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-network-node-identity/network-node-identity-vrzqb openshift-dns/node-resolver-pmbpr openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-machine-config-operator/machine-config-daemon-gt7kd openshift-multus/multus-additional-cni-plugins-x27jw openshift-multus/multus-cjqbn]\\\\nI0131 09:01:34.674011 6272 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0131 09:01:34.674042 6272 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-cjqbn\\\\nI0131 09:01:34.674056 6272 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-cjqbn\\\\nF0131 09:01:34.674068 6272 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stoppe\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-r8pc4_openshift-ovn-kubernetes(159b9801-57e3-4cf0-9b81-10aacb5eef83)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.027122 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.027181 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.027207 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.027232 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.027251 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:43Z","lastTransitionTime":"2026-01-31T09:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.130062 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.130113 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.130121 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.130139 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.130148 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:43Z","lastTransitionTime":"2026-01-31T09:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.229055 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 20:21:48.210360867 +0000 UTC Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.233772 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.233826 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.233839 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.233860 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.233877 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:43Z","lastTransitionTime":"2026-01-31T09:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.337098 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.337170 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.337180 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.337197 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.337207 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:43Z","lastTransitionTime":"2026-01-31T09:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.440786 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.440857 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.440876 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.440900 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.440918 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:43Z","lastTransitionTime":"2026-01-31T09:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.543675 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.543801 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.543819 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.543847 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.543866 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:43Z","lastTransitionTime":"2026-01-31T09:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.648537 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.648922 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.648935 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.648953 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.648995 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:43Z","lastTransitionTime":"2026-01-31T09:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.751921 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.751974 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.751991 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.752011 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.752023 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:43Z","lastTransitionTime":"2026-01-31T09:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.856802 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.856839 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.856850 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.856899 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.856913 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:43Z","lastTransitionTime":"2026-01-31T09:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.959749 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.959792 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.959803 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.959820 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:43 crc kubenswrapper[4830]: I0131 09:01:43.959834 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:43Z","lastTransitionTime":"2026-01-31T09:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.063050 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.063101 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.063117 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.063138 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.063153 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:44Z","lastTransitionTime":"2026-01-31T09:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.165859 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.165917 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.165926 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.165945 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.165958 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:44Z","lastTransitionTime":"2026-01-31T09:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.229848 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 19:11:34.766571956 +0000 UTC Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.251393 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.251434 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.251434 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:44 crc kubenswrapper[4830]: E0131 09:01:44.251579 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.251614 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:01:44 crc kubenswrapper[4830]: E0131 09:01:44.251790 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:44 crc kubenswrapper[4830]: E0131 09:01:44.251882 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:01:44 crc kubenswrapper[4830]: E0131 09:01:44.251943 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.270253 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.270304 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.270317 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.270335 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.270347 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:44Z","lastTransitionTime":"2026-01-31T09:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.372494 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.372542 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.372556 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.372579 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.372596 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:44Z","lastTransitionTime":"2026-01-31T09:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.474823 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.474895 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.474912 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.474931 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.474943 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:44Z","lastTransitionTime":"2026-01-31T09:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.577402 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.577470 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.577484 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.577507 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.577526 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:44Z","lastTransitionTime":"2026-01-31T09:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.679693 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.679753 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.679762 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.679781 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.679791 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:44Z","lastTransitionTime":"2026-01-31T09:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.735717 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs\") pod \"network-metrics-daemon-5kl8z\" (UID: \"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\") " pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:01:44 crc kubenswrapper[4830]: E0131 09:01:44.735946 4830 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:01:44 crc kubenswrapper[4830]: E0131 09:01:44.736022 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs podName:c1fa30e4-0c03-43ab-9c37-f7ec86153b27 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:52.736003425 +0000 UTC m=+57.229365867 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs") pod "network-metrics-daemon-5kl8z" (UID: "c1fa30e4-0c03-43ab-9c37-f7ec86153b27") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.782083 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.782119 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.782128 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.782144 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:44 crc kubenswrapper[4830]: I0131 09:01:44.782157 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:44Z","lastTransitionTime":"2026-01-31T09:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:45 crc kubenswrapper[4830]: I0131 09:01:45.230600 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 22:18:25.164715521 +0000 UTC Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.230984 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 16:19:19.723169543 +0000 UTC Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.250356 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.250457 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.250535 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:46 crc kubenswrapper[4830]: E0131 09:01:46.250643 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.250754 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:46 crc kubenswrapper[4830]: E0131 09:01:46.250835 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:01:46 crc kubenswrapper[4830]: E0131 09:01:46.251360 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:46 crc kubenswrapper[4830]: E0131 09:01:46.251543 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.263018 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:46Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.276861 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:46Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.302642 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59f81b73056481d4e6eb23c2a98c3c088b5255b82cd28e0cad0ac2a9b271cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T09:01:46Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.324447 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:46Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.328772 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.328816 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.328827 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.328846 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.328860 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:46Z","lastTransitionTime":"2026-01-31T09:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.353677 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:46Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.372988 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:46Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.387205 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:46Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.401429 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:46Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.414036 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:46Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.433325 4830 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.433373 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.433388 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.433409 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.433422 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:46Z","lastTransitionTime":"2026-01-31T09:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.441675 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed906c96861bf8d2af425b975d67a4b298930cd7
5ababc2d621541adaa7b2ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed906c96861bf8d2af425b975d67a4b298930cd75ababc2d621541adaa7b2ba2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"message\\\":\\\"objects: [openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-zt78q openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-network-node-identity/network-node-identity-vrzqb openshift-dns/node-resolver-pmbpr openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-machine-config-operator/machine-config-daemon-gt7kd openshift-multus/multus-additional-cni-plugins-x27jw openshift-multus/multus-cjqbn]\\\\nI0131 09:01:34.674011 6272 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0131 09:01:34.674042 6272 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-cjqbn\\\\nI0131 09:01:34.674056 6272 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-cjqbn\\\\nF0131 09:01:34.674068 6272 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stoppe\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-r8pc4_openshift-ovn-kubernetes(159b9801-57e3-4cf0-9b81-10aacb5eef83)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:46Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.456073 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44acb8ed-5840-46fa-9ba1-1b89653e1478\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07cae4ce61629c9f8e48863d0775cf4fed46422db85ba8b29477e098b697fb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ac3b3a214c6bca20d7fdc92a49647dfdaf8de4391f331890f74900ab7eca11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7vq99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:46Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.471206 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:46Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.489098 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:46Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.505547 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:46Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.517846 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:46Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.530116 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5kl8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5kl8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:46Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.537080 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.537264 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.537350 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.537440 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.537525 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:46Z","lastTransitionTime":"2026-01-31T09:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.640711 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.640766 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.640778 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.640795 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.640807 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:46Z","lastTransitionTime":"2026-01-31T09:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.743141 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.743192 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.743204 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.743223 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.743237 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:46Z","lastTransitionTime":"2026-01-31T09:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.846740 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.847205 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.847317 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.847405 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.847518 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:46Z","lastTransitionTime":"2026-01-31T09:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.951016 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.951064 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.951076 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.951096 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:46 crc kubenswrapper[4830]: I0131 09:01:46.951131 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:46Z","lastTransitionTime":"2026-01-31T09:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.054365 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.054411 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.054419 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.054437 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.054448 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:47Z","lastTransitionTime":"2026-01-31T09:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.157996 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.158076 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.158094 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.158120 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.158140 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:47Z","lastTransitionTime":"2026-01-31T09:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.231396 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 01:46:39.443572469 +0000 UTC Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.260957 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.261016 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.261027 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.261045 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.261056 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:47Z","lastTransitionTime":"2026-01-31T09:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.363919 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.363977 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.363990 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.364008 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.364024 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:47Z","lastTransitionTime":"2026-01-31T09:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.467126 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.467210 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.467224 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.467245 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.467261 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:47Z","lastTransitionTime":"2026-01-31T09:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.569653 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.569698 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.569709 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.569748 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.569761 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:47Z","lastTransitionTime":"2026-01-31T09:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.671907 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.671980 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.671990 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.672009 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.672022 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:47Z","lastTransitionTime":"2026-01-31T09:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.774973 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.775097 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.775122 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.775154 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.775178 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:47Z","lastTransitionTime":"2026-01-31T09:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.905527 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.905576 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.905586 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.905605 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:47 crc kubenswrapper[4830]: I0131 09:01:47.905619 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:47Z","lastTransitionTime":"2026-01-31T09:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.008989 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.009049 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.009065 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.009088 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.009101 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:48Z","lastTransitionTime":"2026-01-31T09:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.105680 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:48 crc kubenswrapper[4830]: E0131 09:01:48.105851 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:02:20.105828231 +0000 UTC m=+84.599190663 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.105915 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.105984 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.106041 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.106090 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:48 crc kubenswrapper[4830]: E0131 09:01:48.106100 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:48 crc kubenswrapper[4830]: E0131 09:01:48.106160 4830 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:48 crc kubenswrapper[4830]: E0131 
09:01:48.106193 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:48 crc kubenswrapper[4830]: E0131 09:01:48.106132 4830 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:01:48 crc kubenswrapper[4830]: E0131 09:01:48.106220 4830 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:48 crc kubenswrapper[4830]: E0131 09:01:48.106223 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:48 crc kubenswrapper[4830]: E0131 09:01:48.106238 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:02:20.106228553 +0000 UTC m=+84.599590995 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:01:48 crc kubenswrapper[4830]: E0131 09:01:48.106242 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:48 crc kubenswrapper[4830]: E0131 09:01:48.106253 4830 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:48 crc kubenswrapper[4830]: E0131 09:01:48.106270 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:02:20.106244503 +0000 UTC m=+84.599606975 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:48 crc kubenswrapper[4830]: E0131 09:01:48.106304 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 09:02:20.106287274 +0000 UTC m=+84.599649756 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:48 crc kubenswrapper[4830]: E0131 09:01:48.106407 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 09:02:20.106334156 +0000 UTC m=+84.599696638 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.112077 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.112116 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.112125 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.112141 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.112156 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:48Z","lastTransitionTime":"2026-01-31T09:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.214791 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.214846 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.214856 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.214870 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.214879 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:48Z","lastTransitionTime":"2026-01-31T09:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
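Two distinct failure modes are interleaved in the volume errors above. The UnmountVolume.TearDown failure (09:01:48.105851) occurs because the CSI driver kubevirt.io.hostpath-provisioner has not yet re-registered with the kubelet after the restart, while the MountVolume.SetUp failures occur because the kubelet's object cache does not yet hold the kube-root-ca.crt and openshift-service-ca.crt configmaps ("not registered") that back the projected service-account tokens. In each case nestedpendingoperations schedules a retry; the 32s durationBeforeRetry is consistent with a sub-second base delay doubled on every prior failure, though that is an inference, not something the log states. A minimal sketch of pulling the volume name, retry time, and backoff out of one of these entries (Python assumed available; the sample string is copied from the log above):

    import re

    # Format as emitted by nestedpendingoperations.go:348 in the entries above.
    err = ('Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-'
           'c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 '
           'nodeName:}" failed. No retries permitted until 2026-01-31 09:02:20.106334156 '
           '+0000 UTC m=+84.599696638 (durationBeforeRetry 32s).')

    # Extract the volume, the earliest retry time, and the current backoff step.
    m = re.search(r'volumeName:(\S+).*No retries permitted until ([0-9-]+ [0-9:.]+).*'
                  r'\(durationBeforeRetry (\w+)\)', err)
    volume, retry_at, backoff = m.groups()
    print(f'{volume} blocked until {retry_at} (backoff {backoff})')

Run over a whole journal capture, a loop like this makes it easy to see which volumes are still backing off and which have recovered.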
Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.231926 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 10:19:52.139331663 +0000 UTC Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.251956 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.252159 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.252074 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.251968 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:48 crc kubenswrapper[4830]: E0131 09:01:48.252420 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:01:48 crc kubenswrapper[4830]: E0131 09:01:48.252526 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:48 crc kubenswrapper[4830]: E0131 09:01:48.252625 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:48 crc kubenswrapper[4830]: E0131 09:01:48.252706 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
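Every sandbox-creation attempt above is skipped for the same root cause the node condition reports: there is no CNI configuration file in /etc/kubernetes/cni/net.d/, so these four pods stay stuck until the network provider (apparently Multus/OVN-Kubernetes, given the pod names in this log) writes one. A minimal watch loop for that directory, a sketch only (Python assumed present on the host; the path is taken from the error text and the .conf/.conflist suffixes are the conventional CNI ones):

    import os, time

    CNI_DIR = '/etc/kubernetes/cni/net.d'  # directory named in the kubelet errors above

    # Once the network provider drops a config here, the kubelet clears
    # NetworkPluginNotReady and sandbox creation resumes.
    while True:
        try:
            confs = [f for f in os.listdir(CNI_DIR) if f.endswith(('.conf', '.conflist'))]
        except FileNotFoundError:
            confs = []
        if confs:
            print('CNI configuration present:', confs)
            break
        time.sleep(5)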
Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.317785 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.317839 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.317850 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.317870 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.317882 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:48Z","lastTransitionTime":"2026-01-31T09:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.421927 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.422004 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.422016 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.422038 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.422056 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:48Z","lastTransitionTime":"2026-01-31T09:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
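The five-entry blocks that repeat roughly every 100 ms throughout this capture are the kubelet's node-status sync loop re-recording the same events while the Ready condition stays False; only the microsecond timestamps advance. The condition= payload is plain JSON, so the churn can be deduplicated mechanically. A sketch (the sample line is one of the setters.go:603 entries above, with the long message shortened here for brevity):

    import json, re

    # One "Node became not ready" entry as logged by setters.go:603 above.
    entry = ('I0131 09:01:48.422056 4830 setters.go:603] "Node became not ready" node="crc" '
             'condition={"type":"Ready","status":"False",'
             '"lastHeartbeatTime":"2026-01-31T09:01:48Z",'
             '"lastTransitionTime":"2026-01-31T09:01:48Z","reason":"KubeletNotReady",'
             '"message":"container runtime network not ready"}')

    # The condition object is valid JSON embedded in the log line.
    cond = json.loads(re.search(r'condition=(\{.*\})', entry).group(1))
    print(cond['reason'], '->', cond['message'])

Grouping lines by the parsed (reason, message) pair collapses hundreds of these records into one row per distinct cause.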
Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.524288 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.524361 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.524379 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.524404 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.524415 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:48Z","lastTransitionTime":"2026-01-31T09:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.628243 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.628326 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.628335 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.628354 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.628371 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:48Z","lastTransitionTime":"2026-01-31T09:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.732016 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.732080 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.732097 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.732121 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.732141 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:48Z","lastTransitionTime":"2026-01-31T09:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.834851 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.834902 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.834916 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.834936 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.834948 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:48Z","lastTransitionTime":"2026-01-31T09:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.938446 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.938544 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.938554 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.938571 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:48 crc kubenswrapper[4830]: I0131 09:01:48.938583 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:48Z","lastTransitionTime":"2026-01-31T09:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.041658 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.041756 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.041771 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.041790 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.041801 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:49Z","lastTransitionTime":"2026-01-31T09:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.145255 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.145291 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.145302 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.145317 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.145326 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:49Z","lastTransitionTime":"2026-01-31T09:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.232354 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 12:03:49.264802666 +0000 UTC Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.247494 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.247538 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.247549 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.248010 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.248025 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:49Z","lastTransitionTime":"2026-01-31T09:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.350984 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.351028 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.351040 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.351058 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.351071 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:49Z","lastTransitionTime":"2026-01-31T09:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.453965 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.454034 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.454048 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.454075 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.454088 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:49Z","lastTransitionTime":"2026-01-31T09:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.557052 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.557113 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.557123 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.557140 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.557150 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:49Z","lastTransitionTime":"2026-01-31T09:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.660705 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.660774 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.660786 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.660801 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.660811 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:49Z","lastTransitionTime":"2026-01-31T09:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.763933 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.764010 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.764021 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.764038 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.764069 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:49Z","lastTransitionTime":"2026-01-31T09:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.866655 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.866708 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.866737 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.866758 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.866769 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:49Z","lastTransitionTime":"2026-01-31T09:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.969841 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.969907 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.969921 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.969937 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:49 crc kubenswrapper[4830]: I0131 09:01:49.970004 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:49Z","lastTransitionTime":"2026-01-31T09:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.072410 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.073312 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.073398 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.073490 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.073596 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:50Z","lastTransitionTime":"2026-01-31T09:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.176898 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.176947 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.176958 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.176982 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.176996 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:50Z","lastTransitionTime":"2026-01-31T09:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.232937 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 22:59:31.032146233 +0000 UTC Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.250512 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.250517 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.250793 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:50 crc kubenswrapper[4830]: E0131 09:01:50.250867 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:01:50 crc kubenswrapper[4830]: E0131 09:01:50.250989 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:50 crc kubenswrapper[4830]: E0131 09:01:50.250648 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.251167 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:50 crc kubenswrapper[4830]: E0131 09:01:50.251321 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.280253 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.280302 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.280312 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.280328 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.280339 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:50Z","lastTransitionTime":"2026-01-31T09:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.383395 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.383880 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.384158 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.384530 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.384720 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:50Z","lastTransitionTime":"2026-01-31T09:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.474655 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.487179 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.488379 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.488429 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.488440 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.488460 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.488480 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:50Z","lastTransitionTime":"2026-01-31T09:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.495674 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:50Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.510218 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T09:01:50Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.529617 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:50Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.546868 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59f81b73056481d4e6eb23c2a98c3c088b5255b82cd28e0cad0ac2a9b271cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:50Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.564275 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:50Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.584221 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:50Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.590609 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.590670 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.590686 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.590710 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.590751 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:50Z","lastTransitionTime":"2026-01-31T09:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.601007 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:50Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.616976 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:50Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.630018 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:50Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.652511 4830 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:50Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.670514 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed906c96861bf8d2af425b975d67a4b298930cd75ababc2d621541adaa7b2ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed906c96861bf8d2af425b975d67a4b298930cd75ababc2d621541adaa7b2ba2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"message\\\":\\\"objects: [openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-zt78q openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-network-node-identity/network-node-identity-vrzqb openshift-dns/node-resolver-pmbpr openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-machine-config-operator/machine-config-daemon-gt7kd openshift-multus/multus-additional-cni-plugins-x27jw openshift-multus/multus-cjqbn]\\\\nI0131 09:01:34.674011 6272 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0131 09:01:34.674042 6272 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-cjqbn\\\\nI0131 09:01:34.674056 6272 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-cjqbn\\\\nF0131 09:01:34.674068 6272 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stoppe\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-r8pc4_openshift-ovn-kubernetes(159b9801-57e3-4cf0-9b81-10aacb5eef83)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:50Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.682902 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44acb8ed-5840-46fa-9ba1-1b89653e1478\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07cae4ce61629c9f8e48863d0775cf4fed46422db85ba8b29477e098b697fb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ac3b3a214c6bca20d7fdc92a49647dfdaf8de4391f331890f74900ab7eca11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7vq99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:50Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.700036 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.700095 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.700108 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.700131 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.700149 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:50Z","lastTransitionTime":"2026-01-31T09:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.705938 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:50Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.719641 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:50Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.731802 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:50Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.743540 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5kl8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5kl8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:50Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.804123 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.804186 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.804199 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.804221 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.804244 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:50Z","lastTransitionTime":"2026-01-31T09:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.907215 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.907269 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.907281 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.907305 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:50 crc kubenswrapper[4830]: I0131 09:01:50.907324 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:50Z","lastTransitionTime":"2026-01-31T09:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.010174 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.010230 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.010248 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.010272 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.010290 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:51Z","lastTransitionTime":"2026-01-31T09:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.113391 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.113453 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.113465 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.113485 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.113498 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:51Z","lastTransitionTime":"2026-01-31T09:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.219225 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.219310 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.219326 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.219351 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.219366 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:51Z","lastTransitionTime":"2026-01-31T09:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.233627 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 14:49:17.350719347 +0000 UTC Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.331157 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.331235 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.331249 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.331277 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.331292 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:51Z","lastTransitionTime":"2026-01-31T09:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.434687 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.434777 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.434789 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.434812 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.434825 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:51Z","lastTransitionTime":"2026-01-31T09:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.538298 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.538593 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.538661 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.538760 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.538835 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:51Z","lastTransitionTime":"2026-01-31T09:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.641599 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.641662 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.641680 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.641710 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.641761 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:51Z","lastTransitionTime":"2026-01-31T09:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.744739 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.744798 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.744812 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.744834 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.744848 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:51Z","lastTransitionTime":"2026-01-31T09:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.847674 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.847716 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.847746 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.847766 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.847782 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:51Z","lastTransitionTime":"2026-01-31T09:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.950467 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.950498 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.950506 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.950522 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:51 crc kubenswrapper[4830]: I0131 09:01:51.950531 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:51Z","lastTransitionTime":"2026-01-31T09:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.053686 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.053764 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.053776 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.053797 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.053811 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:52Z","lastTransitionTime":"2026-01-31T09:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.156519 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.156578 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.156601 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.156628 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.156646 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:52Z","lastTransitionTime":"2026-01-31T09:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.234570 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 06:03:31.449072609 +0000 UTC Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.251133 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.251226 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.251167 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:52 crc kubenswrapper[4830]: E0131 09:01:52.251371 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:52 crc kubenswrapper[4830]: E0131 09:01:52.251510 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:52 crc kubenswrapper[4830]: E0131 09:01:52.251601 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.252069 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:01:52 crc kubenswrapper[4830]: E0131 09:01:52.252287 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.260149 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.260186 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.260202 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.260224 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.260242 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:52Z","lastTransitionTime":"2026-01-31T09:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.362864 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.362930 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.362968 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.363005 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.363033 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:52Z","lastTransitionTime":"2026-01-31T09:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.465606 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.465666 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.465678 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.465699 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.465712 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:52Z","lastTransitionTime":"2026-01-31T09:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.492925 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.492968 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.492979 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.492998 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.493023 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:52Z","lastTransitionTime":"2026-01-31T09:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:52 crc kubenswrapper[4830]: E0131 09:01:52.508423 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.513025 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.513071 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.513089 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.513111 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.513127 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:52Z","lastTransitionTime":"2026-01-31T09:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:52 crc kubenswrapper[4830]: E0131 09:01:52.527635 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.531531 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.531565 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.531577 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.531595 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.531609 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:52Z","lastTransitionTime":"2026-01-31T09:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:52 crc kubenswrapper[4830]: E0131 09:01:52.545442 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.556856 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.556930 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.556943 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.556989 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.557002 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:52Z","lastTransitionTime":"2026-01-31T09:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:52 crc kubenswrapper[4830]: E0131 09:01:52.579321 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.583782 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.583833 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.583850 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.583871 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.583886 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:52Z","lastTransitionTime":"2026-01-31T09:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:52 crc kubenswrapper[4830]: E0131 09:01:52.597432 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4830]: E0131 09:01:52.597549 4830 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.599524 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.599565 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.599578 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.599598 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.599610 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:52Z","lastTransitionTime":"2026-01-31T09:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.703004 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.703050 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.703061 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.703079 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.703093 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:52Z","lastTransitionTime":"2026-01-31T09:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.758451 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs\") pod \"network-metrics-daemon-5kl8z\" (UID: \"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\") " pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:01:52 crc kubenswrapper[4830]: E0131 09:01:52.758643 4830 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:01:52 crc kubenswrapper[4830]: E0131 09:01:52.758749 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs podName:c1fa30e4-0c03-43ab-9c37-f7ec86153b27 nodeName:}" failed. No retries permitted until 2026-01-31 09:02:08.758700432 +0000 UTC m=+73.252062884 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs") pod "network-metrics-daemon-5kl8z" (UID: "c1fa30e4-0c03-43ab-9c37-f7ec86153b27") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.806333 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.806380 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.806394 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.806413 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.806426 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:52Z","lastTransitionTime":"2026-01-31T09:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.909062 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.909122 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.909133 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.909151 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:52 crc kubenswrapper[4830]: I0131 09:01:52.909190 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:52Z","lastTransitionTime":"2026-01-31T09:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.011459 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.011504 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.011517 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.011535 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.011547 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:53Z","lastTransitionTime":"2026-01-31T09:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.114766 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.114814 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.114823 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.114839 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.114850 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:53Z","lastTransitionTime":"2026-01-31T09:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.217807 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.217872 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.217890 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.217916 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.217935 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:53Z","lastTransitionTime":"2026-01-31T09:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.235242 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 19:08:05.203236895 +0000 UTC Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.322494 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.322575 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.322589 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.322610 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.322623 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:53Z","lastTransitionTime":"2026-01-31T09:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.425227 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.425300 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.425319 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.425347 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.425393 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:53Z","lastTransitionTime":"2026-01-31T09:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.528720 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.528858 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.528891 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.528928 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.528954 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:53Z","lastTransitionTime":"2026-01-31T09:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.631288 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.631347 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.631360 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.631380 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.631395 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:53Z","lastTransitionTime":"2026-01-31T09:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.735061 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.735111 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.735132 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.735151 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.735164 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:53Z","lastTransitionTime":"2026-01-31T09:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.837837 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.837915 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.837932 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.837956 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.837971 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:53Z","lastTransitionTime":"2026-01-31T09:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.940773 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.940819 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.940831 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.940851 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:53 crc kubenswrapper[4830]: I0131 09:01:53.940865 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:53Z","lastTransitionTime":"2026-01-31T09:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.043933 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.043997 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.044010 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.044033 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.044046 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:54Z","lastTransitionTime":"2026-01-31T09:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.147391 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.147475 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.147502 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.147534 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.147556 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:54Z","lastTransitionTime":"2026-01-31T09:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.236235 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 16:05:20.65354582 +0000 UTC Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.250295 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.250327 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.250423 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.250472 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.250486 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:54 crc kubenswrapper[4830]: E0131 09:01:54.250437 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.250518 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.250569 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:54 crc kubenswrapper[4830]: E0131 09:01:54.250694 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:01:54 crc kubenswrapper[4830]: E0131 09:01:54.250852 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.250938 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:54 crc kubenswrapper[4830]: E0131 09:01:54.251045 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.250592 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:54Z","lastTransitionTime":"2026-01-31T09:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.354609 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.354679 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.354695 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.354756 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.354772 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:54Z","lastTransitionTime":"2026-01-31T09:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.458823 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.458884 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.458899 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.458918 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.458931 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:54Z","lastTransitionTime":"2026-01-31T09:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.561553 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.561605 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.561621 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.561642 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.561655 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:54Z","lastTransitionTime":"2026-01-31T09:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.664827 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.664886 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.664901 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.664924 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.664941 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:54Z","lastTransitionTime":"2026-01-31T09:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.768611 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.768797 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.768830 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.769529 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.769562 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:54Z","lastTransitionTime":"2026-01-31T09:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.873388 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.873462 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.873475 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.873499 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.873512 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:54Z","lastTransitionTime":"2026-01-31T09:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.977435 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.977493 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.977510 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.977540 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:54 crc kubenswrapper[4830]: I0131 09:01:54.977560 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:54Z","lastTransitionTime":"2026-01-31T09:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.080847 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.080889 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.080898 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.080914 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.080924 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:55Z","lastTransitionTime":"2026-01-31T09:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.183626 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.183787 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.183809 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.183837 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.183856 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:55Z","lastTransitionTime":"2026-01-31T09:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.237464 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 13:11:47.526873201 +0000 UTC Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.286829 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.286881 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.286893 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.286913 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.286926 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:55Z","lastTransitionTime":"2026-01-31T09:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.389435 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.389524 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.389552 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.389586 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.389606 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:55Z","lastTransitionTime":"2026-01-31T09:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.493140 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.493226 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.493245 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.493272 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.493282 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:55Z","lastTransitionTime":"2026-01-31T09:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.596004 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.596060 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.596070 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.596089 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.596103 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:55Z","lastTransitionTime":"2026-01-31T09:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.698342 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.698392 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.698405 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.698425 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.698437 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:55Z","lastTransitionTime":"2026-01-31T09:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.801053 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.801105 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.801118 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.801139 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.801158 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:55Z","lastTransitionTime":"2026-01-31T09:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.904150 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.904209 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.904221 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.904241 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:55 crc kubenswrapper[4830]: I0131 09:01:55.904254 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:55Z","lastTransitionTime":"2026-01-31T09:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.006856 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.006916 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.006931 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.006959 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.006977 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:56Z","lastTransitionTime":"2026-01-31T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.110177 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.110291 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.110314 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.110377 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.110395 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:56Z","lastTransitionTime":"2026-01-31T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.213706 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.213786 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.213798 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.213817 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.213831 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:56Z","lastTransitionTime":"2026-01-31T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.239888 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 21:50:57.728914075 +0000 UTC Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.250601 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.250752 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.251235 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.250852 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:01:56 crc kubenswrapper[4830]: E0131 09:01:56.251370 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:56 crc kubenswrapper[4830]: E0131 09:01:56.251463 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:01:56 crc kubenswrapper[4830]: E0131 09:01:56.251554 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:56 crc kubenswrapper[4830]: E0131 09:01:56.252008 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.252040 4830 scope.go:117] "RemoveContainer" containerID="ed906c96861bf8d2af425b975d67a4b298930cd75ababc2d621541adaa7b2ba2" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.280864 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.303706 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed906c96861bf8d2af425b975d67a4b298930cd75ababc2d621541adaa7b2ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed906c96861bf8d2af425b975d67a4b298930cd75ababc2d621541adaa7b2ba2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"message\\\":\\\"objects: [openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-zt78q openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-network-node-identity/network-node-identity-vrzqb openshift-dns/node-resolver-pmbpr openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-machine-config-operator/machine-config-daemon-gt7kd openshift-multus/multus-additional-cni-plugins-x27jw openshift-multus/multus-cjqbn]\\\\nI0131 09:01:34.674011 6272 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0131 09:01:34.674042 6272 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-cjqbn\\\\nI0131 09:01:34.674056 6272 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-cjqbn\\\\nF0131 09:01:34.674068 6272 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stoppe\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-r8pc4_openshift-ovn-kubernetes(159b9801-57e3-4cf0-9b81-10aacb5eef83)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.316559 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44acb8ed-5840-46fa-9ba1-1b89653e1478\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07cae4ce61629c9f8e48863d0775cf4fed46422db85ba8b29477e098b697fb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ac3b3a214c6bca20d7fdc92a49647dfdaf8de4391f331890f74900ab7eca11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7vq99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.317772 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.317843 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.317860 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.317882 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.317895 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:56Z","lastTransitionTime":"2026-01-31T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.335150 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.348600 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.362708 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.376974 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5kl8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5kl8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.390639 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.399912 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.410743 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\"
:\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.420312 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.420360 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.420376 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.420399 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.420415 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:56Z","lastTransitionTime":"2026-01-31T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.427196 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59f81b73056481d4e6eb23c2a98c3c088b5255b82cd28e0cad0ac2a9b271cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.440819 4830 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 
09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.454562 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026d8790-dc0a-472e-953a-66afc0fcd6e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dc96f3d1e085f925a6a1b73ef1312bd85072065059f20eb6c11f7d044635f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f1cea3266a97316fb0737cb770f6da2abfd58b016987b92c19aa20a9366129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c457892625099d1b14d857643ba5c70e76cfe582ee31c1b8736f4e278557ab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.468613 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.481995 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.494538 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.507196 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.523249 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.523297 4830 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.523308 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.523326 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.523337 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:56Z","lastTransitionTime":"2026-01-31T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.624412 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r8pc4_159b9801-57e3-4cf0-9b81-10aacb5eef83/ovnkube-controller/1.log" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.625956 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.625978 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.625986 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.626001 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.626011 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:56Z","lastTransitionTime":"2026-01-31T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.641477 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerStarted","Data":"3a83288b35d051a945187b06a0f1e8f61aec52b6343034cc2f57354b61a9309b"} Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.668568 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.688089 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.706299 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.729589 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.729638 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.729649 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.729667 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.729679 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:56Z","lastTransitionTime":"2026-01-31T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.730743 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.759556 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.775885 4830 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026d8790-dc0a-472e-953a-66afc0fcd6e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dc96f3d1e085f925a6a1b73ef1312bd85072065059f20eb6c11f7d044635f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f1cea3266a97316fb0737cb770f6da2abfd58b016987b92c19aa20a9366129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c457892625099d1b14d857643ba5c70e76cfe582ee31c1b8736f4e278557ab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.794454 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a83288b35d051a945187b06a0f1e8f61aec52b6
343034cc2f57354b61a9309b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed906c96861bf8d2af425b975d67a4b298930cd75ababc2d621541adaa7b2ba2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"message\\\":\\\"objects: [openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-zt78q openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-network-node-identity/network-node-identity-vrzqb openshift-dns/node-resolver-pmbpr openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-machine-config-operator/machine-config-daemon-gt7kd openshift-multus/multus-additional-cni-plugins-x27jw openshift-multus/multus-cjqbn]\\\\nI0131 09:01:34.674011 6272 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0131 09:01:34.674042 6272 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-cjqbn\\\\nI0131 09:01:34.674056 6272 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-cjqbn\\\\nF0131 09:01:34.674068 6272 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has 
stoppe\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.805907 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44acb8ed-5840-46fa-9ba1-1b89653e1478\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07cae4ce61629c9f8e48863d0775cf4fed46422db85ba8b29477e098b697fb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ac3b3a214c6bca20d7fdc92a49647dfdaf8de4391f331890f74900ab7eca11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7vq99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 
09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.818238 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.832284 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.832321 4830 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.832331 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.832368 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.832383 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:56Z","lastTransitionTime":"2026-01-31T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.833468 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.848863 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.860677 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.872431 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5kl8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5kl8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.886274 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.901991 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.916082 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59f81b73056481d4e6eb23c2a98c3c088b5255b82cd28e0cad0ac2a9b271cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.927837 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.934543 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.934577 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.934587 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.934602 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:56 crc kubenswrapper[4830]: I0131 09:01:56.934613 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:56Z","lastTransitionTime":"2026-01-31T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.037526 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.037584 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.037614 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.037667 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.037689 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:57Z","lastTransitionTime":"2026-01-31T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.141024 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.141070 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.141079 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.141095 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.141117 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:57Z","lastTransitionTime":"2026-01-31T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.241040 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 03:40:09.745006353 +0000 UTC Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.243920 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.243975 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.243986 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.244012 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.244022 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:57Z","lastTransitionTime":"2026-01-31T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.347422 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.347486 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.347498 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.347518 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.347532 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:57Z","lastTransitionTime":"2026-01-31T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.450765 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.450821 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.450836 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.450859 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.450875 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:57Z","lastTransitionTime":"2026-01-31T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.554191 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.554262 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.554276 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.554295 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.554308 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:57Z","lastTransitionTime":"2026-01-31T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.647383 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r8pc4_159b9801-57e3-4cf0-9b81-10aacb5eef83/ovnkube-controller/2.log" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.648503 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r8pc4_159b9801-57e3-4cf0-9b81-10aacb5eef83/ovnkube-controller/1.log" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.651812 4830 generic.go:334] "Generic (PLEG): container finished" podID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerID="3a83288b35d051a945187b06a0f1e8f61aec52b6343034cc2f57354b61a9309b" exitCode=1 Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.651874 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerDied","Data":"3a83288b35d051a945187b06a0f1e8f61aec52b6343034cc2f57354b61a9309b"} Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.651914 4830 scope.go:117] "RemoveContainer" containerID="ed906c96861bf8d2af425b975d67a4b298930cd75ababc2d621541adaa7b2ba2" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.656667 4830 scope.go:117] "RemoveContainer" containerID="3a83288b35d051a945187b06a0f1e8f61aec52b6343034cc2f57354b61a9309b" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.656886 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.656963 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.657008 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.657054 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.657099 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:57Z","lastTransitionTime":"2026-01-31T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:57 crc kubenswrapper[4830]: E0131 09:01:57.657517 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-r8pc4_openshift-ovn-kubernetes(159b9801-57e3-4cf0-9b81-10aacb5eef83)\"" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.679501 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver
-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.697053 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.709998 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.726935 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5kl8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5kl8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.741393 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.750513 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.759752 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.759800 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.759813 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.759837 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.759851 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:57Z","lastTransitionTime":"2026-01-31T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.763778 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.778391 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59f81b73056481d4e6eb23c2a98c3c088b5255b82cd28e0cad0ac2a9b271cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.789147 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026d8790-dc0a-472e-953a-66afc0fcd6e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dc96f3d1e085f925a6a1b73ef1312bd85072065059f20eb6c11f7d044635f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f1cea3266a97316fb0737cb770f6da2abfd58b016987b92c19aa20a9366129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c457892625099d1b14d857643ba5c70e76cfe582ee31c1b8736f4e278557ab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.802542 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7
cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.818228 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.831115 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.844421 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.856960 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.862317 4830 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.862370 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.862383 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.862401 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.862415 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:57Z","lastTransitionTime":"2026-01-31T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.870877 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.891528 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a83288b35d051a945187b06a0f1e8f61aec52b6
343034cc2f57354b61a9309b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed906c96861bf8d2af425b975d67a4b298930cd75ababc2d621541adaa7b2ba2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"message\\\":\\\"objects: [openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-zt78q openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-network-node-identity/network-node-identity-vrzqb openshift-dns/node-resolver-pmbpr openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-machine-config-operator/machine-config-daemon-gt7kd openshift-multus/multus-additional-cni-plugins-x27jw openshift-multus/multus-cjqbn]\\\\nI0131 09:01:34.674011 6272 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0131 09:01:34.674042 6272 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-cjqbn\\\\nI0131 09:01:34.674056 6272 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-cjqbn\\\\nF0131 09:01:34.674068 6272 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stoppe\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a83288b35d051a945187b06a0f1e8f61aec52b6343034cc2f57354b61a9309b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:57Z\\\",\\\"message\\\":\\\"flector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.209441 6558 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.209613 6558 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.210444 6558 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.210672 6558 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.211135 6558 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.211342 6558 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.212161 6558 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.212266 6558 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-
o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.903302 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44acb8ed-5840-46fa-9ba1-1b89653e1478\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07cae4ce61629c9f8e48863d0775cf4fed46422db85ba8b29477e098b697fb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ac3b3a214c6bca20d7fdc92a49647dfdaf8de4391f331890f74900ab7eca11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7vq99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 
09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.965526 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.965570 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.965583 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.965603 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:57 crc kubenswrapper[4830]: I0131 09:01:57.965614 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:57Z","lastTransitionTime":"2026-01-31T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.069168 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.069214 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.069226 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.069245 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.069256 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:58Z","lastTransitionTime":"2026-01-31T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.172098 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.172174 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.172184 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.172203 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.172217 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:58Z","lastTransitionTime":"2026-01-31T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.241916 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 13:27:04.689283871 +0000 UTC Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.251399 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.251447 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.251490 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.251518 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:58 crc kubenswrapper[4830]: E0131 09:01:58.251611 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:01:58 crc kubenswrapper[4830]: E0131 09:01:58.251819 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:58 crc kubenswrapper[4830]: E0131 09:01:58.251957 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:58 crc kubenswrapper[4830]: E0131 09:01:58.252114 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.274768 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.274825 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.274841 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.274864 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.274881 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:58Z","lastTransitionTime":"2026-01-31T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.378487 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.378533 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.378545 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.378564 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.378579 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:58Z","lastTransitionTime":"2026-01-31T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.481853 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.481921 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.481938 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.482015 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.482038 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:58Z","lastTransitionTime":"2026-01-31T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.584777 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.584821 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.584832 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.584849 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.584860 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:58Z","lastTransitionTime":"2026-01-31T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.656107 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r8pc4_159b9801-57e3-4cf0-9b81-10aacb5eef83/ovnkube-controller/2.log" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.686402 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.686450 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.686469 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.686486 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.686499 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:58Z","lastTransitionTime":"2026-01-31T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.789191 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.789275 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.789292 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.789308 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.789320 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:58Z","lastTransitionTime":"2026-01-31T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.892041 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.892099 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.892118 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.892145 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.892158 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:58Z","lastTransitionTime":"2026-01-31T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.995173 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.995225 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.995234 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.995250 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:58 crc kubenswrapper[4830]: I0131 09:01:58.995263 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:58Z","lastTransitionTime":"2026-01-31T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.098262 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.098301 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.098309 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.098325 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.098335 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:59Z","lastTransitionTime":"2026-01-31T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.202866 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.202958 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.202996 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.203030 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.203050 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:59Z","lastTransitionTime":"2026-01-31T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.242804 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 13:50:13.848347745 +0000 UTC Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.306880 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.306933 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.306944 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.306962 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.306972 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:59Z","lastTransitionTime":"2026-01-31T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.409894 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.409948 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.409960 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.409978 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.409989 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:59Z","lastTransitionTime":"2026-01-31T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.512626 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.512676 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.512687 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.512706 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.512736 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:59Z","lastTransitionTime":"2026-01-31T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.615367 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.615404 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.615412 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.615428 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.615440 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:59Z","lastTransitionTime":"2026-01-31T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.717898 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.717953 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.717966 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.717985 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.717999 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:59Z","lastTransitionTime":"2026-01-31T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.820537 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.820597 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.820611 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.820631 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.820644 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:59Z","lastTransitionTime":"2026-01-31T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.922848 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.922882 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.922892 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.922909 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:59 crc kubenswrapper[4830]: I0131 09:01:59.922923 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:59Z","lastTransitionTime":"2026-01-31T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.026222 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.026268 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.026280 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.026299 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.026312 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:00Z","lastTransitionTime":"2026-01-31T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.129555 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.129601 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.129612 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.129629 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.129640 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:00Z","lastTransitionTime":"2026-01-31T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.232352 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.232390 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.232399 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.232415 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.232425 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:00Z","lastTransitionTime":"2026-01-31T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.243536 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 08:19:38.864210251 +0000 UTC Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.250913 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.250972 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.251012 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:00 crc kubenswrapper[4830]: E0131 09:02:00.251091 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.251163 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:00 crc kubenswrapper[4830]: E0131 09:02:00.251260 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:00 crc kubenswrapper[4830]: E0131 09:02:00.251376 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:00 crc kubenswrapper[4830]: E0131 09:02:00.251596 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.335251 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.335315 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.335324 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.335341 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.335355 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:00Z","lastTransitionTime":"2026-01-31T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.439110 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.439160 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.439170 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.439187 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.439221 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:00Z","lastTransitionTime":"2026-01-31T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.541949 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.542027 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.542039 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.542061 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.542075 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:00Z","lastTransitionTime":"2026-01-31T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.644821 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.644853 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.644865 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.644883 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.644896 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:00Z","lastTransitionTime":"2026-01-31T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.747673 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.747762 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.747781 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.747801 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.747815 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:00Z","lastTransitionTime":"2026-01-31T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.850754 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.850801 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.850827 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.850844 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.850853 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:00Z","lastTransitionTime":"2026-01-31T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.953931 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.953992 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.954011 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.954031 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:00 crc kubenswrapper[4830]: I0131 09:02:00.954041 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:00Z","lastTransitionTime":"2026-01-31T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.057076 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.057124 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.057139 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.057159 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.057169 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:01Z","lastTransitionTime":"2026-01-31T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.159321 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.159369 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.159388 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.159408 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.159421 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:01Z","lastTransitionTime":"2026-01-31T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.244336 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 09:37:09.006253501 +0000 UTC Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.262360 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.262406 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.262419 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.262437 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.262450 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:01Z","lastTransitionTime":"2026-01-31T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.364589 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.364651 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.364664 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.364687 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.364706 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:01Z","lastTransitionTime":"2026-01-31T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.467663 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.467698 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.467705 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.467742 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.467751 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:01Z","lastTransitionTime":"2026-01-31T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.570229 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.570287 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.570305 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.570325 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.570411 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:01Z","lastTransitionTime":"2026-01-31T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.673683 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.674160 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.674174 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.674193 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.674212 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:01Z","lastTransitionTime":"2026-01-31T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.777717 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.777764 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.777773 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.777788 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.777797 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:01Z","lastTransitionTime":"2026-01-31T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.880464 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.880529 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.880541 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.880563 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.880576 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:01Z","lastTransitionTime":"2026-01-31T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.983491 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.983545 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.983556 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.983579 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:01 crc kubenswrapper[4830]: I0131 09:02:01.983590 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:01Z","lastTransitionTime":"2026-01-31T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.086571 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.086610 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.086621 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.086636 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.086646 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:02Z","lastTransitionTime":"2026-01-31T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.189643 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.189705 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.189759 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.189785 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.189818 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:02Z","lastTransitionTime":"2026-01-31T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.245429 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 15:19:21.928205686 +0000 UTC Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.250712 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.250846 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.250910 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:02 crc kubenswrapper[4830]: E0131 09:02:02.250896 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.251020 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:02 crc kubenswrapper[4830]: E0131 09:02:02.251057 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:02:02 crc kubenswrapper[4830]: E0131 09:02:02.251107 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:02 crc kubenswrapper[4830]: E0131 09:02:02.251240 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.292583 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.292624 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.292634 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.292651 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.292662 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:02Z","lastTransitionTime":"2026-01-31T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.395293 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.395346 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.395357 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.395378 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.395390 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:02Z","lastTransitionTime":"2026-01-31T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.498306 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.498362 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.498376 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.498395 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.498409 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:02Z","lastTransitionTime":"2026-01-31T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.604334 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.604395 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.604407 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.604425 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.604439 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:02Z","lastTransitionTime":"2026-01-31T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.707254 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.707293 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.707304 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.707320 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.707331 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:02Z","lastTransitionTime":"2026-01-31T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.809630 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.809669 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.809679 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.809696 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.809707 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:02Z","lastTransitionTime":"2026-01-31T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.912548 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.912596 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.912606 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.912623 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.912636 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:02Z","lastTransitionTime":"2026-01-31T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.949388 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.949442 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.949454 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.949475 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.949494 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:02Z","lastTransitionTime":"2026-01-31T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:02 crc kubenswrapper[4830]: E0131 09:02:02.964124 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:02Z is after 
2025-08-24T17:21:41Z" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.968199 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.968254 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.968267 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.968284 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.968293 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:02Z","lastTransitionTime":"2026-01-31T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.989148 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.989211 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.989228 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.989256 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:02 crc kubenswrapper[4830]: I0131 09:02:02.989272 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:02Z","lastTransitionTime":"2026-01-31T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.006834 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.006890 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.006902 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.006922 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.006939 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:03Z","lastTransitionTime":"2026-01-31T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.026385 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.026423 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.026435 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.026453 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.026463 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:03Z","lastTransitionTime":"2026-01-31T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:03 crc kubenswrapper[4830]: E0131 09:02:03.039323 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:03Z is after 
2025-08-24T17:21:41Z" Jan 31 09:02:03 crc kubenswrapper[4830]: E0131 09:02:03.039564 4830 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.041633 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.041687 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.041706 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.041762 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.041798 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:03Z","lastTransitionTime":"2026-01-31T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.145135 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.145204 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.145213 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.145233 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.145244 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:03Z","lastTransitionTime":"2026-01-31T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.245695 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 16:43:50.325983027 +0000 UTC Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.247904 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.247967 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.247986 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.248011 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.248026 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:03Z","lastTransitionTime":"2026-01-31T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.350665 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.350706 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.350716 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.350752 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.350763 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:03Z","lastTransitionTime":"2026-01-31T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.454105 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.454159 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.454171 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.454191 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.454204 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:03Z","lastTransitionTime":"2026-01-31T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.557152 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.557201 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.557212 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.557229 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.557243 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:03Z","lastTransitionTime":"2026-01-31T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.660545 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.660592 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.660605 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.660624 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.660641 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:03Z","lastTransitionTime":"2026-01-31T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.762982 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.763013 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.763022 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.763066 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.763078 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:03Z","lastTransitionTime":"2026-01-31T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.865291 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.865346 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.865356 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.865375 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.865388 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:03Z","lastTransitionTime":"2026-01-31T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.968481 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.968533 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.968549 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.968569 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:03 crc kubenswrapper[4830]: I0131 09:02:03.968583 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:03Z","lastTransitionTime":"2026-01-31T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.071837 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.071886 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.071896 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.071916 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.071926 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:04Z","lastTransitionTime":"2026-01-31T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.175016 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.175085 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.175096 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.175116 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.175128 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:04Z","lastTransitionTime":"2026-01-31T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.246900 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 06:34:36.726472459 +0000 UTC Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.251297 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.251355 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.251373 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.251472 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:04 crc kubenswrapper[4830]: E0131 09:02:04.251635 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:04 crc kubenswrapper[4830]: E0131 09:02:04.251961 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:04 crc kubenswrapper[4830]: E0131 09:02:04.252190 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:02:04 crc kubenswrapper[4830]: E0131 09:02:04.252473 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.278238 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.278284 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.278297 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.278316 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.278332 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:04Z","lastTransitionTime":"2026-01-31T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.382037 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.382098 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.382147 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.382173 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.382186 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:04Z","lastTransitionTime":"2026-01-31T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.485354 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.485413 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.485423 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.485443 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.485454 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:04Z","lastTransitionTime":"2026-01-31T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.588579 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.588621 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.588636 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.588654 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.588666 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:04Z","lastTransitionTime":"2026-01-31T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.691365 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.691451 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.691468 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.691493 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.691506 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:04Z","lastTransitionTime":"2026-01-31T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.794082 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.794145 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.794178 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.794213 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.794237 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:04Z","lastTransitionTime":"2026-01-31T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.897230 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.897284 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.897296 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.897328 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:04 crc kubenswrapper[4830]: I0131 09:02:04.897346 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:04Z","lastTransitionTime":"2026-01-31T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.000092 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.000153 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.000166 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.000187 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.000200 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:05Z","lastTransitionTime":"2026-01-31T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.103862 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.103910 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.103928 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.103954 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.103973 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:05Z","lastTransitionTime":"2026-01-31T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.207392 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.207450 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.207461 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.207489 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.207500 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:05Z","lastTransitionTime":"2026-01-31T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.247932 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 18:35:46.694943469 +0000 UTC Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.310919 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.310962 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.310974 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.310994 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.311009 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:05Z","lastTransitionTime":"2026-01-31T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.414010 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.414061 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.414070 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.414089 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.414106 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:05Z","lastTransitionTime":"2026-01-31T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.517577 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.517632 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.517666 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.517685 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.517700 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:05Z","lastTransitionTime":"2026-01-31T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.620634 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.620673 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.620685 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.620704 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.620717 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:05Z","lastTransitionTime":"2026-01-31T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.723775 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.723823 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.723832 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.723850 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.723860 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:05Z","lastTransitionTime":"2026-01-31T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.827390 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.827443 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.827456 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.827478 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.827491 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:05Z","lastTransitionTime":"2026-01-31T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.931149 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.931204 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.931217 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.931237 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:05 crc kubenswrapper[4830]: I0131 09:02:05.931328 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:05Z","lastTransitionTime":"2026-01-31T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.033797 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.033843 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.033853 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.033871 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.033882 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:06Z","lastTransitionTime":"2026-01-31T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.137305 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.137359 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.137370 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.137392 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.137403 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:06Z","lastTransitionTime":"2026-01-31T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.239711 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.239776 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.239787 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.239805 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.239816 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:06Z","lastTransitionTime":"2026-01-31T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.248839 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 00:25:42.884691276 +0000 UTC Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.251111 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.251224 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.251287 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:06 crc kubenswrapper[4830]: E0131 09:02:06.251249 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.251313 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:06 crc kubenswrapper[4830]: E0131 09:02:06.251355 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:06 crc kubenswrapper[4830]: E0131 09:02:06.251508 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:06 crc kubenswrapper[4830]: E0131 09:02:06.251629 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.267364 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:06Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.291798 4830 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a83288b35d051a945187b06a0f1e8f61aec52b6343034cc2f57354b61a9309b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed906c96861bf8d2af425b975d67a4b298930cd75ababc2d621541adaa7b2ba2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"message\\\":\\\"objects: [openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-zt78q openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-network-node-identity/network-node-identity-vrzqb openshift-dns/node-resolver-pmbpr openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-machine-config-operator/machine-config-daemon-gt7kd openshift-multus/multus-additional-cni-plugins-x27jw openshift-multus/multus-cjqbn]\\\\nI0131 09:01:34.674011 6272 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0131 09:01:34.674042 6272 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-cjqbn\\\\nI0131 09:01:34.674056 6272 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-cjqbn\\\\nF0131 09:01:34.674068 6272 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stoppe\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a83288b35d051a945187b06a0f1e8f61aec52b6343034cc2f57354b61a9309b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:57Z\\\",\\\"message\\\":\\\"flector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.209441 6558 
reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.209613 6558 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.210444 6558 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.210672 6558 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.211135 6558 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.211342 6558 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.212161 6558 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.212266 6558 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:06Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.306412 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44acb8ed-5840-46fa-9ba1-1b89653e1478\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07cae4ce61629c9f8e48863d0775cf4fed46422db85ba8b29477e098b697fb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ac3b3a214c6bca20d7fdc92a49647dfdaf8de4391f331890f74900ab7eca11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7vq99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:06Z is after 2025-08-24T17:21:41Z" Jan 31 
09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.325795 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:06Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.343058 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:06Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.343739 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.343797 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.343811 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.343833 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.343847 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:06Z","lastTransitionTime":"2026-01-31T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.355953 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:06Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.371699 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5kl8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5kl8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:06Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.389582 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:06Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.403785 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:06Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.422563 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:06Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.439811 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59f81b73056481d4e6eb23c2a98c3c088b5255b82cd28e0cad0ac2a9b271cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T09:02:06Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.446260 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.446299 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.446307 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.446325 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.446334 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:06Z","lastTransitionTime":"2026-01-31T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.454774 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026d8790-dc0a-472e-953a-66afc0fcd6e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dc96f3d1e085f925a6a1b73ef1312bd85072065059f20eb6c11f7d044635f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f1cea3266a97316fb0737cb770f6da2abfd58b016987b92c19aa20a9366129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7d
bf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c457892625099d1b14d857643ba5c70e76cfe582ee31c1b8736f4e278557ab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:06Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.468111 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:06Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.483811 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:06Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.498610 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:06Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.513795 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:06Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.528142 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:06Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.550225 4830 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.550272 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.550282 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.550301 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.550313 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:06Z","lastTransitionTime":"2026-01-31T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.652020 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.652056 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.652066 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.652081 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.652090 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:06Z","lastTransitionTime":"2026-01-31T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.754558 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.754603 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.754614 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.754630 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.754644 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:06Z","lastTransitionTime":"2026-01-31T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.857420 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.857458 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.857467 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.857482 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.857492 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:06Z","lastTransitionTime":"2026-01-31T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.960573 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.960621 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.960631 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.960653 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:06 crc kubenswrapper[4830]: I0131 09:02:06.960665 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:06Z","lastTransitionTime":"2026-01-31T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.063314 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.063365 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.063376 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.063395 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.063410 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:07Z","lastTransitionTime":"2026-01-31T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.165623 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.165658 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.165668 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.165687 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.165697 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:07Z","lastTransitionTime":"2026-01-31T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.249415 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 10:16:38.582484464 +0000 UTC Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.268374 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.268422 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.268433 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.268450 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.268462 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:07Z","lastTransitionTime":"2026-01-31T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.371432 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.371494 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.371504 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.371521 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.371532 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:07Z","lastTransitionTime":"2026-01-31T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.474182 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.474241 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.474253 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.474274 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.474288 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:07Z","lastTransitionTime":"2026-01-31T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.576671 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.576736 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.576750 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.576768 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.576777 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:07Z","lastTransitionTime":"2026-01-31T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.680377 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.680455 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.680472 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.680491 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.680504 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:07Z","lastTransitionTime":"2026-01-31T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.782650 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.782694 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.782705 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.782740 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.782757 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:07Z","lastTransitionTime":"2026-01-31T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.886713 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.886777 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.886793 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.886816 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.886830 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:07Z","lastTransitionTime":"2026-01-31T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.990014 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.990072 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.990086 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.990106 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:07 crc kubenswrapper[4830]: I0131 09:02:07.990119 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:07Z","lastTransitionTime":"2026-01-31T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.092186 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.092219 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.092227 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.092241 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.092250 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:08Z","lastTransitionTime":"2026-01-31T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.196346 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.196418 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.196432 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.196451 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.196464 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:08Z","lastTransitionTime":"2026-01-31T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.250265 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 11:11:20.893506701 +0000 UTC Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.250391 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.250417 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.250419 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:08 crc kubenswrapper[4830]: E0131 09:02:08.250544 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.250570 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:08 crc kubenswrapper[4830]: E0131 09:02:08.250646 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:08 crc kubenswrapper[4830]: E0131 09:02:08.250870 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:02:08 crc kubenswrapper[4830]: E0131 09:02:08.250924 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.299403 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.299459 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.299472 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.299491 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.299509 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:08Z","lastTransitionTime":"2026-01-31T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.403123 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.403186 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.403197 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.403216 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.403232 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:08Z","lastTransitionTime":"2026-01-31T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.506086 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.506160 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.506170 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.506198 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.506210 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:08Z","lastTransitionTime":"2026-01-31T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.609615 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.610274 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.610358 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.610448 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.610515 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:08Z","lastTransitionTime":"2026-01-31T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.713168 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.713221 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.713234 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.713254 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.713267 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:08Z","lastTransitionTime":"2026-01-31T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.816269 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.816346 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.816369 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.816399 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.816422 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:08Z","lastTransitionTime":"2026-01-31T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.858131 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs\") pod \"network-metrics-daemon-5kl8z\" (UID: \"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\") " pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:08 crc kubenswrapper[4830]: E0131 09:02:08.858697 4830 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:02:08 crc kubenswrapper[4830]: E0131 09:02:08.858981 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs podName:c1fa30e4-0c03-43ab-9c37-f7ec86153b27 nodeName:}" failed. No retries permitted until 2026-01-31 09:02:40.858950406 +0000 UTC m=+105.352312888 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs") pod "network-metrics-daemon-5kl8z" (UID: "c1fa30e4-0c03-43ab-9c37-f7ec86153b27") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.920341 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.920644 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.920776 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.920855 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:08 crc kubenswrapper[4830]: I0131 09:02:08.920950 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:08Z","lastTransitionTime":"2026-01-31T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:02:09 crc kubenswrapper[4830]: I0131 09:02:09.250914 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 11:32:06.176357407 +0000 UTC
Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.057505 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.057888 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.058096 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.058192 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.058405 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:10Z","lastTransitionTime":"2026-01-31T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.161346 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.161403 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.161414 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.161434 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.161449 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:10Z","lastTransitionTime":"2026-01-31T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.250562 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.250591 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:10 crc kubenswrapper[4830]: E0131 09:02:10.250975 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.250618 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:10 crc kubenswrapper[4830]: E0131 09:02:10.251286 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:10 crc kubenswrapper[4830]: E0131 09:02:10.251120 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.251124 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 23:59:43.266140828 +0000 UTC Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.250592 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:10 crc kubenswrapper[4830]: E0131 09:02:10.251644 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.264319 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.264367 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.264382 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.264401 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.264412 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:10Z","lastTransitionTime":"2026-01-31T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.366850 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.366913 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.366924 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.366944 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.366955 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:10Z","lastTransitionTime":"2026-01-31T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.470028 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.470321 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.470441 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.470536 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.470621 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:10Z","lastTransitionTime":"2026-01-31T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.573572 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.573959 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.574036 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.574148 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.574221 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:10Z","lastTransitionTime":"2026-01-31T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.677559 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.677663 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.677678 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.677702 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.677714 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:10Z","lastTransitionTime":"2026-01-31T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.699116 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cjqbn_b7e133cc-19e8-4770-9146-88dac53a6531/kube-multus/0.log" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.699452 4830 generic.go:334] "Generic (PLEG): container finished" podID="b7e133cc-19e8-4770-9146-88dac53a6531" containerID="4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70" exitCode=1 Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.699570 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cjqbn" event={"ID":"b7e133cc-19e8-4770-9146-88dac53a6531","Type":"ContainerDied","Data":"4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70"} Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.700200 4830 scope.go:117] "RemoveContainer" containerID="4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.717566 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:10Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.739458 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:10Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.750943 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:10Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.763422 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5kl8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5kl8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:10Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.776396 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:10Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.786577 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:10Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.793955 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.793984 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.793993 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.794007 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.794016 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:10Z","lastTransitionTime":"2026-01-31T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.801917 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:02:10Z\\\",\\\"message\\\":\\\"2026-01-31T09:01:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a3601813-05b9-4c26-9298-bb115810fa0c\\\\n2026-01-31T09:01:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a3601813-05b9-4c26-9298-bb115810fa0c to /host/opt/cni/bin/\\\\n2026-01-31T09:01:25Z [verbose] multus-daemon started\\\\n2026-01-31T09:01:25Z [verbose] Readiness Indicator file check\\\\n2026-01-31T09:02:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:10Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.821677 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59f81b73056481d4e6eb23c2a98c3c088b5255b82cd28e0cad0ac2a9b271cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c127
5255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:10Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.838361 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026d8790-dc0a-472e-953a-66afc0fcd6e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dc96f3d1e085f925a6a1b73ef1312bd85072065059f20eb6c11f7d044635f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f1cea3266a97316fb0737cb770f6da2abfd58b016987b92c19aa20a9366129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c457892625099d1b14d857643ba5c70e76cfe582ee31c1b8736f4e278557ab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:10Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.853939 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:10Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.867678 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:10Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.881473 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:10Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.898125 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.898144 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:10Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.898181 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.898329 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.898357 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.898371 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:10Z","lastTransitionTime":"2026-01-31T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.910140 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:10Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.928642 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:10Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.951509 
4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154
edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a83288b35d051a945187b06a0f1e8f61aec52b6343034cc2f57354b61a9309b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed906c96861bf8d2af425b975d67a4b298930cd75ababc2d621541adaa7b2ba2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"message\\\":\\\"objects: [openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-zt78q openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-network-node-identity/network-node-identity-vrzqb openshift-dns/node-resolver-pmbpr openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-machine-config-operator/machine-config-daemon-gt7kd openshift-multus/multus-additional-cni-plugins-x27jw openshift-multus/multus-cjqbn]\\\\nI0131 09:01:34.674011 6272 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0131 09:01:34.674042 6272 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-cjqbn\\\\nI0131 09:01:34.674056 6272 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-cjqbn\\\\nF0131 09:01:34.674068 6272 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stoppe\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a83288b35d051a945187b06a0f1e8f61aec52b6343034cc2f57354b61a9309b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:57Z\\\",\\\"message\\\":\\\"flector *v1.Pod (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.209441 6558 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.209613 6558 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.210444 6558 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.210672 6558 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.211135 6558 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.211342 6558 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.212161 6558 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.212266 6558 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:10Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:10 crc kubenswrapper[4830]: I0131 09:02:10.963431 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44acb8ed-5840-46fa-9ba1-1b89653e1478\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07cae4ce61629c9f8e48863d0775cf4fed46422db85ba8b29477e098b697fb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ac3b3a214c6bca20d7fdc92a49647dfdaf8de4391f331890f74900ab7eca11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7vq99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:10Z is after 2025-08-24T17:21:41Z" Jan 31 
09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.000747 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.000797 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.000806 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.000864 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.000882 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:11Z","lastTransitionTime":"2026-01-31T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.103011 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.103050 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.103061 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.103080 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.103093 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:11Z","lastTransitionTime":"2026-01-31T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.206109 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.206177 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.206217 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.206250 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.206274 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:11Z","lastTransitionTime":"2026-01-31T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.252129 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 05:18:46.48170562 +0000 UTC Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.253179 4830 scope.go:117] "RemoveContainer" containerID="3a83288b35d051a945187b06a0f1e8f61aec52b6343034cc2f57354b61a9309b" Jan 31 09:02:11 crc kubenswrapper[4830]: E0131 09:02:11.253597 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-r8pc4_openshift-ovn-kubernetes(159b9801-57e3-4cf0-9b81-10aacb5eef83)\"" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.273745 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44acb8ed-5840-46fa-9ba1-1b89653e1478\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07cae4ce61629c9f8e48863d0775cf4fed46422db85ba8b29477e098b697fb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ac3b3a214c6bca20d7fdc92a49647dfdaf8de4391f331890f74900ab7eca11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7vq99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.295328 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.309316 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.309377 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.309390 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.309413 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.309425 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:11Z","lastTransitionTime":"2026-01-31T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.318937 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a83288b35d051a945187b06a0f1e8f61aec52b6343034cc2f57354b61a9309b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a83288b35d051a945187b06a0f1e8f61aec52b6343034cc2f57354b61a9309b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:57Z\\\",\\\"message\\\":\\\"flector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.209441 6558 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.209613 6558 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.210444 6558 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.210672 6558 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.211135 6558 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.211342 6558 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.212161 6558 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.212266 6558 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-r8pc4_openshift-ovn-kubernetes(159b9801-57e3-4cf0-9b81-10aacb5eef83)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.334442 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.346127 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.358453 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5kl8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5kl8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc 
kubenswrapper[4830]: I0131 09:02:11.375419 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.391567 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:02:10Z\\\",\\\"message\\\":\\\"2026-01-31T09:01:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a3601813-05b9-4c26-9298-bb115810fa0c\\\\n2026-01-31T09:01:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a3601813-05b9-4c26-9298-bb115810fa0c to /host/opt/cni/bin/\\\\n2026-01-31T09:01:25Z [verbose] multus-daemon started\\\\n2026-01-31T09:01:25Z [verbose] Readiness Indicator file check\\\\n2026-01-31T09:02:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.408647 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59f81b73056481d4e6eb23c2a98c3c088b5255b82cd28e0cad0ac2a9b271cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.412733 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.412777 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:11 crc 
kubenswrapper[4830]: I0131 09:02:11.412790 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.412805 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.412815 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:11Z","lastTransitionTime":"2026-01-31T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.424800 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.435282 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.453988 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.468890 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.481749 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.495407 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.511513 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026d8790-dc0a-472e-953a-66afc0fcd6e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dc96f3d1e085f925a6a1b73ef1312bd85072065059f20eb6c11f7d044635f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f1cea3266a97316fb0737cb770f6da2abfd58b016987b92c19aa20a9366129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c457892625099d1b14d857643ba5c70e76cfe582ee31c1b8736f4e278557ab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.515632 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.515684 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.515696 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.515719 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.515752 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:11Z","lastTransitionTime":"2026-01-31T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.528333 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.619485 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.619574 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.619592 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.619622 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.619643 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:11Z","lastTransitionTime":"2026-01-31T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.705114 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cjqbn_b7e133cc-19e8-4770-9146-88dac53a6531/kube-multus/0.log" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.705187 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cjqbn" event={"ID":"b7e133cc-19e8-4770-9146-88dac53a6531","Type":"ContainerStarted","Data":"9875f32d43bbc74af3de68db341e1562d735fcd5fba747d5ca7aceea458db68a"} Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.721963 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.722026 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.722043 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.722066 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.722084 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:11Z","lastTransitionTime":"2026-01-31T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.731701 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.759152 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a83288b35d051a945187b06a0f1e8f61aec52b6343034cc2f57354b61a9309b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a83288b35d051a945187b06a0f1e8f61aec52b6343034cc2f57354b61a9309b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:57Z\\\",\\\"message\\\":\\\"flector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.209441 6558 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.209613 6558 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.210444 6558 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.210672 6558 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.211135 6558 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.211342 6558 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.212161 6558 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.212266 6558 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-r8pc4_openshift-ovn-kubernetes(159b9801-57e3-4cf0-9b81-10aacb5eef83)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.773798 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44acb8ed-5840-46fa-9ba1-1b89653e1478\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07cae4ce61629c9f8e48863d0775cf4fed46422db85ba8b29477e098b697fb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ac3b3a214c6bca20d7fdc92a49647dfdaf8de4391f331890f74900ab7eca11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7vq99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.787052 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5kl8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5kl8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.807534 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.825667 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.826029 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.826064 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.826074 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.826118 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.826129 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:11Z","lastTransitionTime":"2026-01-31T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.838167 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.850061 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.859017 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.874497 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9875f32d43bbc74af3de68db341e1562d735fcd5fba747d5ca7aceea458db68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:02:10Z\\\",\\\"message\\\":\\\"2026-01-31T09:01:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a3601813-05b9-4c26-9298-bb115810fa0c\\\\n2026-01-31T09:01:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a3601813-05b9-4c26-9298-bb115810fa0c to /host/opt/cni/bin/\\\\n2026-01-31T09:01:25Z [verbose] multus-daemon started\\\\n2026-01-31T09:01:25Z [verbose] Readiness Indicator file check\\\\n2026-01-31T09:02:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:02:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.892309 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59f81b73056481d4e6eb23c2a98c3c088b5255b82cd28e0cad0ac2a9b271cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.906071 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.917938 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.929011 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.929078 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.929094 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.929119 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.929136 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:11Z","lastTransitionTime":"2026-01-31T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.932269 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.944505 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026d8790-dc0a-472e-953a-66afc0fcd6e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dc96f3d1e085f925a6a1b73ef1312bd85072065059f20eb6c11f7d044635f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f1cea3266a97316fb0737cb770f6da2abfd58b016987b92c19aa20a9366129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c457892625099d1b14d857643ba5c70e76cfe582ee31c1b8736f4e278557ab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.956698 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4830]: I0131 09:02:11.968617 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.032249 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.032311 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.032323 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.032341 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.032354 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:12Z","lastTransitionTime":"2026-01-31T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.136072 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.136135 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.136146 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.136165 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.136204 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:12Z","lastTransitionTime":"2026-01-31T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.239104 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.239136 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.239146 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.239161 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.239171 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:12Z","lastTransitionTime":"2026-01-31T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.250432 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:12 crc kubenswrapper[4830]: E0131 09:02:12.250560 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.250769 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:12 crc kubenswrapper[4830]: E0131 09:02:12.250845 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.251009 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:12 crc kubenswrapper[4830]: E0131 09:02:12.251087 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.251093 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:12 crc kubenswrapper[4830]: E0131 09:02:12.251316 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.252576 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 07:42:06.372152857 +0000 UTC Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.341902 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.341936 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.341964 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.341979 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.341990 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:12Z","lastTransitionTime":"2026-01-31T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.445080 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.445141 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.445153 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.445171 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.445183 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:12Z","lastTransitionTime":"2026-01-31T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.548329 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.548409 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.548443 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.548466 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.548479 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:12Z","lastTransitionTime":"2026-01-31T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.651384 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.651496 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.651529 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.651564 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.651584 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:12Z","lastTransitionTime":"2026-01-31T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.754687 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.754801 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.754823 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.754851 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.754870 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:12Z","lastTransitionTime":"2026-01-31T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.858257 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.858329 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.858348 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.858375 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.858395 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:12Z","lastTransitionTime":"2026-01-31T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.902860 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.903956 4830 scope.go:117] "RemoveContainer" containerID="3a83288b35d051a945187b06a0f1e8f61aec52b6343034cc2f57354b61a9309b" Jan 31 09:02:12 crc kubenswrapper[4830]: E0131 09:02:12.904159 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-r8pc4_openshift-ovn-kubernetes(159b9801-57e3-4cf0-9b81-10aacb5eef83)\"" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.961023 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.961064 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.961073 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.961087 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:12 crc kubenswrapper[4830]: I0131 09:02:12.961097 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:12Z","lastTransitionTime":"2026-01-31T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
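[editor's note] The "back-off 20s restarting failed container=ovnkube-controller" entry above is the kubelet's crash-loop backoff: the restart delay starts small and doubles per failed restart up to a five-minute cap, so 20s indicates an early retry. A sketch of that schedule; the exact constants (10s base, factor 2, 300s cap) are assumed kubelet defaults rather than values read from this log:

# Crash-loop restart delays: base * 2**n, capped (assumed kubelet defaults:
# 10s base, 300s cap; the log's "back-off 20s" matches the second step).
BASE_S, CAP_S = 10, 300

def backoff_schedule(restarts: int) -> list[int]:
    """Delay in seconds before each of the first `restarts` restarts."""
    return [min(BASE_S * 2**n, CAP_S) for n in range(restarts)]

print(backoff_schedule(8))  # [10, 20, 40, 80, 160, 300, 300, 300]
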
Has your network provider started?"} Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.063687 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.063761 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.063771 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.063790 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.063804 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:13Z","lastTransitionTime":"2026-01-31T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.166339 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.166384 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.166396 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.166416 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.166429 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:13Z","lastTransitionTime":"2026-01-31T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.253509 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 06:55:31.993502798 +0000 UTC Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.269103 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.269199 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.269211 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.269229 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.269246 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:13Z","lastTransitionTime":"2026-01-31T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.372528 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.372596 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.372613 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.372636 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.372654 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:13Z","lastTransitionTime":"2026-01-31T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
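[editor's note] Note the certificate_manager entries: the kubelet-serving cert is valid until 2026-02-24, but the logged rotation deadline (2026-01-09, then 2026-01-12) already lies in the past and is re-drawn with jitter on each pass, so the kubelet keeps attempting rotation. In client-go's certificate manager the deadline falls in roughly the 70-90% band of the cert's lifetime; a sketch of that computation, with the band and the notBefore value treated as assumptions:

import random
from datetime import datetime, timezone

def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
    """Pick a jittered rotation point in ~[70%, 90%] of the cert lifetime
    (assumed from client-go certificate manager behavior)."""
    lifetime = not_after - not_before
    return not_before + lifetime * random.uniform(0.7, 0.9)

nb = datetime(2025, 2, 24, 5, 53, 3, tzinfo=timezone.utc)  # illustrative notBefore
na = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)  # expiration from the log
print(rotation_deadline(nb, na))  # lands Nov 2025 - Jan 2026, i.e. already past
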
Has your network provider started?"} Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.407036 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.407077 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.407089 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.407104 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.407115 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:13Z","lastTransitionTime":"2026-01-31T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:13 crc kubenswrapper[4830]: E0131 09:02:13.432543 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:13Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.438034 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.438069 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.438078 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.438095 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.438106 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:13Z","lastTransitionTime":"2026-01-31T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:13 crc kubenswrapper[4830]: E0131 09:02:13.450569 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:13Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.455271 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.455476 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.455642 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.455848 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.455999 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:13Z","lastTransitionTime":"2026-01-31T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:13 crc kubenswrapper[4830]: E0131 09:02:13.473486 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:13Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.478651 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.478746 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.478766 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.478788 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.478804 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:13Z","lastTransitionTime":"2026-01-31T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:13 crc kubenswrapper[4830]: E0131 09:02:13.493708 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:13Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.498351 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.498471 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.498536 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.498619 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.498685 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:13Z","lastTransitionTime":"2026-01-31T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:13 crc kubenswrapper[4830]: E0131 09:02:13.515988 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.518123 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.518220 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.518369 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.518436 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.518519 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:13Z","lastTransitionTime":"2026-01-31T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.621882 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.621968 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.621988 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.622019 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.622041 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:13Z","lastTransitionTime":"2026-01-31T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.724011 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.724346 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.724408 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.724484 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.724561 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:13Z","lastTransitionTime":"2026-01-31T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.827987 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.828139 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.828159 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.828190 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.828210 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:13Z","lastTransitionTime":"2026-01-31T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.932336 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.932386 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.932397 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.932420 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:13 crc kubenswrapper[4830]: I0131 09:02:13.932433 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:13Z","lastTransitionTime":"2026-01-31T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.036164 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.036235 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.036255 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.036282 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.036301 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:14Z","lastTransitionTime":"2026-01-31T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.139554 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.139622 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.139637 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.139662 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.139679 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:14Z","lastTransitionTime":"2026-01-31T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.243283 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.243328 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.243338 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.243359 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.243372 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:14Z","lastTransitionTime":"2026-01-31T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.250606 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.250777 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.250900 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.250972 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:14 crc kubenswrapper[4830]: E0131 09:02:14.250976 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:14 crc kubenswrapper[4830]: E0131 09:02:14.251088 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:14 crc kubenswrapper[4830]: E0131 09:02:14.251188 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:14 crc kubenswrapper[4830]: E0131 09:02:14.251384 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.254187 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 01:57:01.015065218 +0000 UTC Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.346215 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.346319 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.346347 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.346385 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.346405 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:14Z","lastTransitionTime":"2026-01-31T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.450322 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.450379 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.450391 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.450413 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.450427 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:14Z","lastTransitionTime":"2026-01-31T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.552882 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.552963 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.552985 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.553015 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.553038 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:14Z","lastTransitionTime":"2026-01-31T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.657287 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.657373 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.657398 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.657428 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.657450 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:14Z","lastTransitionTime":"2026-01-31T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.761018 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.761093 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.761119 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.761147 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.761166 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:14Z","lastTransitionTime":"2026-01-31T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.864830 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.864901 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.864911 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.864951 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.864963 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:14Z","lastTransitionTime":"2026-01-31T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.967897 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.967963 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.967981 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.968005 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:14 crc kubenswrapper[4830]: I0131 09:02:14.968025 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:14Z","lastTransitionTime":"2026-01-31T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.073861 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.073975 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.074002 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.074042 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.074082 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:15Z","lastTransitionTime":"2026-01-31T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.177898 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.177972 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.177989 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.178015 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.178033 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:15Z","lastTransitionTime":"2026-01-31T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.255390 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 11:56:44.251656389 +0000 UTC Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.281284 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.281335 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.281344 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.281362 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.281373 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:15Z","lastTransitionTime":"2026-01-31T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.385270 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.385309 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.385318 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.385334 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.385344 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:15Z","lastTransitionTime":"2026-01-31T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.488279 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.488342 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.488358 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.488381 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.488396 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:15Z","lastTransitionTime":"2026-01-31T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.591481 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.591528 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.591540 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.591557 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.591568 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:15Z","lastTransitionTime":"2026-01-31T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.694951 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.695029 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.695040 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.695059 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.695070 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:15Z","lastTransitionTime":"2026-01-31T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.798928 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.799009 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.799032 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.799067 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.799092 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:15Z","lastTransitionTime":"2026-01-31T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.902157 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.902948 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.903009 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.903038 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:15 crc kubenswrapper[4830]: I0131 09:02:15.903056 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:15Z","lastTransitionTime":"2026-01-31T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.005585 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.005662 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.005678 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.005699 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.005714 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:16Z","lastTransitionTime":"2026-01-31T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.109041 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.109086 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.109097 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.109116 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.109128 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:16Z","lastTransitionTime":"2026-01-31T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.211691 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.211764 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.211775 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.211795 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.211807 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:16Z","lastTransitionTime":"2026-01-31T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.251549 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.251746 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:16 crc kubenswrapper[4830]: E0131 09:02:16.251814 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.251566 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.251613 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:16 crc kubenswrapper[4830]: E0131 09:02:16.251944 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:16 crc kubenswrapper[4830]: E0131 09:02:16.252033 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:02:16 crc kubenswrapper[4830]: E0131 09:02:16.252161 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.255714 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 23:38:31.258413504 +0000 UTC Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.265466 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.282564 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9875f32d43bbc74af3de68db341e1562d735fcd5fba747d5ca7aceea458db68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:02:10Z\\\",\\\"message\\\":\\\"2026-01-31T09:01:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a3601813-05b9-4c26-9298-bb115810fa0c\\\\n2026-01-31T09:01:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a3601813-05b9-4c26-9298-bb115810fa0c to /host/opt/cni/bin/\\\\n2026-01-31T09:01:25Z [verbose] multus-daemon started\\\\n2026-01-31T09:01:25Z [verbose] Readiness Indicator file check\\\\n2026-01-31T09:02:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:02:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.301130 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59f81b73056481d4e6eb23c2a98c3c088b5255b82cd28e0cad0ac2a9b271cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.316473 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.316706 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.316749 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.316765 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.316786 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.316800 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:16Z","lastTransitionTime":"2026-01-31T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.330627 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.350040 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.363418 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.376213 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.389206 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.401138 4830 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026d8790-dc0a-472e-953a-66afc0fcd6e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dc96f3d1e085f925a6a1b73ef1312bd85072065059f20eb6c11f7d044635f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f1cea3266a97316fb0737cb770f6da2abfd58b016987b92c19aa20a9366129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c457892625099d1b14d857643ba5c70e76cfe582ee31c1b8736f4e278557ab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.419140 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a83288b35d051a945187b06a0f1e8f61aec52b6
343034cc2f57354b61a9309b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a83288b35d051a945187b06a0f1e8f61aec52b6343034cc2f57354b61a9309b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:57Z\\\",\\\"message\\\":\\\"flector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.209441 6558 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.209613 6558 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.210444 6558 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.210672 6558 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.211135 6558 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.211342 6558 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.212161 6558 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.212266 6558 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-r8pc4_openshift-ovn-kubernetes(159b9801-57e3-4cf0-9b81-10aacb5eef83)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.419522 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.419589 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.419603 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.419619 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.419630 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:16Z","lastTransitionTime":"2026-01-31T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.433002 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44acb8ed-5840-46fa-9ba1-1b89653e1478\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07cae4ce61629c9f8e48863d0775cf4fed46422db85ba8b29477e098b697fb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ac3b3a214c6bca20d7fdc92a49647dfdaf8de4391f331890f74900ab7eca11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7vq99\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.446826 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 
2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.460012 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.473802 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.488786 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.501699 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5kl8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5kl8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.522868 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.523102 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.523165 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.523226 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.523313 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:16Z","lastTransitionTime":"2026-01-31T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.626342 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.626394 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.626404 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.626422 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.626433 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:16Z","lastTransitionTime":"2026-01-31T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.728907 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.728956 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.728965 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.728983 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.728994 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:16Z","lastTransitionTime":"2026-01-31T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.832265 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.832332 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.832354 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.832383 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.832410 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:16Z","lastTransitionTime":"2026-01-31T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.935789 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.935866 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.935900 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.935928 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:16 crc kubenswrapper[4830]: I0131 09:02:16.935949 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:16Z","lastTransitionTime":"2026-01-31T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.039855 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.039929 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.039954 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.039978 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.039996 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:17Z","lastTransitionTime":"2026-01-31T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.143861 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.143944 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.143998 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.144065 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.144087 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:17Z","lastTransitionTime":"2026-01-31T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.248241 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.248317 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.248340 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.248372 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.248393 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:17Z","lastTransitionTime":"2026-01-31T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.256577 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 05:17:13.62054803 +0000 UTC Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.351758 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.351825 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.351842 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.351868 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.351887 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:17Z","lastTransitionTime":"2026-01-31T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.455113 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.455160 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.455172 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.455193 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.455207 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:17Z","lastTransitionTime":"2026-01-31T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.558302 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.558383 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.558407 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.558441 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.558461 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:17Z","lastTransitionTime":"2026-01-31T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.661182 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.661239 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.661250 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.661269 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.661282 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:17Z","lastTransitionTime":"2026-01-31T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.764563 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.764611 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.764623 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.764641 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.764653 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:17Z","lastTransitionTime":"2026-01-31T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.868240 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.868306 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.868320 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.868341 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.868357 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:17Z","lastTransitionTime":"2026-01-31T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.972514 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.972623 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.973019 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.973089 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:17 crc kubenswrapper[4830]: I0131 09:02:17.973367 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:17Z","lastTransitionTime":"2026-01-31T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:18 crc kubenswrapper[4830]: I0131 09:02:18.076318 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:18 crc kubenswrapper[4830]: I0131 09:02:18.076379 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:18 crc kubenswrapper[4830]: I0131 09:02:18.076392 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:18 crc kubenswrapper[4830]: I0131 09:02:18.076411 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:18 crc kubenswrapper[4830]: I0131 09:02:18.076426 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:18Z","lastTransitionTime":"2026-01-31T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:18 crc kubenswrapper[4830]: I0131 09:02:18.179659 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:18 crc kubenswrapper[4830]: I0131 09:02:18.179707 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:18 crc kubenswrapper[4830]: I0131 09:02:18.179720 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:18 crc kubenswrapper[4830]: I0131 09:02:18.179781 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:18 crc kubenswrapper[4830]: I0131 09:02:18.179794 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:18Z","lastTransitionTime":"2026-01-31T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:18 crc kubenswrapper[4830]: I0131 09:02:18.251032 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:18 crc kubenswrapper[4830]: I0131 09:02:18.251186 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:18 crc kubenswrapper[4830]: E0131 09:02:18.251331 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:18 crc kubenswrapper[4830]: I0131 09:02:18.251554 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:18 crc kubenswrapper[4830]: I0131 09:02:18.251600 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:18 crc kubenswrapper[4830]: E0131 09:02:18.251667 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:18 crc kubenswrapper[4830]: E0131 09:02:18.251871 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:18 crc kubenswrapper[4830]: E0131 09:02:18.252176 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:02:18 crc kubenswrapper[4830]: I0131 09:02:18.256844 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 03:47:56.770135484 +0000 UTC Jan 31 09:02:18 crc kubenswrapper[4830]: I0131 09:02:18.281780 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:18 crc kubenswrapper[4830]: I0131 09:02:18.281832 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:18 crc kubenswrapper[4830]: I0131 09:02:18.281845 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:18 crc kubenswrapper[4830]: I0131 09:02:18.281863 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:18 crc kubenswrapper[4830]: I0131 09:02:18.281876 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:18Z","lastTransitionTime":"2026-01-31T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 09:02:18 crc kubenswrapper[4830]: I0131 09:02:18.385216 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:18 crc kubenswrapper[4830]: I0131 09:02:18.385274 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:18 crc kubenswrapper[4830]: I0131 09:02:18.385288 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:18 crc kubenswrapper[4830]: I0131 09:02:18.385307 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:18 crc kubenswrapper[4830]: I0131 09:02:18.385320 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:18Z","lastTransitionTime":"2026-01-31T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[The same five-entry status block (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready" with the identical KubeletNotReady condition) repeats roughly every 100 ms, at 09:02:18.488, 09:02:18.591, 09:02:18.694, 09:02:18.798, 09:02:18.901, 09:02:19.003, 09:02:19.107 and 09:02:19.210.]
Jan 31 09:02:19 crc kubenswrapper[4830]: I0131 09:02:19.257008 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 03:27:04.455429886 +0000 UTC
[The status block repeats at 09:02:19.313, 09:02:19.417, 09:02:19.520, 09:02:19.624, 09:02:19.727, 09:02:19.831, 09:02:19.934 and 09:02:20.038, still with the same KubeletNotReady condition.]
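The setters.go entries above are the kubelet republishing the same Ready=False condition on every sync pass while CNI is missing. For cross-checking from outside the node, a minimal sketch, assuming the official `kubernetes` Python client and a kubeconfig that can reach this cluster (node name and expected reason taken from the log):

```python
# Sketch: read the same Ready condition the kubelet is publishing above.
# Assumes the official `kubernetes` Python client and a working kubeconfig.
from kubernetes import client, config

config.load_kube_config()            # or config.load_incluster_config()
v1 = client.CoreV1Api()

node = v1.read_node("crc")
for cond in node.status.conditions:
    if cond.type == "Ready":
        # While CNI config is absent this prints status=False, reason=KubeletNotReady
        print(f"status={cond.status} reason={cond.reason}")
        print(f"message={cond.message}")
```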
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.142519 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.142937 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.143051 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.143156 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.143289 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:20Z","lastTransitionTime":"2026-01-31T09:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.197255 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.197423 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.197508 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.197567 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.197618 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 09:02:20 crc kubenswrapper[4830]: E0131 09:02:20.197766 4830 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 31 09:02:20 crc kubenswrapper[4830]: E0131 09:02:20.197840 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 31 09:02:20 crc kubenswrapper[4830]: E0131 09:02:20.197886 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 31 09:02:20 crc kubenswrapper[4830]: E0131 09:02:20.197916 4830 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 31 09:02:20 crc kubenswrapper[4830]: E0131 09:02:20.197937 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:03:24.1978915 +0000 UTC m=+148.691253993 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 31 09:02:20 crc kubenswrapper[4830]: E0131 09:02:20.197952 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 31 09:02:20 crc kubenswrapper[4830]: E0131 09:02:20.197992 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 09:03:24.197971873 +0000 UTC m=+148.691334355 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 31 09:02:20 crc kubenswrapper[4830]: E0131 09:02:20.197998 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 31 09:02:20 crc kubenswrapper[4830]: E0131 09:02:20.198036 4830 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 31 09:02:20 crc kubenswrapper[4830]: E0131 09:02:20.198101 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:24.198066706 +0000 UTC m=+148.691429188 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:02:20 crc kubenswrapper[4830]: E0131 09:02:20.198141 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 09:03:24.198127157 +0000 UTC m=+148.691489639 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 31 09:02:20 crc kubenswrapper[4830]: E0131 09:02:20.198596 4830 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 31 09:02:20 crc kubenswrapper[4830]: E0131 09:02:20.198752 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:03:24.198701954 +0000 UTC m=+148.692064436 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.246477 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.246943 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.247113 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.247289 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.247441 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:20Z","lastTransitionTime":"2026-01-31T09:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
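The "not registered" errors above come from the kubelet's local object cache rather than from the API server, so the ConfigMaps and Secret may well exist cluster-side; the CSI error is different in kind and means no kubevirt.io.hostpath-provisioner plugin has registered with this kubelet yet. A sketch for checking the API-side view of both, under the same client assumptions as the previous snippet (object names taken verbatim from the log):

```python
# Sketch: verify the objects the mount errors name, and the CSI driver the
# unmount needs. Assumes the official `kubernetes` Python client.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
v1 = client.CoreV1Api()
storage = client.StorageV1Api()

for ns, cm in [("openshift-network-diagnostics", "kube-root-ca.crt"),
               ("openshift-network-diagnostics", "openshift-service-ca.crt"),
               ("openshift-network-console", "networking-console-plugin")]:
    try:
        v1.read_namespaced_config_map(cm, ns)
        print(f"configmap {ns}/{cm}: present")
    except ApiException as e:
        print(f"configmap {ns}/{cm}: HTTP {e.status}")

# CSIDriver objects are cluster-scoped; the unmount fails until the driver
# named in the log shows up in this list (node-side registration).
drivers = [d.metadata.name for d in storage.list_csi_driver().items]
print("kubevirt.io.hostpath-provisioner registered:",
      "kubevirt.io.hostpath-provisioner" in drivers)
```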
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.251146 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.251219 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.251151 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 09:02:20 crc kubenswrapper[4830]: E0131 09:02:20.251306 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.251391 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z"
Jan 31 09:02:20 crc kubenswrapper[4830]: E0131 09:02:20.251606 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 09:02:20 crc kubenswrapper[4830]: E0131 09:02:20.251826 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27"
Jan 31 09:02:20 crc kubenswrapper[4830]: E0131 09:02:20.251882 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.257251 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 20:01:26.464814195 +0000 UTC
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.350405 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.350456 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.350468 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.350489 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.350505 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:20Z","lastTransitionTime":"2026-01-31T09:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.453988 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.454520 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.454624 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.454741 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:20 crc kubenswrapper[4830]: I0131 09:02:20.454857 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:20Z","lastTransitionTime":"2026-01-31T09:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[The status block repeats at 09:02:20.558, 09:02:20.660, 09:02:20.764, 09:02:20.868, 09:02:20.972, 09:02:21.076 and 09:02:21.180, still with the same KubeletNotReady condition.]
Jan 31 09:02:21 crc kubenswrapper[4830]: I0131 09:02:21.258442 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 13:25:18.824747772 +0000 UTC
[Further repeats at 09:02:21.283, 09:02:21.387, 09:02:21.491, 09:02:21.595, 09:02:21.697, 09:02:21.801, 09:02:21.903, 09:02:22.006, 09:02:22.110 and 09:02:22.213, unchanged.]
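Note the certificate_manager.go entries: the serving certificate does not expire until 2026-02-24, yet every pass logs a different rotation deadline, all already in the past (2025-11-09 through 2026-01-11 so far). That pattern is consistent with client-go's certificate manager re-drawing a jittered deadline, reportedly uniform over roughly the 70-90% band of the certificate's lifetime, on each check; once the drawn deadline is behind the clock, rotation is due immediately and keeps being retried while the node is degraded. A sketch of that rule, with the issue date assumed to be one year before the logged expiry (the assumption reproduces the observed Nov 2025 to Jan 2026 deadlines):

```python
# Sketch of a jittered rotation deadline in the assumed 70-90% lifetime band.
# Issue time is an assumption; expiry is taken from the log.
import random
from datetime import datetime

not_before = datetime(2025, 2, 24, 5, 53, 3)   # assumed issue time
not_after  = datetime(2026, 2, 24, 5, 53, 3)   # expiry from the log
lifetime = not_after - not_before

deadline = not_before + lifetime * random.uniform(0.7, 0.9)
print("rotation deadline:", deadline)  # lands between Nov 2025 and Jan 2026
```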
Jan 31 09:02:22 crc kubenswrapper[4830]: I0131 09:02:22.251049 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:02:22 crc kubenswrapper[4830]: I0131 09:02:22.251127 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 09:02:22 crc kubenswrapper[4830]: E0131 09:02:22.251239 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 09:02:22 crc kubenswrapper[4830]: I0131 09:02:22.251343 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z"
Jan 31 09:02:22 crc kubenswrapper[4830]: E0131 09:02:22.251580 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 09:02:22 crc kubenswrapper[4830]: I0131 09:02:22.251813 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:02:22 crc kubenswrapper[4830]: E0131 09:02:22.251908 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27"
Jan 31 09:02:22 crc kubenswrapper[4830]: E0131 09:02:22.252054 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 09:02:22 crc kubenswrapper[4830]: I0131 09:02:22.258769 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:26:49.00920198 +0000 UTC
Jan 31 09:02:22 crc kubenswrapper[4830]: I0131 09:02:22.316304 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:22 crc kubenswrapper[4830]: I0131 09:02:22.316357 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:22 crc kubenswrapper[4830]: I0131 09:02:22.316369 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:22 crc kubenswrapper[4830]: I0131 09:02:22.316387 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:22 crc kubenswrapper[4830]: I0131 09:02:22.316399 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:22Z","lastTransitionTime":"2026-01-31T09:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:02:22 crc kubenswrapper[4830]: I0131 09:02:22.420234 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:22 crc kubenswrapper[4830]: I0131 09:02:22.420303 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:22 crc kubenswrapper[4830]: I0131 09:02:22.420314 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:22 crc kubenswrapper[4830]: I0131 09:02:22.420335 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:22 crc kubenswrapper[4830]: I0131 09:02:22.420348 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:22Z","lastTransitionTime":"2026-01-31T09:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[The status block repeats at 09:02:22.524, 09:02:22.627, 09:02:22.731, 09:02:22.835, 09:02:22.939, 09:02:23.042 and 09:02:23.145, still with the same KubeletNotReady condition.]
Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.248408 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.248518 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.248534 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.248558 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.248571 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:23Z","lastTransitionTime":"2026-01-31T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.259114 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 04:17:13.869226905 +0000 UTC Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.351891 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.351962 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.351985 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.352016 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.352039 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:23Z","lastTransitionTime":"2026-01-31T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.455775 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.455840 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.455855 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.455875 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.455889 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:23Z","lastTransitionTime":"2026-01-31T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.559607 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.559657 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.559672 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.559693 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.559706 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:23Z","lastTransitionTime":"2026-01-31T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.667648 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.667748 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.667766 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.667804 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.668024 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:23Z","lastTransitionTime":"2026-01-31T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.670530 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.670572 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.670582 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.670599 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.670610 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:23Z","lastTransitionTime":"2026-01-31T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:23 crc kubenswrapper[4830]: E0131 09:02:23.693497 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:23Z is after 
2025-08-24T17:21:41Z" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.702232 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.702270 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.702283 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.702299 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.702310 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:23Z","lastTransitionTime":"2026-01-31T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:23 crc kubenswrapper[4830]: E0131 09:02:23.724845 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:23Z is after 
2025-08-24T17:21:41Z" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.731162 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.731208 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.731220 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.731243 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.731257 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:23Z","lastTransitionTime":"2026-01-31T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:23 crc kubenswrapper[4830]: E0131 09:02:23.750323 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:23Z is after 
2025-08-24T17:21:41Z" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.755892 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.755935 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.755947 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.755965 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.755978 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:23Z","lastTransitionTime":"2026-01-31T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:23 crc kubenswrapper[4830]: E0131 09:02:23.770954 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:23Z is after 
2025-08-24T17:21:41Z" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.775533 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.775582 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.775615 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.775633 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.775643 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:23Z","lastTransitionTime":"2026-01-31T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:23 crc kubenswrapper[4830]: E0131 09:02:23.787220 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:23Z is after 
2025-08-24T17:21:41Z" Jan 31 09:02:23 crc kubenswrapper[4830]: E0131 09:02:23.787375 4830 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.789059 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.789137 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.789153 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.789171 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.789184 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:23Z","lastTransitionTime":"2026-01-31T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.892182 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.892223 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.892234 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.892254 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.892267 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:23Z","lastTransitionTime":"2026-01-31T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.995080 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.995163 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.995186 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.995234 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:23 crc kubenswrapper[4830]: I0131 09:02:23.995260 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:23Z","lastTransitionTime":"2026-01-31T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.098526 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.098561 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.098570 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.098583 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.098593 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:24Z","lastTransitionTime":"2026-01-31T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.201264 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.201310 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.201324 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.201340 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.201354 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:24Z","lastTransitionTime":"2026-01-31T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.250714 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.250822 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.250706 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.251315 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:24 crc kubenswrapper[4830]: E0131 09:02:24.251851 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:24 crc kubenswrapper[4830]: E0131 09:02:24.251988 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:24 crc kubenswrapper[4830]: E0131 09:02:24.252018 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:24 crc kubenswrapper[4830]: E0131 09:02:24.252208 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.260223 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 10:22:25.044895713 +0000 UTC Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.305346 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.305427 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.305441 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.305465 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.305501 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:24Z","lastTransitionTime":"2026-01-31T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.409291 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.409345 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.409362 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.409386 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.409405 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:24Z","lastTransitionTime":"2026-01-31T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.512878 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.512933 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.512942 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.512961 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.512976 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:24Z","lastTransitionTime":"2026-01-31T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.616909 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.616994 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.617013 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.617048 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.617068 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:24Z","lastTransitionTime":"2026-01-31T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.720443 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.720494 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.720505 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.720523 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.720534 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:24Z","lastTransitionTime":"2026-01-31T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.823789 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.823853 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.823865 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.823887 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.823900 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:24Z","lastTransitionTime":"2026-01-31T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.927188 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.927239 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.927252 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.927270 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:24 crc kubenswrapper[4830]: I0131 09:02:24.927284 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:24Z","lastTransitionTime":"2026-01-31T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.030196 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.030238 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.030247 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.030262 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.030273 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:25Z","lastTransitionTime":"2026-01-31T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.134531 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.134578 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.134589 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.134610 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.134623 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:25Z","lastTransitionTime":"2026-01-31T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.238155 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.238209 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.238224 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.238244 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.238261 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:25Z","lastTransitionTime":"2026-01-31T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.251773 4830 scope.go:117] "RemoveContainer" containerID="3a83288b35d051a945187b06a0f1e8f61aec52b6343034cc2f57354b61a9309b" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.260819 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 12:45:55.321948598 +0000 UTC Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.341835 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.341881 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.341892 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.341910 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.341922 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:25Z","lastTransitionTime":"2026-01-31T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.444501 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.444562 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.444573 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.444594 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.444604 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:25Z","lastTransitionTime":"2026-01-31T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.547018 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.547050 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.547058 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.547074 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.547085 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:25Z","lastTransitionTime":"2026-01-31T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.650320 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.650393 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.650403 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.650445 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.650461 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:25Z","lastTransitionTime":"2026-01-31T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.785948 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.785991 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.786001 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.786019 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.786028 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:25Z","lastTransitionTime":"2026-01-31T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.789944 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r8pc4_159b9801-57e3-4cf0-9b81-10aacb5eef83/ovnkube-controller/2.log" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.796386 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerStarted","Data":"766440d35d97de136fa66a347be009991bd05f76b51aff44c7369006f3196a4f"} Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.797432 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.813928 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.834365 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\
\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://766440d35d97de136fa66a347be009991bd05f76b51aff44c7369006f3196a4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a83288b35d051a945187b06a0f1e8f61aec52b6343034cc2f57354b61a9309b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:57Z\\\",\\\"message\\\":\\\"flector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.209441 6558 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.209613 6558 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.210444 6558 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.210672 6558 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.211135 6558 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.211342 6558 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.212161 6558 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.212266 6558 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:02:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.851929 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44acb8ed-5840-46fa-9ba1-1b89653e1478\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07cae4ce61629c9f8e48863d0775cf4fed46422db85ba8b29477e098b697fb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ac3b3a214c6bca20d7fdc92a49647dfdaf8de4391f331890f74900ab7eca11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7vq99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:25Z is after 2025-08-24T17:21:41Z" Jan 31 
09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.864294 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.878474 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5kl8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5kl8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.888108 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.888159 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.888171 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.888189 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.888201 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:25Z","lastTransitionTime":"2026-01-31T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.893810 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.909332 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.929025 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59f81b73056481d4e6eb23c2a98c3c088b5255b82cd28e0cad0ac2a9b271cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.944303 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.958398 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.974754 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9875f32d43bbc74af3de68db341e1562d735fcd5fba747d5ca7aceea458db68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:02:10Z\\\",\\\"message\\\":\\\"2026-01-31T09:01:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a3601813-05b9-4c26-9298-bb115810fa0c\\\\n2026-01-31T09:01:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a3601813-05b9-4c26-9298-bb115810fa0c to /host/opt/cni/bin/\\\\n2026-01-31T09:01:25Z [verbose] multus-daemon started\\\\n2026-01-31T09:01:25Z [verbose] Readiness Indicator file check\\\\n2026-01-31T09:02:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:02:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.991306 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.991364 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.991379 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.991404 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.991416 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:25Z","lastTransitionTime":"2026-01-31T09:02:25Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:25 crc kubenswrapper[4830]: I0131 09:02:25.996848 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:25Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.011691 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.026275 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.042152 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.058795 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026d8790-dc0a-472e-953a-66afc0fcd6e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dc96f3d1e085f925a6a1b73ef1312bd85072065059f20eb6c11f7d044635f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f1cea3266a97316fb0737cb770f6da2abfd58b016987b92c19aa20a9366129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c457892625099d1b14d857643ba5c70e76cfe582ee31c1b8736f4e278557ab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.077487 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.094266 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.094337 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.094347 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.094385 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.094397 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:26Z","lastTransitionTime":"2026-01-31T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.197516 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.197558 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.197569 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.197587 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.197599 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:26Z","lastTransitionTime":"2026-01-31T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.251068 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.251116 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.251075 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.251183 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:26 crc kubenswrapper[4830]: E0131 09:02:26.251758 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:26 crc kubenswrapper[4830]: E0131 09:02:26.251878 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:26 crc kubenswrapper[4830]: E0131 09:02:26.251946 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:26 crc kubenswrapper[4830]: E0131 09:02:26.252099 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.261827 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 22:32:55.111931869 +0000 UTC Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.267563 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.289945 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://766440d35d97de136fa66a347be009991bd05f76
b51aff44c7369006f3196a4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a83288b35d051a945187b06a0f1e8f61aec52b6343034cc2f57354b61a9309b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:57Z\\\",\\\"message\\\":\\\"flector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.209441 6558 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.209613 6558 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.210444 6558 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.210672 6558 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.211135 6558 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.211342 6558 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.212161 6558 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.212266 6558 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:02:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.300664 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.301032 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.301128 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.301244 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.301311 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:26Z","lastTransitionTime":"2026-01-31T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.305308 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44acb8ed-5840-46fa-9ba1-1b89653e1478\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07cae4ce61629c9f8e48863d0775cf4fed46422db85ba8b29477e098b697fb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ac3b3a214c6bca20d7fdc92a49647dfdaf8de4391f331890f74900ab7eca11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7vq99\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.315703 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.327289 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5kl8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5kl8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.350412 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.367762 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.386892 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59f81b73056481d4e6eb23c2a98c3c088b5255b82cd28e0cad0ac2a9b271cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.403322 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.404171 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.404232 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.404246 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.404271 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.404286 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:26Z","lastTransitionTime":"2026-01-31T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.416325 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.429488 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9875f32d43bbc74af3de68db341e1562d735fcd5fba747d5ca7aceea458db68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:02:10Z\\\",\\\"message\\\":\\\"2026-01-31T09:01:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a3601813-05b9-4c26-9298-bb115810fa0c\\\\n2026-01-31T09:01:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a3601813-05b9-4c26-9298-bb115810fa0c to /host/opt/cni/bin/\\\\n2026-01-31T09:01:25Z [verbose] multus-daemon started\\\\n2026-01-31T09:01:25Z [verbose] Readiness Indicator file check\\\\n2026-01-31T09:02:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:02:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.444882 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.457928 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.470932 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.484673 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.499212 4830 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026d8790-dc0a-472e-953a-66afc0fcd6e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dc96f3d1e085f925a6a1b73ef1312bd85072065059f20eb6c11f7d044635f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f1cea3266a97316fb0737cb770f6da2abfd58b016987b92c19aa20a9366129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c457892625099d1b14d857643ba5c70e76cfe582ee31c1b8736f4e278557ab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.508245 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.508294 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.508306 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.508323 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.508337 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:26Z","lastTransitionTime":"2026-01-31T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.517174 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.613399 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.613436 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.613446 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.613463 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.613473 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:26Z","lastTransitionTime":"2026-01-31T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.716206 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.716260 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.716270 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.716291 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.716303 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:26Z","lastTransitionTime":"2026-01-31T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.801923 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r8pc4_159b9801-57e3-4cf0-9b81-10aacb5eef83/ovnkube-controller/3.log" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.802412 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r8pc4_159b9801-57e3-4cf0-9b81-10aacb5eef83/ovnkube-controller/2.log" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.804596 4830 generic.go:334] "Generic (PLEG): container finished" podID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerID="766440d35d97de136fa66a347be009991bd05f76b51aff44c7369006f3196a4f" exitCode=1 Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.804641 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerDied","Data":"766440d35d97de136fa66a347be009991bd05f76b51aff44c7369006f3196a4f"} Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.804679 4830 scope.go:117] "RemoveContainer" containerID="3a83288b35d051a945187b06a0f1e8f61aec52b6343034cc2f57354b61a9309b" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.805265 4830 scope.go:117] "RemoveContainer" containerID="766440d35d97de136fa66a347be009991bd05f76b51aff44c7369006f3196a4f" Jan 31 09:02:26 crc kubenswrapper[4830]: E0131 09:02:26.805409 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-r8pc4_openshift-ovn-kubernetes(159b9801-57e3-4cf0-9b81-10aacb5eef83)\"" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.821427 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.821472 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.821482 4830 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.821498 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.821511 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:26Z","lastTransitionTime":"2026-01-31T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.825676 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.847419 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\
",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://766440d35d97de136fa66a347be009991bd05f76b51aff44c7369006f3196a4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a83288b35d051a945187b06a0f1e8f61aec52b6343034cc2f57354b61a9309b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:57Z\\\",\\\"message\\\":\\\"flector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.209441 6558 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.209613 6558 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.210444 6558 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.210672 6558 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.211135 6558 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.211342 6558 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:57.212161 6558 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:57.212266 6558 reflector.go:311] Stopping reflector 
*v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://766440d35d97de136fa66a347be009991bd05f76b51aff44c7369006f3196a4f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:02:26Z\\\",\\\"message\\\":\\\"default : 3.977794ms\\\\nI0131 09:02:26.237698 6968 services_controller.go:451] Built service openshift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0131 09:02:26.237860 6968 services_controller.go:356] Processing sync for service openshift-marketplace/marketplace-operator-metrics for network=default\\\\nI0131 09:02:26.237838 6968 services_controller.go:451] Built service openshift-kube-apiserver/apiserver cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, 
Opts:services.LBOpts{Reject:tr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:02:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.866146 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44acb8ed-5840-46fa-9ba1-1b89653e1478\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07cae4ce61629c9f8e48863d0775cf4fed46422db85ba8b29477e098b697fb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ac3b3a214c6bca20d7fdc92a49647dfdaf8de4391f331890f74900ab7eca11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7vq99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.878185 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5kl8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5kl8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.896587 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.915530 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.923548 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.923594 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.923626 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.923645 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.923663 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:26Z","lastTransitionTime":"2026-01-31T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.928958 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.946650 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.959690 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.978277 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9875f32d43bbc74af3de68db341e1562d735fcd5fba747d5ca7aceea458db68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:02:10Z\\\",\\\"message\\\":\\\"2026-01-31T09:01:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a3601813-05b9-4c26-9298-bb115810fa0c\\\\n2026-01-31T09:01:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a3601813-05b9-4c26-9298-bb115810fa0c to /host/opt/cni/bin/\\\\n2026-01-31T09:01:25Z [verbose] multus-daemon started\\\\n2026-01-31T09:01:25Z [verbose] Readiness Indicator file check\\\\n2026-01-31T09:02:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:02:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:26 crc kubenswrapper[4830]: I0131 09:02:26.995429 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59f81b73056481d4e6eb23c2a98c3c088b5255b82cd28e0cad0ac2a9b271cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.008778 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.021123 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.025980 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.026022 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.026035 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.026053 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.026064 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:27Z","lastTransitionTime":"2026-01-31T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.031929 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.042853 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026d8790-dc0a-472e-953a-66afc0fcd6e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dc96f3d1e085f925a6a1b73ef1312bd85072065059f20eb6c11f7d044635f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f1cea3266a97316fb0737cb770f6da2abfd58b016987b92c19aa20a9366129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c457892625099d1b14d857643ba5c70e76cfe582ee31c1b8736f4e278557ab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.055050 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.073539 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.129102 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.129148 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.129156 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.129171 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.129184 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:27Z","lastTransitionTime":"2026-01-31T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.232590 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.232636 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.232645 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.232686 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.232698 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:27Z","lastTransitionTime":"2026-01-31T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.262201 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 14:12:38.596021215 +0000 UTC Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.336502 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.336556 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.336569 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.336588 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.336602 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:27Z","lastTransitionTime":"2026-01-31T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.440785 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.440862 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.440881 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.440911 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.440931 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:27Z","lastTransitionTime":"2026-01-31T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.543561 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.543628 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.543639 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.543658 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.543670 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:27Z","lastTransitionTime":"2026-01-31T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.647837 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.647892 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.647902 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.647922 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.647932 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:27Z","lastTransitionTime":"2026-01-31T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.751526 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.751609 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.751644 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.751667 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.751685 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:27Z","lastTransitionTime":"2026-01-31T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.810714 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r8pc4_159b9801-57e3-4cf0-9b81-10aacb5eef83/ovnkube-controller/3.log" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.817573 4830 scope.go:117] "RemoveContainer" containerID="766440d35d97de136fa66a347be009991bd05f76b51aff44c7369006f3196a4f" Jan 31 09:02:27 crc kubenswrapper[4830]: E0131 09:02:27.817999 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-r8pc4_openshift-ovn-kubernetes(159b9801-57e3-4cf0-9b81-10aacb5eef83)\"" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.835293 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identi
ty-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.855147 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.855226 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.855242 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.855261 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.855276 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:27Z","lastTransitionTime":"2026-01-31T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.860432 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://766440d35d97de136fa66a347be009991bd05f76b51aff44c7369006f3196a4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://766440d35d97de136fa66a347be009991bd05f76b51aff44c7369006f3196a4f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:02:26Z\\\",\\\"message\\\":\\\"default : 3.977794ms\\\\nI0131 09:02:26.237698 6968 services_controller.go:451] Built service openshift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0131 09:02:26.237860 6968 services_controller.go:356] Processing sync for service openshift-marketplace/marketplace-operator-metrics for network=default\\\\nI0131 09:02:26.237838 6968 services_controller.go:451] Built service openshift-kube-apiserver/apiserver cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, 
Opts:services.LBOpts{Reject:tr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:02:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-r8pc4_openshift-ovn-kubernetes(159b9801-57e3-4cf0-9b81-10aacb5eef83)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.873867 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44acb8ed-5840-46fa-9ba1-1b89653e1478\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07cae4ce61629c9f8e48863d0775cf4fed46422db85ba8b29477e098b697fb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ac3b3a214c6bca20d7fdc92a49647dfdaf8de4391f331890f74900ab7eca11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7vq99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 
09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.893635 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.909078 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.922197 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.936756 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5kl8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5kl8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.953085 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.959140 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.959211 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.959228 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.959251 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.959265 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:27Z","lastTransitionTime":"2026-01-31T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.967620 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4830]: I0131 09:02:27.984076 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9875f32d43bbc74af3de68db341e1562d735fcd5fba747d5ca7aceea458db68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:02:10Z\\\",\\\"message\\\":\\\"2026-01-31T09:01:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a3601813-05b9-4c26-9298-bb115810fa0c\\\\n2026-01-31T09:01:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a3601813-05b9-4c26-9298-bb115810fa0c to /host/opt/cni/bin/\\\\n2026-01-31T09:01:25Z [verbose] multus-daemon started\\\\n2026-01-31T09:01:25Z [verbose] Readiness Indicator file check\\\\n2026-01-31T09:02:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:02:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.004847 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59f81b73056481d4e6eb23c2a98c3c088b5255b82cd28e0cad0ac2a9b271cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.019598 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.033852 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.048603 4830 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026d8790-dc0a-472e-953a-66afc0fcd6e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dc96f3d1e085f925a6a1b73ef1312bd85072065059f20eb6c11f7d044635f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f1cea3266a97316fb0737cb770f6da2abfd58b016987b92c19aa20a9366129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c457892625099d1b14d857643ba5c70e76cfe582ee31c1b8736f4e278557ab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.062859 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.062935 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.062951 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.062982 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.062998 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:28Z","lastTransitionTime":"2026-01-31T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.063469 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.081874 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.103463 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.165998 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.166420 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.166630 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.166831 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.167047 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:28Z","lastTransitionTime":"2026-01-31T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.251347 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:28 crc kubenswrapper[4830]: E0131 09:02:28.251792 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.252078 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.252219 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.252075 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:28 crc kubenswrapper[4830]: E0131 09:02:28.252231 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:28 crc kubenswrapper[4830]: E0131 09:02:28.252513 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:02:28 crc kubenswrapper[4830]: E0131 09:02:28.252641 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.263104 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 18:58:51.513257694 +0000 UTC Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.269510 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.270997 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.271106 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.271167 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.271228 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.271285 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:28Z","lastTransitionTime":"2026-01-31T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.373507 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.373573 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.373595 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.373629 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.373655 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:28Z","lastTransitionTime":"2026-01-31T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.476856 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.477180 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.477260 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.477409 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.477488 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:28Z","lastTransitionTime":"2026-01-31T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.580485 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.580542 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.580552 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.580566 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.580575 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:28Z","lastTransitionTime":"2026-01-31T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.683187 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.683226 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.683236 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.683252 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.683265 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:28Z","lastTransitionTime":"2026-01-31T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.785690 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.785759 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.785772 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.785794 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.785809 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:28Z","lastTransitionTime":"2026-01-31T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.889155 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.889201 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.889212 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.889230 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.889241 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:28Z","lastTransitionTime":"2026-01-31T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.992316 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.992406 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.992415 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.992435 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:28 crc kubenswrapper[4830]: I0131 09:02:28.992447 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:28Z","lastTransitionTime":"2026-01-31T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.141524 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.141583 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.141595 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.141615 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.141631 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:29Z","lastTransitionTime":"2026-01-31T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.245162 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.245208 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.245219 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.245237 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.245247 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:29Z","lastTransitionTime":"2026-01-31T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.263853 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 22:25:17.390867719 +0000 UTC Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.348583 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.348631 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.348642 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.348661 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.348674 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:29Z","lastTransitionTime":"2026-01-31T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.452003 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.452049 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.452059 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.452078 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.452093 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:29Z","lastTransitionTime":"2026-01-31T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.555826 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.555871 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.555880 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.555898 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.555909 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:29Z","lastTransitionTime":"2026-01-31T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.660022 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.660135 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.660167 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.660210 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.660237 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:29Z","lastTransitionTime":"2026-01-31T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.764824 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.764881 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.764893 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.764917 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.764931 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:29Z","lastTransitionTime":"2026-01-31T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.867715 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.867857 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.867873 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.867902 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.867913 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:29Z","lastTransitionTime":"2026-01-31T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.971419 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.971478 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.971491 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.971512 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:29 crc kubenswrapper[4830]: I0131 09:02:29.971524 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:29Z","lastTransitionTime":"2026-01-31T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.074792 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.074892 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.074907 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.074944 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.074959 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:30Z","lastTransitionTime":"2026-01-31T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.178269 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.178335 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.178353 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.178379 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.178401 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:30Z","lastTransitionTime":"2026-01-31T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.250997 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:30 crc kubenswrapper[4830]: E0131 09:02:30.251252 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.251635 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:30 crc kubenswrapper[4830]: E0131 09:02:30.251847 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.252210 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.252277 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:30 crc kubenswrapper[4830]: E0131 09:02:30.252553 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:30 crc kubenswrapper[4830]: E0131 09:02:30.252622 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.264629 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 01:30:35.723738846 +0000 UTC Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.281701 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.281851 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.281871 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.281896 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.281916 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:30Z","lastTransitionTime":"2026-01-31T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.385381 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.385454 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.385471 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.385495 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.385515 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:30Z","lastTransitionTime":"2026-01-31T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.489274 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.489358 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.489378 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.489405 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.489424 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:30Z","lastTransitionTime":"2026-01-31T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.592933 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.593019 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.593051 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.593093 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.593119 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:30Z","lastTransitionTime":"2026-01-31T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.697478 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.697591 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.697626 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.697664 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.697689 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:30Z","lastTransitionTime":"2026-01-31T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.800525 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.800609 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.800627 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.800660 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.800677 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:30Z","lastTransitionTime":"2026-01-31T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.905025 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.905093 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.905111 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.905137 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:30 crc kubenswrapper[4830]: I0131 09:02:30.905158 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:30Z","lastTransitionTime":"2026-01-31T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.008395 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.008490 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.008511 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.008533 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.008546 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:31Z","lastTransitionTime":"2026-01-31T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.111619 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.111680 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.111700 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.111743 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.111760 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:31Z","lastTransitionTime":"2026-01-31T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.214709 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.214822 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.214835 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.214852 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.214866 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:31Z","lastTransitionTime":"2026-01-31T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.264991 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 07:19:35.905583081 +0000 UTC Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.317569 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.317627 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.317646 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.317667 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.317679 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:31Z","lastTransitionTime":"2026-01-31T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.421672 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.421772 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.421792 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.421856 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.421875 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:31Z","lastTransitionTime":"2026-01-31T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.524588 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.524643 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.524655 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.524677 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.524693 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:31Z","lastTransitionTime":"2026-01-31T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.627384 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.627420 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.627433 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.627449 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.627462 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:31Z","lastTransitionTime":"2026-01-31T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.730196 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.730744 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.730757 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.730776 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.730789 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:31Z","lastTransitionTime":"2026-01-31T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.833535 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.833602 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.833612 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.833629 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.833639 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:31Z","lastTransitionTime":"2026-01-31T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.936868 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.936922 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.936932 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.936950 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:31 crc kubenswrapper[4830]: I0131 09:02:31.936963 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:31Z","lastTransitionTime":"2026-01-31T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.039078 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.039134 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.039157 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.039182 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.039195 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:32Z","lastTransitionTime":"2026-01-31T09:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.142326 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.142384 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.142397 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.142418 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.142436 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:32Z","lastTransitionTime":"2026-01-31T09:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.244925 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.244994 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.245010 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.245032 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.245046 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:32Z","lastTransitionTime":"2026-01-31T09:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.250609 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.250647 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.250609 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:32 crc kubenswrapper[4830]: E0131 09:02:32.250831 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.250843 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:32 crc kubenswrapper[4830]: E0131 09:02:32.251090 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:32 crc kubenswrapper[4830]: E0131 09:02:32.251217 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:32 crc kubenswrapper[4830]: E0131 09:02:32.251464 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.265523 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 10:17:05.461540632 +0000 UTC Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.347714 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.347775 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.347789 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.347813 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.347827 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:32Z","lastTransitionTime":"2026-01-31T09:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.450596 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.450663 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.450683 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.450705 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.450739 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:32Z","lastTransitionTime":"2026-01-31T09:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.554019 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.554085 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.554098 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.554119 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.554138 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:32Z","lastTransitionTime":"2026-01-31T09:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.656593 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.656640 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.656649 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.656667 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.656676 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:32Z","lastTransitionTime":"2026-01-31T09:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.759238 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.759274 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.759284 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.759303 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.759315 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:32Z","lastTransitionTime":"2026-01-31T09:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.861897 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.861977 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.861994 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.862026 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.862043 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:32Z","lastTransitionTime":"2026-01-31T09:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.965519 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.965568 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.965577 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.965595 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:32 crc kubenswrapper[4830]: I0131 09:02:32.965605 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:32Z","lastTransitionTime":"2026-01-31T09:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.068597 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.068653 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.068664 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.068683 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.068699 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:33Z","lastTransitionTime":"2026-01-31T09:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.172086 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.172137 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.172151 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.172183 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.172204 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:33Z","lastTransitionTime":"2026-01-31T09:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.265886 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 12:47:16.979169763 +0000 UTC Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.275253 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.275299 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.275314 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.275335 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.275350 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:33Z","lastTransitionTime":"2026-01-31T09:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.378391 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.378431 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.378440 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.378458 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.378468 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:33Z","lastTransitionTime":"2026-01-31T09:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.481586 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.481633 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.481643 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.481663 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.481675 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:33Z","lastTransitionTime":"2026-01-31T09:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.584614 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.584663 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.584673 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.584692 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.584706 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:33Z","lastTransitionTime":"2026-01-31T09:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.687544 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.687633 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.687643 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.687660 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.687670 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:33Z","lastTransitionTime":"2026-01-31T09:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.790167 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.790298 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.790317 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.790371 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.790388 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:33Z","lastTransitionTime":"2026-01-31T09:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.893359 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.893411 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.893423 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.893441 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.893453 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:33Z","lastTransitionTime":"2026-01-31T09:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.996051 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.996107 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.996118 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.996137 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:33 crc kubenswrapper[4830]: I0131 09:02:33.996152 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:33Z","lastTransitionTime":"2026-01-31T09:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.091901 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.091962 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.091976 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.091998 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.092012 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:34Z","lastTransitionTime":"2026-01-31T09:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:34 crc kubenswrapper[4830]: E0131 09:02:34.113143 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.123931 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.124042 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.124067 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.124098 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.124127 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:34Z","lastTransitionTime":"2026-01-31T09:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:34 crc kubenswrapper[4830]: E0131 09:02:34.141669 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.147239 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.147306 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.147318 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.147340 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.147354 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:34Z","lastTransitionTime":"2026-01-31T09:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:34 crc kubenswrapper[4830]: E0131 09:02:34.162315 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.166745 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.166790 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.166804 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.166826 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.166842 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:34Z","lastTransitionTime":"2026-01-31T09:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:34 crc kubenswrapper[4830]: E0131 09:02:34.181374 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.184819 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.184877 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.184891 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.184913 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.184926 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:34Z","lastTransitionTime":"2026-01-31T09:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:34 crc kubenswrapper[4830]: E0131 09:02:34.197419 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:34 crc kubenswrapper[4830]: E0131 09:02:34.197612 4830 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.199574 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.199615 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.199625 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.199646 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.199660 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:34Z","lastTransitionTime":"2026-01-31T09:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.251385 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.251495 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.251520 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.251528 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:34 crc kubenswrapper[4830]: E0131 09:02:34.251843 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:02:34 crc kubenswrapper[4830]: E0131 09:02:34.251951 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:34 crc kubenswrapper[4830]: E0131 09:02:34.252015 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:34 crc kubenswrapper[4830]: E0131 09:02:34.252056 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.267050 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 11:45:56.356036623 +0000 UTC Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.302001 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.302035 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.302046 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.302062 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.302073 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:34Z","lastTransitionTime":"2026-01-31T09:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.405354 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.405434 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.405450 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.405473 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.405488 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:34Z","lastTransitionTime":"2026-01-31T09:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.508626 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.508666 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.508675 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.508692 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.508702 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:34Z","lastTransitionTime":"2026-01-31T09:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.611699 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.611764 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.611778 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.611798 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.611810 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:34Z","lastTransitionTime":"2026-01-31T09:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.714639 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.714750 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.714766 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.714790 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.714804 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:34Z","lastTransitionTime":"2026-01-31T09:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.817047 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.817105 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.817121 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.817141 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.817158 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:34Z","lastTransitionTime":"2026-01-31T09:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.919749 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.919790 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.919811 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.919830 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:34 crc kubenswrapper[4830]: I0131 09:02:34.919842 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:34Z","lastTransitionTime":"2026-01-31T09:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.022291 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.022332 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.022343 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.022362 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.022375 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:35Z","lastTransitionTime":"2026-01-31T09:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.125150 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.125194 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.125203 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.125222 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.125233 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:35Z","lastTransitionTime":"2026-01-31T09:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.228126 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.228177 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.228185 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.228201 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.228214 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:35Z","lastTransitionTime":"2026-01-31T09:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.267529 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 15:22:32.547645056 +0000 UTC Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.331577 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.331636 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.331652 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.331669 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.331683 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:35Z","lastTransitionTime":"2026-01-31T09:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.434454 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.434512 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.434522 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.434540 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.434552 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:35Z","lastTransitionTime":"2026-01-31T09:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.538182 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.538256 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.538295 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.538321 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.538338 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:35Z","lastTransitionTime":"2026-01-31T09:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.641266 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.641319 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.641331 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.641349 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.641361 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:35Z","lastTransitionTime":"2026-01-31T09:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.744303 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.744354 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.744363 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.744376 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.744386 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:35Z","lastTransitionTime":"2026-01-31T09:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.846696 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.846784 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.846802 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.846819 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.846833 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:35Z","lastTransitionTime":"2026-01-31T09:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.950681 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.950746 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.950759 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.950775 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:35 crc kubenswrapper[4830]: I0131 09:02:35.950787 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:35Z","lastTransitionTime":"2026-01-31T09:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.053567 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.053623 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.053637 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.053661 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.053676 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:36Z","lastTransitionTime":"2026-01-31T09:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.156147 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.156206 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.156218 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.156250 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.156263 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:36Z","lastTransitionTime":"2026-01-31T09:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.251564 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:36 crc kubenswrapper[4830]: E0131 09:02:36.251855 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.251941 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.251990 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.252090 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:36 crc kubenswrapper[4830]: E0131 09:02:36.252101 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:36 crc kubenswrapper[4830]: E0131 09:02:36.252210 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:02:36 crc kubenswrapper[4830]: E0131 09:02:36.252386 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.260273 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.260318 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.260332 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.260347 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.260358 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:36Z","lastTransitionTime":"2026-01-31T09:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.268877 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 12:01:15.312044055 +0000 UTC Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.268999 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0468c8e291590c1fa2ffc9b7786bbbf6f3cfb7889fc3adaf7f7a4d3b0edcb1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4601859d7e52904b3390b2ebca48210c4b3bf132b00a4735a1dbe6cfdc7bd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.302289 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159b9801-57e3-4cf0-9b81-10aacb5eef83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://766440d35d97de136fa66a347be009991bd05f76b51aff44c7369006f3196a4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://766440d35d97de136fa66a347be009991bd05f76b51aff44c7369006f3196a4f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:02:26Z\\\",\\\"message\\\":\\\"default : 3.977794ms\\\\nI0131 09:02:26.237698 6968 services_controller.go:451] Built service openshift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0131 09:02:26.237860 6968 services_controller.go:356] Processing sync for service openshift-marketplace/marketplace-operator-metrics for network=default\\\\nI0131 09:02:26.237838 6968 services_controller.go:451] Built service openshift-kube-apiserver/apiserver cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:tr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:02:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-r8pc4_openshift-ovn-kubernetes(159b9801-57e3-4cf0-9b81-10aacb5eef83)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nvq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r8pc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.315817 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44acb8ed-5840-46fa-9ba1-1b89653e1478\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07cae4ce61629c9f8e48863d0775cf4fed46422db85ba8b29477e098b697fb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ac3b3a214c6bca20d7fdc92a49647dfdaf8de4391f331890f74900ab7eca11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9w5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7vq99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.339471 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.360970 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.364206 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.364254 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.364269 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.364291 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.364304 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:36Z","lastTransitionTime":"2026-01-31T09:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.377582 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zt78q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8a0ccd-540b-4151-a34d-438e433cb141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362d0fc182d79e72720f3686e7fb5219372cf72d8be09c8086713b692e8d66d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6zlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zt78q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.390989 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5kl8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgvfn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5kl8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.405364 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.415536 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmbpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca325f50-edf0-4f3d-ab92-17f40a73d274\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d2a0a6bafefdee2120d6573808366f2455c8606c350f69b9e62bfb2903f6303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p56d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmbpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.429375 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjqbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e133cc-19e8-4770-9146-88dac53a6531\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9875f32d43bbc74af3de68db341e1562d735fcd5fba747d5ca7aceea458db68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:02:10Z\\\",\\\"message\\\":\\\"2026-01-31T09:01:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a3601813-05b9-4c26-9298-bb115810fa0c\\\\n2026-01-31T09:01:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a3601813-05b9-4c26-9298-bb115810fa0c to /host/opt/cni/bin/\\\\n2026-01-31T09:01:25Z [verbose] multus-daemon started\\\\n2026-01-31T09:01:25Z [verbose] Readiness Indicator file check\\\\n2026-01-31T09:02:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:02:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-msp6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjqbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.450082 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x27jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"227117cb-01d3-4e44-9da3-b1d577fb3ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59f81b73056481d4e6eb23c2a98c3c088b5255b82cd28e0cad0ac2a9b271cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9804ac2b3c56a17dc1665cc7ca0c622f9f7a5dbf19f804b155c517a5be61e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e777b7bcec1e8930ac1c280460b5733614b633b1ed72522e124f8585ef5cb38d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10692d497b8cf269a439c603376a958fcead6d7ee2338eff4e014a75eaa26417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1275255b2597fbb09db9612f496ab3177afffe0c7d74d406bc4b1bc861b5b54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443af6aeef0b259a54c898d9c08f6e39d663ddf06d36d8712ee8ebb7fd2ebf5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55fc9c3b6f0e9526860d830179e759cb2eae3c52847b675db6ccce9b6ed2766\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wndtp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x27jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.464103 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"158dbfda-9b0a-4809-9946-3c6ee2d082dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f2590c48b20124bb8d0271755d430719ece306dbdc95acc26258abaf331ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqp59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gt7kd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.468136 4830 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.468182 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.468191 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.468210 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.468221 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:36Z","lastTransitionTime":"2026-01-31T09:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.481500 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026d8790-dc0a-472e-953a-66afc0fcd6e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dc96f3d1e085f925a6a1b73ef1312bd85072065059f20eb6c11f7d044635f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f1cea3266a97316fb0737cb770f6da2abfd58b016987b92c19aa20a9366129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c457892625099d1b14d857643ba5c70e76cfe582ee31c1b8736f4e278557ab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ecd1234e4873862db88981fcc0a8c9fd9fc7f913649528a5c274c2feb4617b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.497522 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20ed341f-ef9c-4242-981d-80c09f22a37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28d65a2b0e667ff1bd912564180a6a9ce77e91a104c6028e19bc58e0ab3b295d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6889405df5b8cc9e09b36c99a5db06048c9de193dd5744d2b48c07f42c477bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e99664db53d57a91882867cdf4ab33d52a2e165c53f91cd1b918a32c49a7afa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.509215 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6062c552-a94d-43b4-a946-1bd6e3268786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ad13b4b3b7a21a296839b27f9730dcfd25d38b53430aa75e642c6bf04cb365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9a5ad1758f6e487cb246cd0b326198c357b07fa83729681d0e68a5a358c811f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9a5ad1758f6e487cb246cd0b326198c357b07fa83729681d0e68a5a358c811f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.524701 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c33f93dea1eda6a26c80682447157b317a4aea4af66d200c5bdfc162ad593e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:36Z is after 
2025-08-24T17:21:41Z" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.538292 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.549989 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2018dd8e7153f3ce64992dc6f931ae09c5f77931cd0743a9fe2557673b6a41f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.571515 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.571574 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.571586 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.571607 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.571620 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:36Z","lastTransitionTime":"2026-01-31T09:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.673629 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.673671 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.673682 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.673699 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.673712 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:36Z","lastTransitionTime":"2026-01-31T09:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.776635 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.776701 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.776719 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.776764 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.776779 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:36Z","lastTransitionTime":"2026-01-31T09:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.880114 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.880192 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.880211 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.880242 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.880263 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:36Z","lastTransitionTime":"2026-01-31T09:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.983477 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.983547 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.983564 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.983591 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:36 crc kubenswrapper[4830]: I0131 09:02:36.983611 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:36Z","lastTransitionTime":"2026-01-31T09:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.086564 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.086623 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.086637 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.086660 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.086674 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:37Z","lastTransitionTime":"2026-01-31T09:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.189538 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.189636 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.189659 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.189693 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.189753 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:37Z","lastTransitionTime":"2026-01-31T09:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.269411 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 01:48:33.901621878 +0000 UTC Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.292787 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.292882 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.292899 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.292925 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.292944 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:37Z","lastTransitionTime":"2026-01-31T09:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.396774 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.396857 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.396881 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.396912 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.396935 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:37Z","lastTransitionTime":"2026-01-31T09:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.500452 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.500501 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.500516 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.500580 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.500596 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:37Z","lastTransitionTime":"2026-01-31T09:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.603818 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.603873 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.603885 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.603903 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.603916 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:37Z","lastTransitionTime":"2026-01-31T09:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.707512 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.707595 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.707615 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.707639 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.707657 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:37Z","lastTransitionTime":"2026-01-31T09:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.811674 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.811784 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.811816 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.811861 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.811889 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:37Z","lastTransitionTime":"2026-01-31T09:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.914401 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.914445 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.914455 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.914475 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:37 crc kubenswrapper[4830]: I0131 09:02:37.914488 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:37Z","lastTransitionTime":"2026-01-31T09:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.017274 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.017357 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.017379 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.017412 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.017434 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:38Z","lastTransitionTime":"2026-01-31T09:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.120464 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.120511 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.120523 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.120543 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.120557 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:38Z","lastTransitionTime":"2026-01-31T09:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.223904 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.223949 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.223961 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.223980 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.223992 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:38Z","lastTransitionTime":"2026-01-31T09:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.250829 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.250934 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.250885 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.250850 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:38 crc kubenswrapper[4830]: E0131 09:02:38.251067 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:38 crc kubenswrapper[4830]: E0131 09:02:38.251184 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:38 crc kubenswrapper[4830]: E0131 09:02:38.251267 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:02:38 crc kubenswrapper[4830]: E0131 09:02:38.251357 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.269949 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 14:01:44.426815045 +0000 UTC Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.327081 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.327160 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.327183 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.327208 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.327227 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:38Z","lastTransitionTime":"2026-01-31T09:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.430616 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.430696 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.430760 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.430805 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.430832 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:38Z","lastTransitionTime":"2026-01-31T09:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.534243 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.534301 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.534318 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.534339 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.534351 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:38Z","lastTransitionTime":"2026-01-31T09:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.637634 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.637690 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.637702 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.637751 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.637767 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:38Z","lastTransitionTime":"2026-01-31T09:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.740396 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.740484 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.740507 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.740540 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.740565 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:38Z","lastTransitionTime":"2026-01-31T09:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.843499 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.843556 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.843570 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.843591 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.843604 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:38Z","lastTransitionTime":"2026-01-31T09:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.946780 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.946836 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.946853 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.946877 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:38 crc kubenswrapper[4830]: I0131 09:02:38.946896 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:38Z","lastTransitionTime":"2026-01-31T09:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.050145 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.050227 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.050249 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.050279 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.050301 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:39Z","lastTransitionTime":"2026-01-31T09:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.153555 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.153619 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.153635 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.153663 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.153680 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:39Z","lastTransitionTime":"2026-01-31T09:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.257338 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.257395 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.257453 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.257475 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.257486 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:39Z","lastTransitionTime":"2026-01-31T09:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.270840 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 20:12:10.939346502 +0000 UTC Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.361198 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.361233 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.361246 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.361264 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.361276 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:39Z","lastTransitionTime":"2026-01-31T09:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.464563 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.464636 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.464655 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.464683 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.464707 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:39Z","lastTransitionTime":"2026-01-31T09:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.567247 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.567290 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.567326 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.567344 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.567356 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:39Z","lastTransitionTime":"2026-01-31T09:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.670860 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.670940 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.670961 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.670993 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.671014 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:39Z","lastTransitionTime":"2026-01-31T09:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.773961 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.774033 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.774056 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.774085 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.774105 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:39Z","lastTransitionTime":"2026-01-31T09:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.876875 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.876935 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.876946 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.876968 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.876989 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:39Z","lastTransitionTime":"2026-01-31T09:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.980404 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.980476 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.980502 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.980535 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:39 crc kubenswrapper[4830]: I0131 09:02:39.980559 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:39Z","lastTransitionTime":"2026-01-31T09:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.084990 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.085099 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.085119 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.085148 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.085174 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:40Z","lastTransitionTime":"2026-01-31T09:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.187385 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.187434 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.187448 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.187468 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.187482 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:40Z","lastTransitionTime":"2026-01-31T09:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.250876 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.251009 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.251259 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.251297 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:40 crc kubenswrapper[4830]: E0131 09:02:40.251407 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:40 crc kubenswrapper[4830]: E0131 09:02:40.251532 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:40 crc kubenswrapper[4830]: E0131 09:02:40.251707 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.251812 4830 scope.go:117] "RemoveContainer" containerID="766440d35d97de136fa66a347be009991bd05f76b51aff44c7369006f3196a4f" Jan 31 09:02:40 crc kubenswrapper[4830]: E0131 09:02:40.251819 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:40 crc kubenswrapper[4830]: E0131 09:02:40.252070 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-r8pc4_openshift-ovn-kubernetes(159b9801-57e3-4cf0-9b81-10aacb5eef83)\"" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.271866 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 04:54:11.5739209 +0000 UTC Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.290360 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.290438 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.290450 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.290472 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.290491 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:40Z","lastTransitionTime":"2026-01-31T09:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.393173 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.393213 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.393221 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.393243 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.393253 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:40Z","lastTransitionTime":"2026-01-31T09:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.496508 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.496563 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.496573 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.496596 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.496606 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:40Z","lastTransitionTime":"2026-01-31T09:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.599669 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.599795 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.599809 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.599829 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.599844 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:40Z","lastTransitionTime":"2026-01-31T09:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.703474 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.703548 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.703567 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.703625 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.703639 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:40Z","lastTransitionTime":"2026-01-31T09:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.820005 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.820072 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.820090 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.820118 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.820143 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:40Z","lastTransitionTime":"2026-01-31T09:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.923083 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.923133 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.923144 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.923163 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.923173 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:40Z","lastTransitionTime":"2026-01-31T09:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:40 crc kubenswrapper[4830]: I0131 09:02:40.937237 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs\") pod \"network-metrics-daemon-5kl8z\" (UID: \"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\") " pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:40 crc kubenswrapper[4830]: E0131 09:02:40.937444 4830 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:02:40 crc kubenswrapper[4830]: E0131 09:02:40.937567 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs podName:c1fa30e4-0c03-43ab-9c37-f7ec86153b27 nodeName:}" failed. No retries permitted until 2026-01-31 09:03:44.937531106 +0000 UTC m=+169.430893588 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs") pod "network-metrics-daemon-5kl8z" (UID: "c1fa30e4-0c03-43ab-9c37-f7ec86153b27") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.025741 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.025790 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.025812 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.025835 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.025846 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:41Z","lastTransitionTime":"2026-01-31T09:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.129675 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.129774 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.129788 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.129813 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.129826 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:41Z","lastTransitionTime":"2026-01-31T09:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.233131 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.233207 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.233221 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.233246 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.233259 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:41Z","lastTransitionTime":"2026-01-31T09:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.272637 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 04:46:18.780754536 +0000 UTC Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.336713 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.336810 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.336827 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.336852 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.336870 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:41Z","lastTransitionTime":"2026-01-31T09:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.440197 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.440268 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.440292 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.440321 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.440342 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:41Z","lastTransitionTime":"2026-01-31T09:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.543925 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.543974 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.543988 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.544007 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.544021 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:41Z","lastTransitionTime":"2026-01-31T09:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.647806 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.647852 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.647865 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.647886 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.647901 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:41Z","lastTransitionTime":"2026-01-31T09:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.751085 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.751128 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.751137 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.751184 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.751195 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:41Z","lastTransitionTime":"2026-01-31T09:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.854565 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.854648 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.854672 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.854704 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.854765 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:41Z","lastTransitionTime":"2026-01-31T09:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.958068 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.958123 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.958133 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.958150 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:41 crc kubenswrapper[4830]: I0131 09:02:41.958161 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:41Z","lastTransitionTime":"2026-01-31T09:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.061742 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.061799 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.061811 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.061834 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.061847 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:42Z","lastTransitionTime":"2026-01-31T09:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.165099 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.165172 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.165185 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.165204 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.165217 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:42Z","lastTransitionTime":"2026-01-31T09:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.251013 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.251170 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.251840 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.251952 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:42 crc kubenswrapper[4830]: E0131 09:02:42.252098 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:42 crc kubenswrapper[4830]: E0131 09:02:42.260868 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:02:42 crc kubenswrapper[4830]: E0131 09:02:42.261134 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:42 crc kubenswrapper[4830]: E0131 09:02:42.261290 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.268907 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.269349 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.269408 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.269425 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.269446 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.269461 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:42Z","lastTransitionTime":"2026-01-31T09:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.273503 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 07:30:30.613640238 +0000 UTC Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.372973 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.373022 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.373030 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.373049 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.373060 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:42Z","lastTransitionTime":"2026-01-31T09:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.475858 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.475916 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.475929 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.475952 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.475969 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:42Z","lastTransitionTime":"2026-01-31T09:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.579348 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.579421 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.579434 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.579454 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.579466 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:42Z","lastTransitionTime":"2026-01-31T09:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.682555 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.682605 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.682618 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.682638 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.682651 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:42Z","lastTransitionTime":"2026-01-31T09:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.785772 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.785833 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.785845 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.785872 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.785885 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:42Z","lastTransitionTime":"2026-01-31T09:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.890391 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.890431 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.890443 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.890465 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.890481 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:42Z","lastTransitionTime":"2026-01-31T09:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.994257 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.994312 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.994325 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.994342 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:42 crc kubenswrapper[4830]: I0131 09:02:42.994355 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:42Z","lastTransitionTime":"2026-01-31T09:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.097384 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.097435 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.097445 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.097462 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.097471 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:43Z","lastTransitionTime":"2026-01-31T09:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.200492 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.200589 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.200609 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.200638 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.200656 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:43Z","lastTransitionTime":"2026-01-31T09:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.274226 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 09:31:38.189954971 +0000 UTC Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.303191 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.303237 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.303246 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.303268 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.303335 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:43Z","lastTransitionTime":"2026-01-31T09:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.405566 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.405608 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.405618 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.405637 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.405649 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:43Z","lastTransitionTime":"2026-01-31T09:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.510150 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.510213 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.510226 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.510246 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.510263 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:43Z","lastTransitionTime":"2026-01-31T09:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.613864 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.613915 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.613926 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.613943 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.613953 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:43Z","lastTransitionTime":"2026-01-31T09:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.716662 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.716697 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.716707 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.716736 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.716747 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:43Z","lastTransitionTime":"2026-01-31T09:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.820546 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.820624 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.820640 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.820662 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.820677 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:43Z","lastTransitionTime":"2026-01-31T09:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.924552 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.924634 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.924669 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.924702 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:43 crc kubenswrapper[4830]: I0131 09:02:43.924767 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:43Z","lastTransitionTime":"2026-01-31T09:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.027949 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.028001 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.028017 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.028040 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.028050 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:44Z","lastTransitionTime":"2026-01-31T09:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.131131 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.131192 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.131206 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.131226 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.131244 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:44Z","lastTransitionTime":"2026-01-31T09:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.234962 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.235021 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.235041 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.235069 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.235090 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:44Z","lastTransitionTime":"2026-01-31T09:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.250794 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.250818 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.250876 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:44 crc kubenswrapper[4830]: E0131 09:02:44.251250 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.251286 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:44 crc kubenswrapper[4830]: E0131 09:02:44.251536 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:44 crc kubenswrapper[4830]: E0131 09:02:44.251593 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:02:44 crc kubenswrapper[4830]: E0131 09:02:44.251714 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.275068 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 10:16:48.87999773 +0000 UTC Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.338645 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.338763 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.338792 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.338822 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.338846 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:44Z","lastTransitionTime":"2026-01-31T09:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.412401 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.412453 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.412470 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.412492 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.412509 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:44Z","lastTransitionTime":"2026-01-31T09:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:44 crc kubenswrapper[4830]: E0131 09:02:44.430716 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:44Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.436597 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.436661 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.436687 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.436718 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.436776 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:44Z","lastTransitionTime":"2026-01-31T09:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:44 crc kubenswrapper[4830]: E0131 09:02:44.458608 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:44Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.463096 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.463147 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.463158 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.463181 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.463193 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:44Z","lastTransitionTime":"2026-01-31T09:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:44 crc kubenswrapper[4830]: E0131 09:02:44.476663 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:44Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.481220 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.481259 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.481269 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.481286 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.481298 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:44Z","lastTransitionTime":"2026-01-31T09:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:44 crc kubenswrapper[4830]: E0131 09:02:44.495225 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:44Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.498617 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.498669 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.498682 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.498698 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.498715 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:44Z","lastTransitionTime":"2026-01-31T09:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:44 crc kubenswrapper[4830]: E0131 09:02:44.509766 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"09bf5dcf-c0f5-4874-a379-a4244cbfeb7d\\\",\\\"systemUUID\\\":\\\"c42072f0-7f1e-4cb8-a24e-882cf5477d0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:44Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:44 crc kubenswrapper[4830]: E0131 09:02:44.509952 4830 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.511820 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.511891 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.511901 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.511916 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.511945 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:44Z","lastTransitionTime":"2026-01-31T09:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.614945 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.614993 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.615004 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.615023 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.615035 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:44Z","lastTransitionTime":"2026-01-31T09:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.718556 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.718652 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.718716 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.718933 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.718950 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:44Z","lastTransitionTime":"2026-01-31T09:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.822080 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.822133 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.822148 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.822171 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.822186 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:44Z","lastTransitionTime":"2026-01-31T09:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.924904 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.924957 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.924977 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.925003 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:44 crc kubenswrapper[4830]: I0131 09:02:44.925020 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:44Z","lastTransitionTime":"2026-01-31T09:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.027769 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.027814 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.027824 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.027840 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.027849 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:45Z","lastTransitionTime":"2026-01-31T09:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.130568 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.130611 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.130620 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.130654 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.130667 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:45Z","lastTransitionTime":"2026-01-31T09:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.234219 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.234263 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.234273 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.234291 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.234301 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:45Z","lastTransitionTime":"2026-01-31T09:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.275827 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 03:49:09.18773409 +0000 UTC Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.337563 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.337626 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.337646 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.337676 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.337697 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:45Z","lastTransitionTime":"2026-01-31T09:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.440745 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.440797 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.440810 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.440828 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.440839 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:45Z","lastTransitionTime":"2026-01-31T09:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.544098 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.544146 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.544158 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.544178 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.544191 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:45Z","lastTransitionTime":"2026-01-31T09:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.647519 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.647572 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.647583 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.647598 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.647611 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:45Z","lastTransitionTime":"2026-01-31T09:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.751060 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.751122 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.751133 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.751156 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.751169 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:45Z","lastTransitionTime":"2026-01-31T09:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.854350 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.854427 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.854452 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.854487 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.854510 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:45Z","lastTransitionTime":"2026-01-31T09:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.958093 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.958153 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.958168 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.958192 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:45 crc kubenswrapper[4830]: I0131 09:02:45.958217 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:45Z","lastTransitionTime":"2026-01-31T09:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.062519 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.062597 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.062612 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.062637 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.062658 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:46Z","lastTransitionTime":"2026-01-31T09:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.165660 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.165745 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.165763 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.165784 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.165798 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:46Z","lastTransitionTime":"2026-01-31T09:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.251345 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.251411 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:46 crc kubenswrapper[4830]: E0131 09:02:46.251546 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.251628 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.251636 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:46 crc kubenswrapper[4830]: E0131 09:02:46.251858 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:46 crc kubenswrapper[4830]: E0131 09:02:46.251962 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:46 crc kubenswrapper[4830]: E0131 09:02:46.252060 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.269209 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.269264 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.269358 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.269387 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.269410 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:46Z","lastTransitionTime":"2026-01-31T09:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.276203 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 23:32:51.600546454 +0000 UTC Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.279525 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df241106-eb1b-4e72-a643-1f90acf5a1f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9d13fc8d32c706bbb52086b93137196db2708e789ca1b4f5a53656f1cec21e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4774e14cd30528af22073868dbbe43ebe8427a1843caa8c8e01226fd63b755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3a31116226244e63ee914eda9ab1ff5eea97e5a6bea459cb43d11863386c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://726b5f788f66263de9451cf0f037d42d2dbc8b0
08923aa807dfd2020558c9ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5502540b07b2fa92d36fff1afe0f6d48fae1f9d4a54d50ebb1c373546a61a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2096510a04ddaabd5c882f0d0913df7d2be58b1bece01c9d9952aa0ef70fdbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2096510a04ddaabd5c882f0d0913df7d2be58b1bece01c9d9952aa0ef70fdbb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48e6efdc83e36583b849fc3d7e0e36091b0b3586073ae15546cd3bfa9764fb81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48e6efdc83e36583b849fc3d7e0e36091b0b3586073ae15546cd3bfa9764fb81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8e86e6c091d7dbff392d0040ce519065173e2ccc0813d9fc5d172442a53e261f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e86e6c091d7dbff392d0040ce519065173e2ccc0813d9fc5d172442a53e261f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:46Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.297471 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c75e1c36-b769-464e-96eb-6d9b3c5aa384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68f539fdc8bbf394adaf06d3e8682cd
f498d0994c53f3754caf282cf9cf3607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 09:01:15.957161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 09:01:15.957363 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:15.958528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2684978895/tls.crt::/tmp/serving-cert-2684978895/tls.key\\\\\\\"\\\\nI0131 09:01:16.545024 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:16.548405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:16.548444 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:16.548470 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:16.548480 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:16.557075 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:16.557112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557118 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:16.557123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:16.557127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 09:01:16.557130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:16.557133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 09:01:16.557468 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 09:01:16.564014 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:00:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:00:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:00:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:46Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.350850 4830 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-zt78q" podStartSLOduration=84.35082366 podStartE2EDuration="1m24.35082366s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:02:46.339965268 +0000 UTC m=+110.833327720" watchObservedRunningTime="2026-01-31 09:02:46.35082366 +0000 UTC m=+110.844186102" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.373224 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.373277 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.373291 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.373312 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.373326 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:46Z","lastTransitionTime":"2026-01-31T09:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.378627 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-pmbpr" podStartSLOduration=84.378605296 podStartE2EDuration="1m24.378605296s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:02:46.378261537 +0000 UTC m=+110.871623989" watchObservedRunningTime="2026-01-31 09:02:46.378605296 +0000 UTC m=+110.871967748" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.393852 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-cjqbn" podStartSLOduration=84.393829673 podStartE2EDuration="1m24.393829673s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:02:46.392623428 +0000 UTC m=+110.885985880" watchObservedRunningTime="2026-01-31 09:02:46.393829673 +0000 UTC m=+110.887192135" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.422717 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-x27jw" podStartSLOduration=84.422691691 podStartE2EDuration="1m24.422691691s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:02:46.41395596 +0000 UTC m=+110.907318412" watchObservedRunningTime="2026-01-31 09:02:46.422691691 +0000 UTC m=+110.916054133" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.435406 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=56.435386355 podStartE2EDuration="56.435386355s" podCreationTimestamp="2026-01-31 09:01:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:02:46.434498509 +0000 UTC m=+110.927860951" watchObservedRunningTime="2026-01-31 09:02:46.435386355 +0000 UTC m=+110.928748817" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.460118 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=88.460095113 podStartE2EDuration="1m28.460095113s" podCreationTimestamp="2026-01-31 09:01:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:02:46.449201321 +0000 UTC m=+110.942563763" watchObservedRunningTime="2026-01-31 09:02:46.460095113 +0000 UTC m=+110.953457555" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.460907 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=18.460899767 podStartE2EDuration="18.460899767s" podCreationTimestamp="2026-01-31 09:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:02:46.460896226 +0000 UTC m=+110.954258668" watchObservedRunningTime="2026-01-31 09:02:46.460899767 +0000 UTC m=+110.954262209" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.476244 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.476537 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.476618 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.476743 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.476816 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:46Z","lastTransitionTime":"2026-01-31T09:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.517265 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podStartSLOduration=84.517233312 podStartE2EDuration="1m24.517233312s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:02:46.517226562 +0000 UTC m=+111.010589004" watchObservedRunningTime="2026-01-31 09:02:46.517233312 +0000 UTC m=+111.010595754" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.579657 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.579738 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.579752 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.579773 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.579785 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:46Z","lastTransitionTime":"2026-01-31T09:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.682643 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.682705 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.682762 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.682790 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.682808 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:46Z","lastTransitionTime":"2026-01-31T09:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.786099 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.786155 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.786172 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.786195 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.786215 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:46Z","lastTransitionTime":"2026-01-31T09:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.888786 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.888838 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.888849 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.888869 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.888882 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:46Z","lastTransitionTime":"2026-01-31T09:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.992483 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.992532 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.992543 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.992560 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:46 crc kubenswrapper[4830]: I0131 09:02:46.992570 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:46Z","lastTransitionTime":"2026-01-31T09:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.095400 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.095456 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.095468 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.095488 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.095503 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:47Z","lastTransitionTime":"2026-01-31T09:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.199104 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.199152 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.199160 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.199174 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.199184 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:47Z","lastTransitionTime":"2026-01-31T09:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.276854 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 11:38:07.037912167 +0000 UTC Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.302703 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.302774 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.302802 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.302821 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.302830 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:47Z","lastTransitionTime":"2026-01-31T09:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.405360 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.405390 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.405397 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.405411 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.405421 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:47Z","lastTransitionTime":"2026-01-31T09:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.507562 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.507609 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.507621 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.507638 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.507648 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:47Z","lastTransitionTime":"2026-01-31T09:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.611237 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.611296 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.611312 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.611337 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.611352 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:47Z","lastTransitionTime":"2026-01-31T09:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.714771 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.714843 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.714858 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.714888 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.714905 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:47Z","lastTransitionTime":"2026-01-31T09:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.817825 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.817874 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.817884 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.817907 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.817918 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:47Z","lastTransitionTime":"2026-01-31T09:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.920812 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.920868 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.920879 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.920898 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:47 crc kubenswrapper[4830]: I0131 09:02:47.920912 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:47Z","lastTransitionTime":"2026-01-31T09:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.023737 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.023812 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.023837 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.023865 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.023880 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:48Z","lastTransitionTime":"2026-01-31T09:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.126925 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.127003 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.127025 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.127056 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.127074 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:48Z","lastTransitionTime":"2026-01-31T09:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.230402 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.230458 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.230469 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.230489 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.230501 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:48Z","lastTransitionTime":"2026-01-31T09:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.251408 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.251448 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.251522 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:48 crc kubenswrapper[4830]: E0131 09:02:48.251648 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.251760 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:48 crc kubenswrapper[4830]: E0131 09:02:48.251862 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:02:48 crc kubenswrapper[4830]: E0131 09:02:48.251951 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:48 crc kubenswrapper[4830]: E0131 09:02:48.251958 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.277407 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 08:21:28.994473866 +0000 UTC Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.333052 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.333095 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.333104 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.333122 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.333133 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:48Z","lastTransitionTime":"2026-01-31T09:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.436180 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.436253 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.436270 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.436298 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.436316 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:48Z","lastTransitionTime":"2026-01-31T09:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.538925 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.538967 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.538975 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.538990 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.539000 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:48Z","lastTransitionTime":"2026-01-31T09:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.643448 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.643531 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.643553 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.643582 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.643610 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:48Z","lastTransitionTime":"2026-01-31T09:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.746633 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.746681 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.746690 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.746705 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.746718 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:48Z","lastTransitionTime":"2026-01-31T09:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.849948 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.849999 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.850013 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.850036 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.850049 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:48Z","lastTransitionTime":"2026-01-31T09:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.953163 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.953235 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.953250 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.953274 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:48 crc kubenswrapper[4830]: I0131 09:02:48.953291 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:48Z","lastTransitionTime":"2026-01-31T09:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.055960 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.056018 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.056030 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.056049 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.056062 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:49Z","lastTransitionTime":"2026-01-31T09:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.159868 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.159941 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.159952 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.159971 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.159985 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:49Z","lastTransitionTime":"2026-01-31T09:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.263132 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.263171 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.263180 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.263195 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.263206 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:49Z","lastTransitionTime":"2026-01-31T09:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.278354 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 10:48:09.457229486 +0000 UTC Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.366102 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.366152 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.366162 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.366176 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.366187 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:49Z","lastTransitionTime":"2026-01-31T09:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.470049 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.470094 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.470103 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.470118 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.470127 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:49Z","lastTransitionTime":"2026-01-31T09:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.572940 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.573013 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.573038 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.573100 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.573135 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:49Z","lastTransitionTime":"2026-01-31T09:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.675000 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.675044 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.675057 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.675076 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.675086 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:49Z","lastTransitionTime":"2026-01-31T09:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.778062 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.778106 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.778118 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.778135 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.778147 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:49Z","lastTransitionTime":"2026-01-31T09:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.880833 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.880899 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.880922 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.880953 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.880975 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:49Z","lastTransitionTime":"2026-01-31T09:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.984980 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.985085 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.985112 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.985141 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:49 crc kubenswrapper[4830]: I0131 09:02:49.985160 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:49Z","lastTransitionTime":"2026-01-31T09:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.088839 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.088896 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.088913 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.088940 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.088959 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:50Z","lastTransitionTime":"2026-01-31T09:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.191213 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.191268 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.191282 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.191299 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.191309 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:50Z","lastTransitionTime":"2026-01-31T09:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.251406 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.251475 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:50 crc kubenswrapper[4830]: E0131 09:02:50.251558 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:50 crc kubenswrapper[4830]: E0131 09:02:50.251630 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.251702 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.251906 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:50 crc kubenswrapper[4830]: E0131 09:02:50.252048 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:50 crc kubenswrapper[4830]: E0131 09:02:50.252426 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.279129 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 05:21:46.201299258 +0000 UTC Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.294699 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.294770 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.294783 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.294802 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.294819 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:50Z","lastTransitionTime":"2026-01-31T09:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.398104 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.398163 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.398180 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.398202 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.398219 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:50Z","lastTransitionTime":"2026-01-31T09:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.501353 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.501449 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.501484 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.501518 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.501540 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:50Z","lastTransitionTime":"2026-01-31T09:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.604929 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.604986 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.605000 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.605024 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.605036 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:50Z","lastTransitionTime":"2026-01-31T09:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.707994 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.708073 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.708091 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.708117 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.708136 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:50Z","lastTransitionTime":"2026-01-31T09:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.811331 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.811403 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.811421 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.811448 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.811469 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:50Z","lastTransitionTime":"2026-01-31T09:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.915078 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.915156 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.915176 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.915200 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:50 crc kubenswrapper[4830]: I0131 09:02:50.915220 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:50Z","lastTransitionTime":"2026-01-31T09:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.019468 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.019547 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.019560 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.019585 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.019598 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:51Z","lastTransitionTime":"2026-01-31T09:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.123106 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.123183 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.123208 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.123238 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.123265 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:51Z","lastTransitionTime":"2026-01-31T09:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.226250 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.226316 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.226333 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.226360 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.226378 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:51Z","lastTransitionTime":"2026-01-31T09:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.280181 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 06:46:12.266356291 +0000 UTC Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.330060 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.330137 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.330170 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.330200 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.330221 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:51Z","lastTransitionTime":"2026-01-31T09:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.433251 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.433311 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.433322 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.433344 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.433358 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:51Z","lastTransitionTime":"2026-01-31T09:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.541322 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.541369 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.541380 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.541401 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.541413 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:51Z","lastTransitionTime":"2026-01-31T09:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.645501 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.645564 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.645598 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.645623 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.645635 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:51Z","lastTransitionTime":"2026-01-31T09:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.749027 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.749123 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.749149 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.749184 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.749210 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:51Z","lastTransitionTime":"2026-01-31T09:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.852120 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.852182 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.852201 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.852225 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.852240 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:51Z","lastTransitionTime":"2026-01-31T09:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.956090 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.956153 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.956193 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.956227 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:51 crc kubenswrapper[4830]: I0131 09:02:51.956250 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:51Z","lastTransitionTime":"2026-01-31T09:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.058794 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.059040 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.059099 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.059130 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.059147 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:52Z","lastTransitionTime":"2026-01-31T09:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.162414 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.162454 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.162462 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.162477 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.162487 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:52Z","lastTransitionTime":"2026-01-31T09:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.250938 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.250979 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.250979 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.251011 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:52 crc kubenswrapper[4830]: E0131 09:02:52.251125 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:02:52 crc kubenswrapper[4830]: E0131 09:02:52.251217 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:52 crc kubenswrapper[4830]: E0131 09:02:52.251361 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:52 crc kubenswrapper[4830]: E0131 09:02:52.251491 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.265521 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.265570 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.265585 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.265607 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.265623 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:52Z","lastTransitionTime":"2026-01-31T09:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.280976 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 02:35:35.772785801 +0000 UTC Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.369758 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.369810 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.369826 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.369850 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.369865 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:52Z","lastTransitionTime":"2026-01-31T09:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.473191 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.473258 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.473273 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.473297 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.473312 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:52Z","lastTransitionTime":"2026-01-31T09:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.577206 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.577274 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.577285 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.577307 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.577327 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:52Z","lastTransitionTime":"2026-01-31T09:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.681031 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.681082 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.681095 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.681115 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.681129 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:52Z","lastTransitionTime":"2026-01-31T09:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.783425 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.783484 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.783500 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.783522 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.783535 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:52Z","lastTransitionTime":"2026-01-31T09:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.886146 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.886186 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.886195 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.886213 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.886222 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:52Z","lastTransitionTime":"2026-01-31T09:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.989023 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.989074 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.989083 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.989101 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:52 crc kubenswrapper[4830]: I0131 09:02:52.989110 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:52Z","lastTransitionTime":"2026-01-31T09:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.092395 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.092441 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.092460 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.092488 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.092502 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:53Z","lastTransitionTime":"2026-01-31T09:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.195626 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.195688 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.195743 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.195767 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.195783 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:53Z","lastTransitionTime":"2026-01-31T09:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.281223 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 03:39:55.168412088 +0000 UTC Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.298521 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.298576 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.298591 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.298616 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.298630 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:53Z","lastTransitionTime":"2026-01-31T09:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.401184 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.401238 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.401247 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.401265 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.401280 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:53Z","lastTransitionTime":"2026-01-31T09:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.503956 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.504000 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.504017 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.504038 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.504052 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:53Z","lastTransitionTime":"2026-01-31T09:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.606092 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.606146 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.606158 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.606176 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.606187 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:53Z","lastTransitionTime":"2026-01-31T09:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.709841 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.709896 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.709907 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.709926 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.709940 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:53Z","lastTransitionTime":"2026-01-31T09:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.812625 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.812677 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.812694 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.812716 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.812780 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:53Z","lastTransitionTime":"2026-01-31T09:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.915568 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.915765 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.915783 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.915803 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:53 crc kubenswrapper[4830]: I0131 09:02:53.915820 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:53Z","lastTransitionTime":"2026-01-31T09:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.019010 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.019110 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.019128 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.019156 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.019181 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:54Z","lastTransitionTime":"2026-01-31T09:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.122432 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.122516 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.122527 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.122551 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.122564 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:54Z","lastTransitionTime":"2026-01-31T09:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.225734 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.225770 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.225780 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.225800 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.225810 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:54Z","lastTransitionTime":"2026-01-31T09:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.250422 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.250583 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:02:54 crc kubenswrapper[4830]: E0131 09:02:54.250620 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.250682 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.250834 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:54 crc kubenswrapper[4830]: E0131 09:02:54.250981 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27" Jan 31 09:02:54 crc kubenswrapper[4830]: E0131 09:02:54.251125 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:54 crc kubenswrapper[4830]: E0131 09:02:54.251961 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.282657 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 01:08:41.597134194 +0000 UTC Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.328519 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.328574 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.328591 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.328615 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.328632 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:54Z","lastTransitionTime":"2026-01-31T09:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.432421 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.432499 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.432522 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.432560 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.432784 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:54Z","lastTransitionTime":"2026-01-31T09:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.535635 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.535695 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.535711 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.535772 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.535789 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:54Z","lastTransitionTime":"2026-01-31T09:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.638957 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.639001 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.639012 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.639026 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.639038 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:54Z","lastTransitionTime":"2026-01-31T09:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.742410 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.742459 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.742468 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.742485 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.742496 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:54Z","lastTransitionTime":"2026-01-31T09:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.743696 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.743761 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.743774 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.743793 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.743805 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:54Z","lastTransitionTime":"2026-01-31T09:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.791003 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7vq99" podStartSLOduration=92.790980311 podStartE2EDuration="1m32.790980311s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:02:46.576461991 +0000 UTC m=+111.069824443" watchObservedRunningTime="2026-01-31 09:02:54.790980311 +0000 UTC m=+119.284342753"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.791562 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8vfw"]
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.791988 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8vfw"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.796083 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.796097 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.796198 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.796102 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.852576 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=12.852554087 podStartE2EDuration="12.852554087s" podCreationTimestamp="2026-01-31 09:02:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:02:54.833149541 +0000 UTC m=+119.326512003" watchObservedRunningTime="2026-01-31 09:02:54.852554087 +0000 UTC m=+119.345916529"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.852842 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=97.852836505 podStartE2EDuration="1m37.852836505s" podCreationTimestamp="2026-01-31 09:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:02:54.852065783 +0000 UTC m=+119.345428225" watchObservedRunningTime="2026-01-31 09:02:54.852836505 +0000 UTC m=+119.346198947"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.900885 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fdc1c918-8499-46ca-84a9-c6aa1191fb52-service-ca\") pod \"cluster-version-operator-5c965bbfc6-r8vfw\" (UID: \"fdc1c918-8499-46ca-84a9-c6aa1191fb52\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8vfw"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.900937 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/fdc1c918-8499-46ca-84a9-c6aa1191fb52-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-r8vfw\" (UID: \"fdc1c918-8499-46ca-84a9-c6aa1191fb52\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8vfw"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.900977 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/fdc1c918-8499-46ca-84a9-c6aa1191fb52-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-r8vfw\" (UID: \"fdc1c918-8499-46ca-84a9-c6aa1191fb52\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8vfw"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.901006 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdc1c918-8499-46ca-84a9-c6aa1191fb52-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-r8vfw\" (UID: \"fdc1c918-8499-46ca-84a9-c6aa1191fb52\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8vfw"
Jan 31 09:02:54 crc kubenswrapper[4830]: I0131 09:02:54.901072 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdc1c918-8499-46ca-84a9-c6aa1191fb52-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-r8vfw\" (UID: \"fdc1c918-8499-46ca-84a9-c6aa1191fb52\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8vfw"
Jan 31 09:02:55 crc kubenswrapper[4830]: I0131 09:02:55.002797 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdc1c918-8499-46ca-84a9-c6aa1191fb52-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-r8vfw\" (UID: \"fdc1c918-8499-46ca-84a9-c6aa1191fb52\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8vfw"
Jan 31 09:02:55 crc kubenswrapper[4830]: I0131 09:02:55.002866 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fdc1c918-8499-46ca-84a9-c6aa1191fb52-service-ca\") pod \"cluster-version-operator-5c965bbfc6-r8vfw\" (UID: \"fdc1c918-8499-46ca-84a9-c6aa1191fb52\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8vfw"
Jan 31 09:02:55 crc kubenswrapper[4830]: I0131 09:02:55.002894 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/fdc1c918-8499-46ca-84a9-c6aa1191fb52-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-r8vfw\" (UID: \"fdc1c918-8499-46ca-84a9-c6aa1191fb52\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8vfw"
Jan 31 09:02:55 crc kubenswrapper[4830]: I0131 09:02:55.002925 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/fdc1c918-8499-46ca-84a9-c6aa1191fb52-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-r8vfw\" (UID: \"fdc1c918-8499-46ca-84a9-c6aa1191fb52\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8vfw"
Jan 31 09:02:55 crc kubenswrapper[4830]: I0131 09:02:55.002956 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdc1c918-8499-46ca-84a9-c6aa1191fb52-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-r8vfw\" (UID: \"fdc1c918-8499-46ca-84a9-c6aa1191fb52\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8vfw"
Jan 31 09:02:55 crc kubenswrapper[4830]: I0131 09:02:55.003053 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/fdc1c918-8499-46ca-84a9-c6aa1191fb52-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-r8vfw\" (UID: \"fdc1c918-8499-46ca-84a9-c6aa1191fb52\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8vfw"
Jan 31 09:02:55 crc kubenswrapper[4830]: I0131 09:02:55.003159 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/fdc1c918-8499-46ca-84a9-c6aa1191fb52-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-r8vfw\" (UID: \"fdc1c918-8499-46ca-84a9-c6aa1191fb52\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8vfw"
Jan 31 09:02:55 crc kubenswrapper[4830]: I0131 09:02:55.004463 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fdc1c918-8499-46ca-84a9-c6aa1191fb52-service-ca\") pod \"cluster-version-operator-5c965bbfc6-r8vfw\" (UID: \"fdc1c918-8499-46ca-84a9-c6aa1191fb52\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8vfw"
Jan 31 09:02:55 crc kubenswrapper[4830]: I0131 09:02:55.012574 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdc1c918-8499-46ca-84a9-c6aa1191fb52-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-r8vfw\" (UID: \"fdc1c918-8499-46ca-84a9-c6aa1191fb52\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8vfw"
Jan 31 09:02:55 crc kubenswrapper[4830]: I0131 09:02:55.022868 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdc1c918-8499-46ca-84a9-c6aa1191fb52-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-r8vfw\" (UID: \"fdc1c918-8499-46ca-84a9-c6aa1191fb52\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8vfw"
Jan 31 09:02:55 crc kubenswrapper[4830]: I0131 09:02:55.104781 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8vfw"
Jan 31 09:02:55 crc kubenswrapper[4830]: I0131 09:02:55.251903 4830 scope.go:117] "RemoveContainer" containerID="766440d35d97de136fa66a347be009991bd05f76b51aff44c7369006f3196a4f"
Jan 31 09:02:55 crc kubenswrapper[4830]: E0131 09:02:55.252499 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-r8pc4_openshift-ovn-kubernetes(159b9801-57e3-4cf0-9b81-10aacb5eef83)\"" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83"
Jan 31 09:02:55 crc kubenswrapper[4830]: I0131 09:02:55.283400 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 15:23:48.925330221 +0000 UTC
Jan 31 09:02:55 crc kubenswrapper[4830]: I0131 09:02:55.283453 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 31 09:02:55 crc kubenswrapper[4830]: I0131 09:02:55.292226 4830 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 31 09:02:55 crc kubenswrapper[4830]: I0131 09:02:55.918232 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8vfw" event={"ID":"fdc1c918-8499-46ca-84a9-c6aa1191fb52","Type":"ContainerStarted","Data":"092769bc3274ca29a33e2286a47af89abe780c5ad0de0a4cd7e586ac668a878d"}
Jan 31 09:02:55 crc kubenswrapper[4830]: I0131 09:02:55.918293 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8vfw" event={"ID":"fdc1c918-8499-46ca-84a9-c6aa1191fb52","Type":"ContainerStarted","Data":"a9481a756625942a6734fd00e57a6694dbe0833cce2e50b922e0133ad8a744c5"}
Jan 31 09:02:56 crc kubenswrapper[4830]: I0131 09:02:56.250977 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:02:56 crc kubenswrapper[4830]: I0131 09:02:56.251045 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z"
Jan 31 09:02:56 crc kubenswrapper[4830]: I0131 09:02:56.251044 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 09:02:56 crc kubenswrapper[4830]: I0131 09:02:56.251158 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:02:56 crc kubenswrapper[4830]: E0131 09:02:56.252026 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 09:02:56 crc kubenswrapper[4830]: E0131 09:02:56.252235 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 09:02:56 crc kubenswrapper[4830]: E0131 09:02:56.252158 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27"
Jan 31 09:02:56 crc kubenswrapper[4830]: E0131 09:02:56.252311 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 09:02:56 crc kubenswrapper[4830]: E0131 09:02:56.285772 4830 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Jan 31 09:02:56 crc kubenswrapper[4830]: E0131 09:02:56.378559 4830 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 31 09:02:56 crc kubenswrapper[4830]: I0131 09:02:56.923049 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cjqbn_b7e133cc-19e8-4770-9146-88dac53a6531/kube-multus/1.log"
Jan 31 09:02:56 crc kubenswrapper[4830]: I0131 09:02:56.923653 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cjqbn_b7e133cc-19e8-4770-9146-88dac53a6531/kube-multus/0.log"
Jan 31 09:02:56 crc kubenswrapper[4830]: I0131 09:02:56.923745 4830 generic.go:334] "Generic (PLEG): container finished" podID="b7e133cc-19e8-4770-9146-88dac53a6531" containerID="9875f32d43bbc74af3de68db341e1562d735fcd5fba747d5ca7aceea458db68a" exitCode=1
Jan 31 09:02:56 crc kubenswrapper[4830]: I0131 09:02:56.923791 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cjqbn" event={"ID":"b7e133cc-19e8-4770-9146-88dac53a6531","Type":"ContainerDied","Data":"9875f32d43bbc74af3de68db341e1562d735fcd5fba747d5ca7aceea458db68a"}
Jan 31 09:02:56 crc kubenswrapper[4830]: I0131 09:02:56.923840 4830 scope.go:117] "RemoveContainer" containerID="4fc53764819654361fe0c4c89480ef4e2b42eb79d71ab8b88f1cc9283c67ce70"
Jan 31 09:02:56 crc kubenswrapper[4830]: I0131 09:02:56.924400 4830 scope.go:117] "RemoveContainer" containerID="9875f32d43bbc74af3de68db341e1562d735fcd5fba747d5ca7aceea458db68a"
Jan 31 09:02:56 crc kubenswrapper[4830]: E0131 09:02:56.924623 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-cjqbn_openshift-multus(b7e133cc-19e8-4770-9146-88dac53a6531)\"" pod="openshift-multus/multus-cjqbn" podUID="b7e133cc-19e8-4770-9146-88dac53a6531"
Jan 31 09:02:56 crc kubenswrapper[4830]: I0131 09:02:56.951771 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8vfw" podStartSLOduration=94.951745009 podStartE2EDuration="1m34.951745009s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:02:55.934323421 +0000 UTC m=+120.427685863" watchObservedRunningTime="2026-01-31 09:02:56.951745009 +0000 UTC m=+121.445107461"
Jan 31 09:02:57 crc kubenswrapper[4830]: I0131 09:02:57.930776 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cjqbn_b7e133cc-19e8-4770-9146-88dac53a6531/kube-multus/1.log"
Jan 31 09:02:58 crc kubenswrapper[4830]: I0131 09:02:58.250590 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:02:58 crc kubenswrapper[4830]: I0131 09:02:58.250667 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 09:02:58 crc kubenswrapper[4830]: E0131 09:02:58.250833 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 09:02:58 crc kubenswrapper[4830]: I0131 09:02:58.250873 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z"
Jan 31 09:02:58 crc kubenswrapper[4830]: I0131 09:02:58.250842 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:02:58 crc kubenswrapper[4830]: E0131 09:02:58.251106 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 09:02:58 crc kubenswrapper[4830]: E0131 09:02:58.251177 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27"
Jan 31 09:02:58 crc kubenswrapper[4830]: E0131 09:02:58.251278 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 09:03:00 crc kubenswrapper[4830]: I0131 09:03:00.251211 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:03:00 crc kubenswrapper[4830]: I0131 09:03:00.251287 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 09:03:00 crc kubenswrapper[4830]: I0131 09:03:00.251476 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:03:00 crc kubenswrapper[4830]: E0131 09:03:00.251569 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 09:03:00 crc kubenswrapper[4830]: E0131 09:03:00.251704 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 09:03:00 crc kubenswrapper[4830]: E0131 09:03:00.251774 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 09:03:00 crc kubenswrapper[4830]: I0131 09:03:00.251402 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z"
Jan 31 09:03:00 crc kubenswrapper[4830]: E0131 09:03:00.252084 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27"
Jan 31 09:03:01 crc kubenswrapper[4830]: E0131 09:03:01.380792 4830 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 31 09:03:02 crc kubenswrapper[4830]: I0131 09:03:02.250983 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:03:02 crc kubenswrapper[4830]: I0131 09:03:02.251059 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z"
Jan 31 09:03:02 crc kubenswrapper[4830]: I0131 09:03:02.251102 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 09:03:02 crc kubenswrapper[4830]: I0131 09:03:02.251207 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:03:02 crc kubenswrapper[4830]: E0131 09:03:02.251227 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 09:03:02 crc kubenswrapper[4830]: E0131 09:03:02.251375 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 09:03:02 crc kubenswrapper[4830]: E0131 09:03:02.251435 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 09:03:02 crc kubenswrapper[4830]: E0131 09:03:02.251508 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27"
Jan 31 09:03:04 crc kubenswrapper[4830]: I0131 09:03:04.251201 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:03:04 crc kubenswrapper[4830]: I0131 09:03:04.251275 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z"
Jan 31 09:03:04 crc kubenswrapper[4830]: I0131 09:03:04.251275 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:03:04 crc kubenswrapper[4830]: E0131 09:03:04.251403 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 09:03:04 crc kubenswrapper[4830]: I0131 09:03:04.251467 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 09:03:04 crc kubenswrapper[4830]: E0131 09:03:04.251475 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27"
Jan 31 09:03:04 crc kubenswrapper[4830]: E0131 09:03:04.251538 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 09:03:04 crc kubenswrapper[4830]: E0131 09:03:04.251602 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 09:03:06 crc kubenswrapper[4830]: I0131 09:03:06.250717 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z"
Jan 31 09:03:06 crc kubenswrapper[4830]: I0131 09:03:06.251004 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:03:06 crc kubenswrapper[4830]: I0131 09:03:06.251016 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:03:06 crc kubenswrapper[4830]: I0131 09:03:06.251055 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 09:03:06 crc kubenswrapper[4830]: E0131 09:03:06.251860 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27"
Jan 31 09:03:06 crc kubenswrapper[4830]: E0131 09:03:06.252112 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 09:03:06 crc kubenswrapper[4830]: E0131 09:03:06.252331 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 09:03:06 crc kubenswrapper[4830]: E0131 09:03:06.252424 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 09:03:06 crc kubenswrapper[4830]: E0131 09:03:06.381405 4830 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 31 09:03:08 crc kubenswrapper[4830]: I0131 09:03:08.251386 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:03:08 crc kubenswrapper[4830]: I0131 09:03:08.251448 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z"
Jan 31 09:03:08 crc kubenswrapper[4830]: I0131 09:03:08.251387 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 09:03:08 crc kubenswrapper[4830]: E0131 09:03:08.251559 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 09:03:08 crc kubenswrapper[4830]: E0131 09:03:08.251613 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 09:03:08 crc kubenswrapper[4830]: I0131 09:03:08.251627 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:03:08 crc kubenswrapper[4830]: E0131 09:03:08.251696 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27"
Jan 31 09:03:08 crc kubenswrapper[4830]: E0131 09:03:08.251856 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 09:03:10 crc kubenswrapper[4830]: I0131 09:03:10.251204 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:03:10 crc kubenswrapper[4830]: E0131 09:03:10.251385 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 09:03:10 crc kubenswrapper[4830]: I0131 09:03:10.251629 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z"
Jan 31 09:03:10 crc kubenswrapper[4830]: E0131 09:03:10.251693 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27"
Jan 31 09:03:10 crc kubenswrapper[4830]: I0131 09:03:10.251988 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 09:03:10 crc kubenswrapper[4830]: E0131 09:03:10.252072 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 09:03:10 crc kubenswrapper[4830]: I0131 09:03:10.252899 4830 scope.go:117] "RemoveContainer" containerID="766440d35d97de136fa66a347be009991bd05f76b51aff44c7369006f3196a4f"
Jan 31 09:03:10 crc kubenswrapper[4830]: I0131 09:03:10.253273 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:03:10 crc kubenswrapper[4830]: E0131 09:03:10.253347 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 09:03:10 crc kubenswrapper[4830]: I0131 09:03:10.977392 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r8pc4_159b9801-57e3-4cf0-9b81-10aacb5eef83/ovnkube-controller/3.log"
Jan 31 09:03:10 crc kubenswrapper[4830]: I0131 09:03:10.981257 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerStarted","Data":"f4d93300488a1d98f2b7829b938554fd6261d49065ba6bab59723ae725087360"}
Jan 31 09:03:10 crc kubenswrapper[4830]: I0131 09:03:10.981830 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4"
Jan 31 09:03:11 crc kubenswrapper[4830]: I0131 09:03:11.017167 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" podStartSLOduration=109.017144699 podStartE2EDuration="1m49.017144699s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:11.01682207 +0000 UTC m=+135.510184532" watchObservedRunningTime="2026-01-31 09:03:11.017144699 +0000 UTC m=+135.510507131"
Jan 31 09:03:11 crc kubenswrapper[4830]: I0131 09:03:11.124528 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5kl8z"]
Jan 31 09:03:11 crc kubenswrapper[4830]: I0131 09:03:11.124736 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z"
Jan 31 09:03:11 crc kubenswrapper[4830]: E0131 09:03:11.124938 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27"
Jan 31 09:03:11 crc kubenswrapper[4830]: E0131 09:03:11.382836 4830 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 31 09:03:12 crc kubenswrapper[4830]: I0131 09:03:12.250932 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 09:03:12 crc kubenswrapper[4830]: I0131 09:03:12.250989 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:03:12 crc kubenswrapper[4830]: E0131 09:03:12.251523 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 09:03:12 crc kubenswrapper[4830]: I0131 09:03:12.251549 4830 scope.go:117] "RemoveContainer" containerID="9875f32d43bbc74af3de68db341e1562d735fcd5fba747d5ca7aceea458db68a"
Jan 31 09:03:12 crc kubenswrapper[4830]: I0131 09:03:12.251013 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:03:12 crc kubenswrapper[4830]: E0131 09:03:12.251607 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 09:03:12 crc kubenswrapper[4830]: E0131 09:03:12.251710 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 09:03:12 crc kubenswrapper[4830]: I0131 09:03:12.989686 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cjqbn_b7e133cc-19e8-4770-9146-88dac53a6531/kube-multus/1.log"
Jan 31 09:03:12 crc kubenswrapper[4830]: I0131 09:03:12.989783 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cjqbn" event={"ID":"b7e133cc-19e8-4770-9146-88dac53a6531","Type":"ContainerStarted","Data":"688600880adb08704161ae3933906d1341bce11f0e4231769fa30f33301668d5"}
Jan 31 09:03:13 crc kubenswrapper[4830]: I0131 09:03:13.251004 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z"
Jan 31 09:03:13 crc kubenswrapper[4830]: E0131 09:03:13.251215 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27"
Jan 31 09:03:14 crc kubenswrapper[4830]: I0131 09:03:14.251175 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 09:03:14 crc kubenswrapper[4830]: E0131 09:03:14.251348 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 09:03:14 crc kubenswrapper[4830]: I0131 09:03:14.251483 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:03:14 crc kubenswrapper[4830]: E0131 09:03:14.251649 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 09:03:14 crc kubenswrapper[4830]: I0131 09:03:14.251709 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:03:14 crc kubenswrapper[4830]: E0131 09:03:14.251805 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 09:03:15 crc kubenswrapper[4830]: I0131 09:03:15.251431 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z"
Jan 31 09:03:15 crc kubenswrapper[4830]: E0131 09:03:15.251620 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5kl8z" podUID="c1fa30e4-0c03-43ab-9c37-f7ec86153b27"
Jan 31 09:03:16 crc kubenswrapper[4830]: I0131 09:03:16.251222 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:03:16 crc kubenswrapper[4830]: I0131 09:03:16.251272 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 09:03:16 crc kubenswrapper[4830]: E0131 09:03:16.252461 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 09:03:16 crc kubenswrapper[4830]: I0131 09:03:16.252562 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:03:16 crc kubenswrapper[4830]: E0131 09:03:16.252836 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 09:03:16 crc kubenswrapper[4830]: E0131 09:03:16.252934 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 09:03:17 crc kubenswrapper[4830]: I0131 09:03:17.251061 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z"
Jan 31 09:03:17 crc kubenswrapper[4830]: I0131 09:03:17.253648 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 31 09:03:17 crc kubenswrapper[4830]: I0131 09:03:17.254148 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 31 09:03:18 crc kubenswrapper[4830]: I0131 09:03:18.250652 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:03:18 crc kubenswrapper[4830]: I0131 09:03:18.250688 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:03:18 crc kubenswrapper[4830]: I0131 09:03:18.250776 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 09:03:18 crc kubenswrapper[4830]: I0131 09:03:18.253914 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 31 09:03:18 crc kubenswrapper[4830]: I0131 09:03:18.254008 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 31 09:03:18 crc kubenswrapper[4830]: I0131 09:03:18.254110 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 31 09:03:18 crc kubenswrapper[4830]: I0131 09:03:18.254917 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 31 09:03:24 crc kubenswrapper[4830]: I0131 09:03:24.250631 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:24 crc kubenswrapper[4830]: E0131 09:03:24.250943 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:05:26.250890224 +0000 UTC m=+270.744252676 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:24 crc kubenswrapper[4830]: I0131 09:03:24.251056 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:03:24 crc kubenswrapper[4830]: I0131 09:03:24.251092 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:03:24 crc kubenswrapper[4830]: I0131 09:03:24.251129 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:03:24 crc kubenswrapper[4830]: I0131 09:03:24.251150 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 09:03:24 crc kubenswrapper[4830]: I0131 09:03:24.252484 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:03:24 crc kubenswrapper[4830]: I0131 09:03:24.259260 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:03:24 crc kubenswrapper[4830]: I0131 09:03:24.259357 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 09:03:24 crc kubenswrapper[4830]: I0131 09:03:24.259446 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:03:24 crc kubenswrapper[4830]: I0131 09:03:24.264984 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:03:24 crc kubenswrapper[4830]: I0131 09:03:24.272172 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:03:24 crc kubenswrapper[4830]: I0131 09:03:24.279836 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 09:03:24 crc kubenswrapper[4830]: W0131 09:03:24.514941 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-0ae8a839ce495c344e5a8fa1113f226063ad41c43b08f1a7e5b4276b00a1ab99 WatchSource:0}: Error finding container 0ae8a839ce495c344e5a8fa1113f226063ad41c43b08f1a7e5b4276b00a1ab99: Status 404 returned error can't find the container with id 0ae8a839ce495c344e5a8fa1113f226063ad41c43b08f1a7e5b4276b00a1ab99
Jan 31 09:03:24 crc kubenswrapper[4830]: W0131 09:03:24.710632 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-268fdb7e5cb1f0d88871c5e2da7a375541235152dbb79e6694417e21b324526f WatchSource:0}: Error finding container 268fdb7e5cb1f0d88871c5e2da7a375541235152dbb79e6694417e21b324526f: Status 404 returned error can't find the container with id 268fdb7e5cb1f0d88871c5e2da7a375541235152dbb79e6694417e21b324526f
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.038471 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"560dfc3f1fb5c334c1db207485f1344db3650d6d1b5a3edf536d6bbff34bd7fe"}
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.038524 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"268fdb7e5cb1f0d88871c5e2da7a375541235152dbb79e6694417e21b324526f"}
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.040230 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ec5af549995e3af74c1f643c366f20b095525373f8401199e6d48abbd4d3d742"}
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.040301 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c5f47be79a8a20cb6068469e7b0d2e105da9bf8234098834ba6b8ab218cadbb5"}
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.041524 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d8241bd152064e28176a5694b66c3df08d902708a7a54c6137634051692c5e0f"}
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.041548 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0ae8a839ce495c344e5a8fa1113f226063ad41c43b08f1a7e5b4276b00a1ab99"}
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.041768 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.151992 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.191530 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-2p57l"]
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.192177 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2p57l"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.195187 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww"]
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.195793 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.196094 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lpktp"]
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.196662 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lpktp"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.196823 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n65sj"]
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.197330 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n65sj"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.197604 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hkd74"]
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.198172 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.198466 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76"]
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.199210 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-gp4nv"]
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.199280 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.199789 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-gp4nv"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.200201 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rdzrw"]
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.200598 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.200806 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9gw75"]
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.201261 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9gw75"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.216641 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.216904 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.217162 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.230511 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg"]
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.231091 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.232281 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-l8ckt"]
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.232954 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-l8ckt"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.238406 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.238574 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.238698 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.238767 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.238868 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.238907 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.239084 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.239200 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.239298 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.239312 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.239343 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.239402 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.239427 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.239457 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.239510 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.239519 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.239625 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.239641 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.239676 4830 reflector.go:368] Caches populated for *v1.ConfigMap
from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.239626 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.239796 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.240036 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.240199 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.240359 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.240668 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.240797 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.240896 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.241004 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.241109 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.241221 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.241320 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.241417 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.241516 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.241615 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.241808 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.241908 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.242108 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 09:03:25 crc 
kubenswrapper[4830]: I0131 09:03:25.242226 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.242377 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.242955 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.243080 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.243221 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.243608 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-pkx9p"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.244325 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.244668 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.245112 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.247524 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.247804 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.247914 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.248312 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.248406 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.248811 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.249086 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.249885 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-8nn2k"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.250350 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-pkx9p" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.250572 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-8nn2k" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.250354 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ngd6n"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.251543 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hzk7b"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.251921 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-htl5l"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.252170 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ngd6n" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.252587 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vjnc8"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.252920 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.253046 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vjnc8" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.253396 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.253059 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.253969 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.254464 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-47wc2"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.255151 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47wc2" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.255403 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-26msj"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.255906 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-26msj" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.256762 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.257115 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.257239 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.257372 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.257588 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.257972 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.258556 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2klp9"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.259161 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2klp9" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.261645 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7m8b7"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.262018 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n5blr"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.262338 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lpktp"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.262405 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n5blr" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.262946 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.264294 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.265057 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-vbcgc"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.284446 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-vbcgc" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.296438 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.296923 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.297230 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.298498 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33210b82-c473-4bf8-b40d-a29b00833ea0-serving-cert\") pod \"controller-manager-879f6c89f-rdzrw\" (UID: \"33210b82-c473-4bf8-b40d-a29b00833ea0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.301196 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.312614 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.312966 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.312806 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f4287bc-c7a7-4ee2-8212-3611b978e2e8-client-ca\") pod \"route-controller-manager-6576b87f9c-knkww\" (UID: \"0f4287bc-c7a7-4ee2-8212-3611b978e2e8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313088 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313132 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c61fa19c-7742-4ab1-b3ca-9607723fe94d-etcd-client\") pod \"apiserver-7bbb656c7d-pwk76\" (UID: \"c61fa19c-7742-4ab1-b3ca-9607723fe94d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313180 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1346d7f-25da-4035-9c88-1f96c034d795-serving-cert\") pod \"openshift-config-operator-7777fb866f-ttnrg\" (UID: \"d1346d7f-25da-4035-9c88-1f96c034d795\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313213 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83cc5fe8-7965-46aa-b846-33d1b8d317f8-service-ca\") pod \"console-f9d7485db-gp4nv\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:03:25 crc 
kubenswrapper[4830]: I0131 09:03:25.313230 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00ab4f1c-2cc4-46b0-9e22-df58e5327352-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hkd74\" (UID: \"00ab4f1c-2cc4-46b0-9e22-df58e5327352\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313245 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00ab4f1c-2cc4-46b0-9e22-df58e5327352-serving-cert\") pod \"authentication-operator-69f744f599-hkd74\" (UID: \"00ab4f1c-2cc4-46b0-9e22-df58e5327352\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313265 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c61fa19c-7742-4ab1-b3ca-9607723fe94d-serving-cert\") pod \"apiserver-7bbb656c7d-pwk76\" (UID: \"c61fa19c-7742-4ab1-b3ca-9607723fe94d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313373 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c61fa19c-7742-4ab1-b3ca-9607723fe94d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pwk76\" (UID: \"c61fa19c-7742-4ab1-b3ca-9607723fe94d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313395 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvh66\" (UniqueName: \"kubernetes.io/projected/b0ebeb47-d72b-4d2f-b2e8-aee1f880da1e-kube-api-access-cvh66\") pod \"openshift-apiserver-operator-796bbdcf4f-lpktp\" (UID: \"b0ebeb47-d72b-4d2f-b2e8-aee1f880da1e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lpktp" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313423 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmvvr\" (UniqueName: \"kubernetes.io/projected/0268e3ae-370f-43f0-9528-ff84b5983dac-kube-api-access-xmvvr\") pod \"cluster-samples-operator-665b6dd947-9gw75\" (UID: \"0268e3ae-370f-43f0-9528-ff84b5983dac\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9gw75" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313440 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww7vr\" (UniqueName: \"kubernetes.io/projected/a8d26ab0-33c3-4eb7-928b-ffba996579d9-kube-api-access-ww7vr\") pod \"downloads-7954f5f757-l8ckt\" (UID: \"a8d26ab0-33c3-4eb7-928b-ffba996579d9\") " pod="openshift-console/downloads-7954f5f757-l8ckt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313454 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c61fa19c-7742-4ab1-b3ca-9607723fe94d-encryption-config\") pod \"apiserver-7bbb656c7d-pwk76\" (UID: \"c61fa19c-7742-4ab1-b3ca-9607723fe94d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:25 crc 
kubenswrapper[4830]: I0131 09:03:25.313479 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33210b82-c473-4bf8-b40d-a29b00833ea0-config\") pod \"controller-manager-879f6c89f-rdzrw\" (UID: \"33210b82-c473-4bf8-b40d-a29b00833ea0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313499 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88jbt\" (UniqueName: \"kubernetes.io/projected/2af5c820-fefe-42fe-83da-0aeccb301182-kube-api-access-88jbt\") pod \"machine-approver-56656f9798-2p57l\" (UID: \"2af5c820-fefe-42fe-83da-0aeccb301182\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2p57l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313514 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00ab4f1c-2cc4-46b0-9e22-df58e5327352-service-ca-bundle\") pod \"authentication-operator-69f744f599-hkd74\" (UID: \"00ab4f1c-2cc4-46b0-9e22-df58e5327352\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313538 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f4287bc-c7a7-4ee2-8212-3611b978e2e8-serving-cert\") pod \"route-controller-manager-6576b87f9c-knkww\" (UID: \"0f4287bc-c7a7-4ee2-8212-3611b978e2e8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313560 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0268e3ae-370f-43f0-9528-ff84b5983dac-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9gw75\" (UID: \"0268e3ae-370f-43f0-9528-ff84b5983dac\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9gw75" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313579 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83cc5fe8-7965-46aa-b846-33d1b8d317f8-console-config\") pod \"console-f9d7485db-gp4nv\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313595 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qdxg\" (UniqueName: \"kubernetes.io/projected/00ab4f1c-2cc4-46b0-9e22-df58e5327352-kube-api-access-8qdxg\") pod \"authentication-operator-69f744f599-hkd74\" (UID: \"00ab4f1c-2cc4-46b0-9e22-df58e5327352\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313615 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wwdq\" (UniqueName: \"kubernetes.io/projected/83cc5fe8-7965-46aa-b846-33d1b8d317f8-kube-api-access-2wwdq\") pod \"console-f9d7485db-gp4nv\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:03:25 
crc kubenswrapper[4830]: I0131 09:03:25.313634 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a386557-0e05-4f84-b5fc-a389083d2743-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-n65sj\" (UID: \"5a386557-0e05-4f84-b5fc-a389083d2743\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n65sj" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313652 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a386557-0e05-4f84-b5fc-a389083d2743-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-n65sj\" (UID: \"5a386557-0e05-4f84-b5fc-a389083d2743\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n65sj" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313677 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sktbs\" (UniqueName: \"kubernetes.io/projected/0f4287bc-c7a7-4ee2-8212-3611b978e2e8-kube-api-access-sktbs\") pod \"route-controller-manager-6576b87f9c-knkww\" (UID: \"0f4287bc-c7a7-4ee2-8212-3611b978e2e8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313697 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83cc5fe8-7965-46aa-b846-33d1b8d317f8-trusted-ca-bundle\") pod \"console-f9d7485db-gp4nv\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313742 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0ebeb47-d72b-4d2f-b2e8-aee1f880da1e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lpktp\" (UID: \"b0ebeb47-d72b-4d2f-b2e8-aee1f880da1e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lpktp" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313759 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f4287bc-c7a7-4ee2-8212-3611b978e2e8-config\") pod \"route-controller-manager-6576b87f9c-knkww\" (UID: \"0f4287bc-c7a7-4ee2-8212-3611b978e2e8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313774 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9b4k\" (UniqueName: \"kubernetes.io/projected/d1346d7f-25da-4035-9c88-1f96c034d795-kube-api-access-j9b4k\") pod \"openshift-config-operator-7777fb866f-ttnrg\" (UID: \"d1346d7f-25da-4035-9c88-1f96c034d795\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313794 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2af5c820-fefe-42fe-83da-0aeccb301182-machine-approver-tls\") pod \"machine-approver-56656f9798-2p57l\" (UID: \"2af5c820-fefe-42fe-83da-0aeccb301182\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2p57l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313811 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzsbm\" (UniqueName: \"kubernetes.io/projected/33210b82-c473-4bf8-b40d-a29b00833ea0-kube-api-access-rzsbm\") pod \"controller-manager-879f6c89f-rdzrw\" (UID: \"33210b82-c473-4bf8-b40d-a29b00833ea0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313830 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2af5c820-fefe-42fe-83da-0aeccb301182-auth-proxy-config\") pod \"machine-approver-56656f9798-2p57l\" (UID: \"2af5c820-fefe-42fe-83da-0aeccb301182\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2p57l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313846 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r6fj\" (UniqueName: \"kubernetes.io/projected/5a386557-0e05-4f84-b5fc-a389083d2743-kube-api-access-2r6fj\") pod \"openshift-controller-manager-operator-756b6f6bc6-n65sj\" (UID: \"5a386557-0e05-4f84-b5fc-a389083d2743\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n65sj" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313865 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8dt8\" (UniqueName: \"kubernetes.io/projected/c61fa19c-7742-4ab1-b3ca-9607723fe94d-kube-api-access-k8dt8\") pod \"apiserver-7bbb656c7d-pwk76\" (UID: \"c61fa19c-7742-4ab1-b3ca-9607723fe94d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.313881 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c61fa19c-7742-4ab1-b3ca-9607723fe94d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pwk76\" (UID: \"c61fa19c-7742-4ab1-b3ca-9607723fe94d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.314047 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00ab4f1c-2cc4-46b0-9e22-df58e5327352-config\") pod \"authentication-operator-69f744f599-hkd74\" (UID: \"00ab4f1c-2cc4-46b0-9e22-df58e5327352\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.314113 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83cc5fe8-7965-46aa-b846-33d1b8d317f8-oauth-serving-cert\") pod \"console-f9d7485db-gp4nv\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.314136 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2af5c820-fefe-42fe-83da-0aeccb301182-config\") pod \"machine-approver-56656f9798-2p57l\" (UID: \"2af5c820-fefe-42fe-83da-0aeccb301182\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2p57l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.314158 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83cc5fe8-7965-46aa-b846-33d1b8d317f8-console-serving-cert\") pod \"console-f9d7485db-gp4nv\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.314178 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83cc5fe8-7965-46aa-b846-33d1b8d317f8-console-oauth-config\") pod \"console-f9d7485db-gp4nv\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.314230 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/33210b82-c473-4bf8-b40d-a29b00833ea0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rdzrw\" (UID: \"33210b82-c473-4bf8-b40d-a29b00833ea0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.314256 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c61fa19c-7742-4ab1-b3ca-9607723fe94d-audit-policies\") pod \"apiserver-7bbb656c7d-pwk76\" (UID: \"c61fa19c-7742-4ab1-b3ca-9607723fe94d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.314252 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.314398 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.314452 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0ebeb47-d72b-4d2f-b2e8-aee1f880da1e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lpktp\" (UID: \"b0ebeb47-d72b-4d2f-b2e8-aee1f880da1e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lpktp" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.314473 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33210b82-c473-4bf8-b40d-a29b00833ea0-client-ca\") pod \"controller-manager-879f6c89f-rdzrw\" (UID: \"33210b82-c473-4bf8-b40d-a29b00833ea0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.314494 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c61fa19c-7742-4ab1-b3ca-9607723fe94d-audit-dir\") pod \"apiserver-7bbb656c7d-pwk76\" (UID: \"c61fa19c-7742-4ab1-b3ca-9607723fe94d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.314518 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d1346d7f-25da-4035-9c88-1f96c034d795-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ttnrg\" (UID: \"d1346d7f-25da-4035-9c88-1f96c034d795\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.314650 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.314820 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.314947 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.315096 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.315154 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.315302 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.315315 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.315569 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.315661 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.315836 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.316010 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.316149 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.315836 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.316403 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.316497 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.316574 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.316680 4830 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8wdp6"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.316775 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.316885 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.317332 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.312764 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.318685 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.318842 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.318939 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.319912 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.320158 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.325146 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.328869 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.329105 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8wdp6" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.333407 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.337066 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9d827"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.337120 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.337937 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5blhw"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.338343 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtqdv"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.338709 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9d827" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.338893 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtqdv" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.338983 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.339263 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5blhw" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.341876 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.346178 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.346873 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kcsj5"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.347816 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-kcsj5" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.348225 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-9rz4w"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.348522 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.360456 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497500-66dl8"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.360960 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-skqcc"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.361437 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.361453 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9rz4w" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.361811 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x8zjt"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.362043 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.362338 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-66dl8" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.362390 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-skqcc" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.382035 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.405749 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.413433 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x8zjt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.415565 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ce03ae75-703f-4d6a-b98a-e866689b08e3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qtqdv\" (UID: \"ce03ae75-703f-4d6a-b98a-e866689b08e3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtqdv" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.415689 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83cc5fe8-7965-46aa-b846-33d1b8d317f8-oauth-serving-cert\") pod \"console-f9d7485db-gp4nv\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.415760 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2af5c820-fefe-42fe-83da-0aeccb301182-config\") pod \"machine-approver-56656f9798-2p57l\" (UID: \"2af5c820-fefe-42fe-83da-0aeccb301182\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2p57l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.415857 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.415930 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83cc5fe8-7965-46aa-b846-33d1b8d317f8-console-serving-cert\") pod \"console-f9d7485db-gp4nv\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.416443 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83cc5fe8-7965-46aa-b846-33d1b8d317f8-console-oauth-config\") pod \"console-f9d7485db-gp4nv\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.416573 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bts25\" (UniqueName: \"kubernetes.io/projected/cef36034-4148-4107-9c32-4b75ac7046b5-kube-api-access-bts25\") pod 
\"ingress-operator-5b745b69d9-47wc2\" (UID: \"cef36034-4148-4107-9c32-4b75ac7046b5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47wc2" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.417276 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/33210b82-c473-4bf8-b40d-a29b00833ea0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rdzrw\" (UID: \"33210b82-c473-4bf8-b40d-a29b00833ea0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.417595 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.417960 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c61fa19c-7742-4ab1-b3ca-9607723fe94d-audit-policies\") pod \"apiserver-7bbb656c7d-pwk76\" (UID: \"c61fa19c-7742-4ab1-b3ca-9607723fe94d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.418029 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0ebeb47-d72b-4d2f-b2e8-aee1f880da1e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lpktp\" (UID: \"b0ebeb47-d72b-4d2f-b2e8-aee1f880da1e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lpktp" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.418144 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33210b82-c473-4bf8-b40d-a29b00833ea0-client-ca\") pod \"controller-manager-879f6c89f-rdzrw\" (UID: \"33210b82-c473-4bf8-b40d-a29b00833ea0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.418178 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c61fa19c-7742-4ab1-b3ca-9607723fe94d-audit-dir\") pod \"apiserver-7bbb656c7d-pwk76\" (UID: \"c61fa19c-7742-4ab1-b3ca-9607723fe94d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.418249 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrsbs\" (UniqueName: \"kubernetes.io/projected/2a94efc3-19bc-47ce-b48a-4f4b3351d955-kube-api-access-xrsbs\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.418310 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96ad696c-eaac-4e34-a986-d31a24d8d7bb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-n5blr\" (UID: \"96ad696c-eaac-4e34-a986-d31a24d8d7bb\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n5blr" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.418383 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bf986437-9998-4cd1-90b8-b2e0716e8d37-default-certificate\") pod \"router-default-5444994796-vbcgc\" (UID: \"bf986437-9998-4cd1-90b8-b2e0716e8d37\") " pod="openshift-ingress/router-default-5444994796-vbcgc" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.418468 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d1346d7f-25da-4035-9c88-1f96c034d795-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ttnrg\" (UID: \"d1346d7f-25da-4035-9c88-1f96c034d795\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.418501 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2a94efc3-19bc-47ce-b48a-4f4b3351d955-audit\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.418558 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33210b82-c473-4bf8-b40d-a29b00833ea0-serving-cert\") pod \"controller-manager-879f6c89f-rdzrw\" (UID: \"33210b82-c473-4bf8-b40d-a29b00833ea0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.418590 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2a94efc3-19bc-47ce-b48a-4f4b3351d955-etcd-client\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.418691 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2a94efc3-19bc-47ce-b48a-4f4b3351d955-etcd-serving-ca\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.418759 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96ad696c-eaac-4e34-a986-d31a24d8d7bb-config\") pod \"kube-controller-manager-operator-78b949d7b-n5blr\" (UID: \"96ad696c-eaac-4e34-a986-d31a24d8d7bb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n5blr" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.418843 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f4287bc-c7a7-4ee2-8212-3611b978e2e8-client-ca\") pod \"route-controller-manager-6576b87f9c-knkww\" (UID: \"0f4287bc-c7a7-4ee2-8212-3611b978e2e8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 
09:03:25.418882 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c61fa19c-7742-4ab1-b3ca-9607723fe94d-etcd-client\") pod \"apiserver-7bbb656c7d-pwk76\" (UID: \"c61fa19c-7742-4ab1-b3ca-9607723fe94d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.419128 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1346d7f-25da-4035-9c88-1f96c034d795-serving-cert\") pod \"openshift-config-operator-7777fb866f-ttnrg\" (UID: \"d1346d7f-25da-4035-9c88-1f96c034d795\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.419198 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83cc5fe8-7965-46aa-b846-33d1b8d317f8-service-ca\") pod \"console-f9d7485db-gp4nv\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.419276 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00ab4f1c-2cc4-46b0-9e22-df58e5327352-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hkd74\" (UID: \"00ab4f1c-2cc4-46b0-9e22-df58e5327352\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.419357 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cef36034-4148-4107-9c32-4b75ac7046b5-bound-sa-token\") pod \"ingress-operator-5b745b69d9-47wc2\" (UID: \"cef36034-4148-4107-9c32-4b75ac7046b5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47wc2" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.419410 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5fe5bd86-a665-4a73-8892-fd12a784463d-audit-dir\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.419534 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00ab4f1c-2cc4-46b0-9e22-df58e5327352-serving-cert\") pod \"authentication-operator-69f744f599-hkd74\" (UID: \"00ab4f1c-2cc4-46b0-9e22-df58e5327352\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.419606 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c61fa19c-7742-4ab1-b3ca-9607723fe94d-serving-cert\") pod \"apiserver-7bbb656c7d-pwk76\" (UID: \"c61fa19c-7742-4ab1-b3ca-9607723fe94d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.419642 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c61fa19c-7742-4ab1-b3ca-9607723fe94d-trusted-ca-bundle\") pod 
\"apiserver-7bbb656c7d-pwk76\" (UID: \"c61fa19c-7742-4ab1-b3ca-9607723fe94d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.419938 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvh66\" (UniqueName: \"kubernetes.io/projected/b0ebeb47-d72b-4d2f-b2e8-aee1f880da1e-kube-api-access-cvh66\") pod \"openshift-apiserver-operator-796bbdcf4f-lpktp\" (UID: \"b0ebeb47-d72b-4d2f-b2e8-aee1f880da1e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lpktp" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.420019 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.420104 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmvvr\" (UniqueName: \"kubernetes.io/projected/0268e3ae-370f-43f0-9528-ff84b5983dac-kube-api-access-xmvvr\") pod \"cluster-samples-operator-665b6dd947-9gw75\" (UID: \"0268e3ae-370f-43f0-9528-ff84b5983dac\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9gw75" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.420141 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2a94efc3-19bc-47ce-b48a-4f4b3351d955-node-pullsecrets\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.420212 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a94efc3-19bc-47ce-b48a-4f4b3351d955-serving-cert\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.420317 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cef36034-4148-4107-9c32-4b75ac7046b5-metrics-tls\") pod \"ingress-operator-5b745b69d9-47wc2\" (UID: \"cef36034-4148-4107-9c32-4b75ac7046b5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47wc2" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.420371 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww7vr\" (UniqueName: \"kubernetes.io/projected/a8d26ab0-33c3-4eb7-928b-ffba996579d9-kube-api-access-ww7vr\") pod \"downloads-7954f5f757-l8ckt\" (UID: \"a8d26ab0-33c3-4eb7-928b-ffba996579d9\") " pod="openshift-console/downloads-7954f5f757-l8ckt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.420421 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c61fa19c-7742-4ab1-b3ca-9607723fe94d-encryption-config\") pod \"apiserver-7bbb656c7d-pwk76\" (UID: \"c61fa19c-7742-4ab1-b3ca-9607723fe94d\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.420522 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33210b82-c473-4bf8-b40d-a29b00833ea0-client-ca\") pod \"controller-manager-879f6c89f-rdzrw\" (UID: \"33210b82-c473-4bf8-b40d-a29b00833ea0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.421001 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2a94efc3-19bc-47ce-b48a-4f4b3351d955-encryption-config\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.421185 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33210b82-c473-4bf8-b40d-a29b00833ea0-config\") pod \"controller-manager-879f6c89f-rdzrw\" (UID: \"33210b82-c473-4bf8-b40d-a29b00833ea0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.421249 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf986437-9998-4cd1-90b8-b2e0716e8d37-service-ca-bundle\") pod \"router-default-5444994796-vbcgc\" (UID: \"bf986437-9998-4cd1-90b8-b2e0716e8d37\") " pod="openshift-ingress/router-default-5444994796-vbcgc" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.421324 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5fe5bd86-a665-4a73-8892-fd12a784463d-audit-policies\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.421365 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq96w\" (UniqueName: \"kubernetes.io/projected/ce03ae75-703f-4d6a-b98a-e866689b08e3-kube-api-access-xq96w\") pod \"control-plane-machine-set-operator-78cbb6b69f-qtqdv\" (UID: \"ce03ae75-703f-4d6a-b98a-e866689b08e3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtqdv" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.421399 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88jbt\" (UniqueName: \"kubernetes.io/projected/2af5c820-fefe-42fe-83da-0aeccb301182-kube-api-access-88jbt\") pod \"machine-approver-56656f9798-2p57l\" (UID: \"2af5c820-fefe-42fe-83da-0aeccb301182\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2p57l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.421442 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00ab4f1c-2cc4-46b0-9e22-df58e5327352-service-ca-bundle\") pod \"authentication-operator-69f744f599-hkd74\" (UID: \"00ab4f1c-2cc4-46b0-9e22-df58e5327352\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 
09:03:25.421490 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.421530 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/93359d96-ca07-4f0c-8b0a-a23f1635dcb1-signing-cabundle\") pod \"service-ca-9c57cc56f-kcsj5\" (UID: \"93359d96-ca07-4f0c-8b0a-a23f1635dcb1\") " pod="openshift-service-ca/service-ca-9c57cc56f-kcsj5" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.421558 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f4287bc-c7a7-4ee2-8212-3611b978e2e8-serving-cert\") pod \"route-controller-manager-6576b87f9c-knkww\" (UID: \"0f4287bc-c7a7-4ee2-8212-3611b978e2e8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.421585 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96ad696c-eaac-4e34-a986-d31a24d8d7bb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-n5blr\" (UID: \"96ad696c-eaac-4e34-a986-d31a24d8d7bb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n5blr" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.421682 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0268e3ae-370f-43f0-9528-ff84b5983dac-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9gw75\" (UID: \"0268e3ae-370f-43f0-9528-ff84b5983dac\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9gw75" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.423000 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83cc5fe8-7965-46aa-b846-33d1b8d317f8-console-config\") pod \"console-f9d7485db-gp4nv\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.433914 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83cc5fe8-7965-46aa-b846-33d1b8d317f8-oauth-serving-cert\") pod \"console-f9d7485db-gp4nv\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.442875 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f4287bc-c7a7-4ee2-8212-3611b978e2e8-client-ca\") pod \"route-controller-manager-6576b87f9c-knkww\" (UID: \"0f4287bc-c7a7-4ee2-8212-3611b978e2e8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.452951 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2af5c820-fefe-42fe-83da-0aeccb301182-config\") pod \"machine-approver-56656f9798-2p57l\" (UID: \"2af5c820-fefe-42fe-83da-0aeccb301182\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2p57l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.456247 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bs7f7"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.456838 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.457238 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.460241 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.460631 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xwn99"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.461297 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fnk7f"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.462177 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00ab4f1c-2cc4-46b0-9e22-df58e5327352-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hkd74\" (UID: \"00ab4f1c-2cc4-46b0-9e22-df58e5327352\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.463068 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c61fa19c-7742-4ab1-b3ca-9607723fe94d-audit-dir\") pod \"apiserver-7bbb656c7d-pwk76\" (UID: \"c61fa19c-7742-4ab1-b3ca-9607723fe94d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.463289 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.463612 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.463765 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83cc5fe8-7965-46aa-b846-33d1b8d317f8-service-ca\") pod \"console-f9d7485db-gp4nv\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.463784 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.464245 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.464309 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33210b82-c473-4bf8-b40d-a29b00833ea0-config\") pod \"controller-manager-879f6c89f-rdzrw\" (UID: \"33210b82-c473-4bf8-b40d-a29b00833ea0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.464442 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/33210b82-c473-4bf8-b40d-a29b00833ea0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rdzrw\" (UID: \"33210b82-c473-4bf8-b40d-a29b00833ea0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.464633 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d1346d7f-25da-4035-9c88-1f96c034d795-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ttnrg\" (UID: \"d1346d7f-25da-4035-9c88-1f96c034d795\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.465066 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c61fa19c-7742-4ab1-b3ca-9607723fe94d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pwk76\" (UID: \"c61fa19c-7742-4ab1-b3ca-9607723fe94d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.465273 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.465418 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1346d7f-25da-4035-9c88-1f96c034d795-serving-cert\") pod \"openshift-config-operator-7777fb866f-ttnrg\" (UID: \"d1346d7f-25da-4035-9c88-1f96c034d795\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.465604 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bs7f7" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.465924 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c61fa19c-7742-4ab1-b3ca-9607723fe94d-serving-cert\") pod \"apiserver-7bbb656c7d-pwk76\" (UID: \"c61fa19c-7742-4ab1-b3ca-9607723fe94d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.466119 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.466366 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.466582 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-xwn99" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.467363 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0268e3ae-370f-43f0-9528-ff84b5983dac-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9gw75\" (UID: \"0268e3ae-370f-43f0-9528-ff84b5983dac\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9gw75" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.467497 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qdxg\" (UniqueName: \"kubernetes.io/projected/00ab4f1c-2cc4-46b0-9e22-df58e5327352-kube-api-access-8qdxg\") pod \"authentication-operator-69f744f599-hkd74\" (UID: \"00ab4f1c-2cc4-46b0-9e22-df58e5327352\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.467525 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83cc5fe8-7965-46aa-b846-33d1b8d317f8-console-config\") pod \"console-f9d7485db-gp4nv\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.467547 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs9w7\" (UniqueName: \"kubernetes.io/projected/5fe5bd86-a665-4a73-8892-fd12a784463d-kube-api-access-cs9w7\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.467510 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00ab4f1c-2cc4-46b0-9e22-df58e5327352-service-ca-bundle\") pod \"authentication-operator-69f744f599-hkd74\" (UID: \"00ab4f1c-2cc4-46b0-9e22-df58e5327352\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.467578 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2a94efc3-19bc-47ce-b48a-4f4b3351d955-image-import-ca\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.467606 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2a94efc3-19bc-47ce-b48a-4f4b3351d955-audit-dir\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.467642 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wwdq\" (UniqueName: \"kubernetes.io/projected/83cc5fe8-7965-46aa-b846-33d1b8d317f8-kube-api-access-2wwdq\") pod \"console-f9d7485db-gp4nv\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.467690 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a386557-0e05-4f84-b5fc-a389083d2743-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-n65sj\" (UID: \"5a386557-0e05-4f84-b5fc-a389083d2743\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n65sj" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.467711 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjv7v\" (UniqueName: \"kubernetes.io/projected/93359d96-ca07-4f0c-8b0a-a23f1635dcb1-kube-api-access-xjv7v\") pod \"service-ca-9c57cc56f-kcsj5\" (UID: \"93359d96-ca07-4f0c-8b0a-a23f1635dcb1\") " pod="openshift-service-ca/service-ca-9c57cc56f-kcsj5" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.467771 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a386557-0e05-4f84-b5fc-a389083d2743-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-n65sj\" (UID: \"5a386557-0e05-4f84-b5fc-a389083d2743\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n65sj" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.467791 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bf986437-9998-4cd1-90b8-b2e0716e8d37-stats-auth\") pod \"router-default-5444994796-vbcgc\" (UID: \"bf986437-9998-4cd1-90b8-b2e0716e8d37\") " pod="openshift-ingress/router-default-5444994796-vbcgc" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.467813 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sktbs\" (UniqueName: \"kubernetes.io/projected/0f4287bc-c7a7-4ee2-8212-3611b978e2e8-kube-api-access-sktbs\") pod \"route-controller-manager-6576b87f9c-knkww\" (UID: \"0f4287bc-c7a7-4ee2-8212-3611b978e2e8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.467884 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83cc5fe8-7965-46aa-b846-33d1b8d317f8-trusted-ca-bundle\") pod \"console-f9d7485db-gp4nv\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.467903 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a94efc3-19bc-47ce-b48a-4f4b3351d955-trusted-ca-bundle\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.467927 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/93359d96-ca07-4f0c-8b0a-a23f1635dcb1-signing-key\") pod \"service-ca-9c57cc56f-kcsj5\" (UID: \"93359d96-ca07-4f0c-8b0a-a23f1635dcb1\") " pod="openshift-service-ca/service-ca-9c57cc56f-kcsj5" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.467963 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dwgd\" 
(UniqueName: \"kubernetes.io/projected/bf986437-9998-4cd1-90b8-b2e0716e8d37-kube-api-access-5dwgd\") pod \"router-default-5444994796-vbcgc\" (UID: \"bf986437-9998-4cd1-90b8-b2e0716e8d37\") " pod="openshift-ingress/router-default-5444994796-vbcgc" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.467994 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0ebeb47-d72b-4d2f-b2e8-aee1f880da1e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lpktp\" (UID: \"b0ebeb47-d72b-4d2f-b2e8-aee1f880da1e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lpktp" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.468014 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f4287bc-c7a7-4ee2-8212-3611b978e2e8-config\") pod \"route-controller-manager-6576b87f9c-knkww\" (UID: \"0f4287bc-c7a7-4ee2-8212-3611b978e2e8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.468021 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c61fa19c-7742-4ab1-b3ca-9607723fe94d-audit-policies\") pod \"apiserver-7bbb656c7d-pwk76\" (UID: \"c61fa19c-7742-4ab1-b3ca-9607723fe94d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.468034 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9b4k\" (UniqueName: \"kubernetes.io/projected/d1346d7f-25da-4035-9c88-1f96c034d795-kube-api-access-j9b4k\") pod \"openshift-config-operator-7777fb866f-ttnrg\" (UID: \"d1346d7f-25da-4035-9c88-1f96c034d795\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.468104 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.468133 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.468155 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.468176 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2a94efc3-19bc-47ce-b48a-4f4b3351d955-config\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.468205 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2af5c820-fefe-42fe-83da-0aeccb301182-machine-approver-tls\") pod \"machine-approver-56656f9798-2p57l\" (UID: \"2af5c820-fefe-42fe-83da-0aeccb301182\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2p57l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.468226 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzsbm\" (UniqueName: \"kubernetes.io/projected/33210b82-c473-4bf8-b40d-a29b00833ea0-kube-api-access-rzsbm\") pod \"controller-manager-879f6c89f-rdzrw\" (UID: \"33210b82-c473-4bf8-b40d-a29b00833ea0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.468249 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf986437-9998-4cd1-90b8-b2e0716e8d37-metrics-certs\") pod \"router-default-5444994796-vbcgc\" (UID: \"bf986437-9998-4cd1-90b8-b2e0716e8d37\") " pod="openshift-ingress/router-default-5444994796-vbcgc" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.468274 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.468298 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2af5c820-fefe-42fe-83da-0aeccb301182-auth-proxy-config\") pod \"machine-approver-56656f9798-2p57l\" (UID: \"2af5c820-fefe-42fe-83da-0aeccb301182\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2p57l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.468320 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.468342 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r6fj\" (UniqueName: \"kubernetes.io/projected/5a386557-0e05-4f84-b5fc-a389083d2743-kube-api-access-2r6fj\") pod \"openshift-controller-manager-operator-756b6f6bc6-n65sj\" (UID: \"5a386557-0e05-4f84-b5fc-a389083d2743\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n65sj" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.468361 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/cef36034-4148-4107-9c32-4b75ac7046b5-trusted-ca\") pod \"ingress-operator-5b745b69d9-47wc2\" (UID: \"cef36034-4148-4107-9c32-4b75ac7046b5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47wc2" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.468457 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8dt8\" (UniqueName: \"kubernetes.io/projected/c61fa19c-7742-4ab1-b3ca-9607723fe94d-kube-api-access-k8dt8\") pod \"apiserver-7bbb656c7d-pwk76\" (UID: \"c61fa19c-7742-4ab1-b3ca-9607723fe94d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.468478 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c61fa19c-7742-4ab1-b3ca-9607723fe94d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pwk76\" (UID: \"c61fa19c-7742-4ab1-b3ca-9607723fe94d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.468501 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00ab4f1c-2cc4-46b0-9e22-df58e5327352-config\") pod \"authentication-operator-69f744f599-hkd74\" (UID: \"00ab4f1c-2cc4-46b0-9e22-df58e5327352\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.468520 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.468542 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.468620 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33210b82-c473-4bf8-b40d-a29b00833ea0-serving-cert\") pod \"controller-manager-879f6c89f-rdzrw\" (UID: \"33210b82-c473-4bf8-b40d-a29b00833ea0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.468875 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0ebeb47-d72b-4d2f-b2e8-aee1f880da1e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lpktp\" (UID: \"b0ebeb47-d72b-4d2f-b2e8-aee1f880da1e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lpktp" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.468997 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a386557-0e05-4f84-b5fc-a389083d2743-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-n65sj\" (UID: 
\"5a386557-0e05-4f84-b5fc-a389083d2743\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n65sj" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.469127 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f4287bc-c7a7-4ee2-8212-3611b978e2e8-serving-cert\") pod \"route-controller-manager-6576b87f9c-knkww\" (UID: \"0f4287bc-c7a7-4ee2-8212-3611b978e2e8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.470032 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00ab4f1c-2cc4-46b0-9e22-df58e5327352-config\") pod \"authentication-operator-69f744f599-hkd74\" (UID: \"00ab4f1c-2cc4-46b0-9e22-df58e5327352\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.470099 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2af5c820-fefe-42fe-83da-0aeccb301182-auth-proxy-config\") pod \"machine-approver-56656f9798-2p57l\" (UID: \"2af5c820-fefe-42fe-83da-0aeccb301182\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2p57l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.469998 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c61fa19c-7742-4ab1-b3ca-9607723fe94d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pwk76\" (UID: \"c61fa19c-7742-4ab1-b3ca-9607723fe94d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.470466 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c61fa19c-7742-4ab1-b3ca-9607723fe94d-etcd-client\") pod \"apiserver-7bbb656c7d-pwk76\" (UID: \"c61fa19c-7742-4ab1-b3ca-9607723fe94d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.470521 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83cc5fe8-7965-46aa-b846-33d1b8d317f8-trusted-ca-bundle\") pod \"console-f9d7485db-gp4nv\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.470737 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f4287bc-c7a7-4ee2-8212-3611b978e2e8-config\") pod \"route-controller-manager-6576b87f9c-knkww\" (UID: \"0f4287bc-c7a7-4ee2-8212-3611b978e2e8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.471765 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0ebeb47-d72b-4d2f-b2e8-aee1f880da1e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lpktp\" (UID: \"b0ebeb47-d72b-4d2f-b2e8-aee1f880da1e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lpktp" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.471972 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83cc5fe8-7965-46aa-b846-33d1b8d317f8-console-oauth-config\") pod \"console-f9d7485db-gp4nv\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.471971 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a386557-0e05-4f84-b5fc-a389083d2743-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-n65sj\" (UID: \"5a386557-0e05-4f84-b5fc-a389083d2743\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n65sj" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.472651 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83cc5fe8-7965-46aa-b846-33d1b8d317f8-console-serving-cert\") pod \"console-f9d7485db-gp4nv\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.472711 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-gp4nv"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.473881 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c61fa19c-7742-4ab1-b3ca-9607723fe94d-encryption-config\") pod \"apiserver-7bbb656c7d-pwk76\" (UID: \"c61fa19c-7742-4ab1-b3ca-9607723fe94d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.474997 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.475316 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.478295 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00ab4f1c-2cc4-46b0-9e22-df58e5327352-serving-cert\") pod \"authentication-operator-69f744f599-hkd74\" (UID: \"00ab4f1c-2cc4-46b0-9e22-df58e5327352\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.480525 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-8nn2k"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.480635 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rdzrw"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.485218 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n65sj"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.485270 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ngd6n"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.485281 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hkd74"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.488634 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-hzk7b"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.488997 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-26msj"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.492943 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2klp9"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.495575 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2af5c820-fefe-42fe-83da-0aeccb301182-machine-approver-tls\") pod \"machine-approver-56656f9798-2p57l\" (UID: \"2af5c820-fefe-42fe-83da-0aeccb301182\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2p57l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.495663 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9gw75"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.495701 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-htl5l"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.496786 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9d827"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.497669 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8wdp6"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.498870 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-l8ckt"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.501056 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vjnc8"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.501618 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7m8b7"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.504094 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.504457 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.511496 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-47wc2"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.511580 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kcsj5"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.511598 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.514425 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n5blr"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.515678 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq"] Jan 31 09:03:25 crc 
kubenswrapper[4830]: I0131 09:03:25.516647 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.516689 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-pkx9p"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.517782 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rhvlq"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.519626 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-bsxrt"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.520205 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rhvlq" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.520328 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-bsxrt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.520209 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xwn99"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.521403 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtqdv"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.522272 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.523384 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x8zjt"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.525011 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9rz4w"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.526059 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.526606 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5blhw"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.527654 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-skqcc"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.528840 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.530567 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497500-66dl8"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.531668 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bs7f7"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.533299 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fnk7f"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.534859 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["hostpath-provisioner/csi-hostpathplugin-rhvlq"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.534929 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.538079 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-whjm4"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.539187 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-whjm4"] Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.539422 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-whjm4" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.554823 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.569546 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a94efc3-19bc-47ce-b48a-4f4b3351d955-trusted-ca-bundle\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.569620 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/93359d96-ca07-4f0c-8b0a-a23f1635dcb1-signing-key\") pod \"service-ca-9c57cc56f-kcsj5\" (UID: \"93359d96-ca07-4f0c-8b0a-a23f1635dcb1\") " pod="openshift-service-ca/service-ca-9c57cc56f-kcsj5" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.569659 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dwgd\" (UniqueName: \"kubernetes.io/projected/bf986437-9998-4cd1-90b8-b2e0716e8d37-kube-api-access-5dwgd\") pod \"router-default-5444994796-vbcgc\" (UID: \"bf986437-9998-4cd1-90b8-b2e0716e8d37\") " pod="openshift-ingress/router-default-5444994796-vbcgc" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.569710 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.569795 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.569822 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.569847 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a94efc3-19bc-47ce-b48a-4f4b3351d955-config\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.569875 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf986437-9998-4cd1-90b8-b2e0716e8d37-metrics-certs\") pod \"router-default-5444994796-vbcgc\" (UID: \"bf986437-9998-4cd1-90b8-b2e0716e8d37\") " pod="openshift-ingress/router-default-5444994796-vbcgc" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.569923 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.569969 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570016 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cef36034-4148-4107-9c32-4b75ac7046b5-trusted-ca\") pod \"ingress-operator-5b745b69d9-47wc2\" (UID: \"cef36034-4148-4107-9c32-4b75ac7046b5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47wc2" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570044 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570069 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570113 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570141 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/ce03ae75-703f-4d6a-b98a-e866689b08e3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qtqdv\" (UID: \"ce03ae75-703f-4d6a-b98a-e866689b08e3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtqdv" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570180 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bts25\" (UniqueName: \"kubernetes.io/projected/cef36034-4148-4107-9c32-4b75ac7046b5-kube-api-access-bts25\") pod \"ingress-operator-5b745b69d9-47wc2\" (UID: \"cef36034-4148-4107-9c32-4b75ac7046b5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47wc2" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570207 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570232 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrsbs\" (UniqueName: \"kubernetes.io/projected/2a94efc3-19bc-47ce-b48a-4f4b3351d955-kube-api-access-xrsbs\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570255 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96ad696c-eaac-4e34-a986-d31a24d8d7bb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-n5blr\" (UID: \"96ad696c-eaac-4e34-a986-d31a24d8d7bb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n5blr" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570285 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bf986437-9998-4cd1-90b8-b2e0716e8d37-default-certificate\") pod \"router-default-5444994796-vbcgc\" (UID: \"bf986437-9998-4cd1-90b8-b2e0716e8d37\") " pod="openshift-ingress/router-default-5444994796-vbcgc" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570311 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2a94efc3-19bc-47ce-b48a-4f4b3351d955-audit\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570336 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2a94efc3-19bc-47ce-b48a-4f4b3351d955-etcd-client\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570359 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2a94efc3-19bc-47ce-b48a-4f4b3351d955-etcd-serving-ca\") pod \"apiserver-76f77b778f-htl5l\" (UID: 
\"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570382 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96ad696c-eaac-4e34-a986-d31a24d8d7bb-config\") pod \"kube-controller-manager-operator-78b949d7b-n5blr\" (UID: \"96ad696c-eaac-4e34-a986-d31a24d8d7bb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n5blr" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570409 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cef36034-4148-4107-9c32-4b75ac7046b5-bound-sa-token\") pod \"ingress-operator-5b745b69d9-47wc2\" (UID: \"cef36034-4148-4107-9c32-4b75ac7046b5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47wc2" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570431 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5fe5bd86-a665-4a73-8892-fd12a784463d-audit-dir\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570472 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570501 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a94efc3-19bc-47ce-b48a-4f4b3351d955-serving-cert\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570520 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cef36034-4148-4107-9c32-4b75ac7046b5-metrics-tls\") pod \"ingress-operator-5b745b69d9-47wc2\" (UID: \"cef36034-4148-4107-9c32-4b75ac7046b5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47wc2" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570548 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2a94efc3-19bc-47ce-b48a-4f4b3351d955-node-pullsecrets\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570580 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2a94efc3-19bc-47ce-b48a-4f4b3351d955-encryption-config\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570603 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf986437-9998-4cd1-90b8-b2e0716e8d37-service-ca-bundle\") pod \"router-default-5444994796-vbcgc\" (UID: \"bf986437-9998-4cd1-90b8-b2e0716e8d37\") " pod="openshift-ingress/router-default-5444994796-vbcgc" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570628 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5fe5bd86-a665-4a73-8892-fd12a784463d-audit-policies\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570651 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq96w\" (UniqueName: \"kubernetes.io/projected/ce03ae75-703f-4d6a-b98a-e866689b08e3-kube-api-access-xq96w\") pod \"control-plane-machine-set-operator-78cbb6b69f-qtqdv\" (UID: \"ce03ae75-703f-4d6a-b98a-e866689b08e3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtqdv" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570683 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570708 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/93359d96-ca07-4f0c-8b0a-a23f1635dcb1-signing-cabundle\") pod \"service-ca-9c57cc56f-kcsj5\" (UID: \"93359d96-ca07-4f0c-8b0a-a23f1635dcb1\") " pod="openshift-service-ca/service-ca-9c57cc56f-kcsj5" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570756 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96ad696c-eaac-4e34-a986-d31a24d8d7bb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-n5blr\" (UID: \"96ad696c-eaac-4e34-a986-d31a24d8d7bb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n5blr" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570802 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs9w7\" (UniqueName: \"kubernetes.io/projected/5fe5bd86-a665-4a73-8892-fd12a784463d-kube-api-access-cs9w7\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570827 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2a94efc3-19bc-47ce-b48a-4f4b3351d955-image-import-ca\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570849 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2a94efc3-19bc-47ce-b48a-4f4b3351d955-audit-dir\") pod \"apiserver-76f77b778f-htl5l\" (UID: 
\"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570874 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjv7v\" (UniqueName: \"kubernetes.io/projected/93359d96-ca07-4f0c-8b0a-a23f1635dcb1-kube-api-access-xjv7v\") pod \"service-ca-9c57cc56f-kcsj5\" (UID: \"93359d96-ca07-4f0c-8b0a-a23f1635dcb1\") " pod="openshift-service-ca/service-ca-9c57cc56f-kcsj5" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.570900 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bf986437-9998-4cd1-90b8-b2e0716e8d37-stats-auth\") pod \"router-default-5444994796-vbcgc\" (UID: \"bf986437-9998-4cd1-90b8-b2e0716e8d37\") " pod="openshift-ingress/router-default-5444994796-vbcgc" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.571262 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.571946 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a94efc3-19bc-47ce-b48a-4f4b3351d955-trusted-ca-bundle\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.571969 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2a94efc3-19bc-47ce-b48a-4f4b3351d955-audit\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.572028 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2a94efc3-19bc-47ce-b48a-4f4b3351d955-audit-dir\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.572050 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a94efc3-19bc-47ce-b48a-4f4b3351d955-config\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.572090 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5fe5bd86-a665-4a73-8892-fd12a784463d-audit-dir\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.572350 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5fe5bd86-a665-4a73-8892-fd12a784463d-audit-policies\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: 
\"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.572785 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.576218 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2a94efc3-19bc-47ce-b48a-4f4b3351d955-image-import-ca\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.576632 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.573162 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2a94efc3-19bc-47ce-b48a-4f4b3351d955-etcd-serving-ca\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.577146 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.577910 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2a94efc3-19bc-47ce-b48a-4f4b3351d955-encryption-config\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.577746 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.578185 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.578319 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: 
\"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.578451 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2a94efc3-19bc-47ce-b48a-4f4b3351d955-node-pullsecrets\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.578908 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.579119 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.579399 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.580021 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.581280 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cef36034-4148-4107-9c32-4b75ac7046b5-metrics-tls\") pod \"ingress-operator-5b745b69d9-47wc2\" (UID: \"cef36034-4148-4107-9c32-4b75ac7046b5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47wc2" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.582374 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2a94efc3-19bc-47ce-b48a-4f4b3351d955-etcd-client\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.586738 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.589017 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a94efc3-19bc-47ce-b48a-4f4b3351d955-serving-cert\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.594717 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.614669 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.635782 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.655657 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.674131 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.694604 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.715312 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.739347 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.742256 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cef36034-4148-4107-9c32-4b75ac7046b5-trusted-ca\") pod \"ingress-operator-5b745b69d9-47wc2\" (UID: \"cef36034-4148-4107-9c32-4b75ac7046b5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47wc2" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.754662 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.774383 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.796993 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.815494 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.836006 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.857654 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.865276 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/96ad696c-eaac-4e34-a986-d31a24d8d7bb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-n5blr\" (UID: \"96ad696c-eaac-4e34-a986-d31a24d8d7bb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n5blr" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.877068 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.883465 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96ad696c-eaac-4e34-a986-d31a24d8d7bb-config\") pod \"kube-controller-manager-operator-78b949d7b-n5blr\" (UID: \"96ad696c-eaac-4e34-a986-d31a24d8d7bb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n5blr" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.895303 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.914862 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.926279 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bf986437-9998-4cd1-90b8-b2e0716e8d37-stats-auth\") pod \"router-default-5444994796-vbcgc\" (UID: \"bf986437-9998-4cd1-90b8-b2e0716e8d37\") " pod="openshift-ingress/router-default-5444994796-vbcgc" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.934355 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.945344 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf986437-9998-4cd1-90b8-b2e0716e8d37-metrics-certs\") pod \"router-default-5444994796-vbcgc\" (UID: \"bf986437-9998-4cd1-90b8-b2e0716e8d37\") " pod="openshift-ingress/router-default-5444994796-vbcgc" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.955826 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.975144 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.985796 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bf986437-9998-4cd1-90b8-b2e0716e8d37-default-certificate\") pod \"router-default-5444994796-vbcgc\" (UID: \"bf986437-9998-4cd1-90b8-b2e0716e8d37\") " pod="openshift-ingress/router-default-5444994796-vbcgc" Jan 31 09:03:25 crc kubenswrapper[4830]: I0131 09:03:25.995131 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.004016 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf986437-9998-4cd1-90b8-b2e0716e8d37-service-ca-bundle\") pod \"router-default-5444994796-vbcgc\" (UID: \"bf986437-9998-4cd1-90b8-b2e0716e8d37\") " 
pod="openshift-ingress/router-default-5444994796-vbcgc" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.021158 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.054948 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.074831 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.095571 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.114721 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.135129 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.155446 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.174682 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.186317 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ce03ae75-703f-4d6a-b98a-e866689b08e3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qtqdv\" (UID: \"ce03ae75-703f-4d6a-b98a-e866689b08e3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtqdv" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.195656 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.216051 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.235153 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.255800 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.275085 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.296195 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.315041 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 09:03:26 crc 
kubenswrapper[4830]: I0131 09:03:26.323876 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/93359d96-ca07-4f0c-8b0a-a23f1635dcb1-signing-key\") pod \"service-ca-9c57cc56f-kcsj5\" (UID: \"93359d96-ca07-4f0c-8b0a-a23f1635dcb1\") " pod="openshift-service-ca/service-ca-9c57cc56f-kcsj5" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.333992 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.352684 4830 request.go:700] Waited for 1.004485694s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dsigning-cabundle&limit=500&resourceVersion=0 Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.354416 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.363474 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/93359d96-ca07-4f0c-8b0a-a23f1635dcb1-signing-cabundle\") pod \"service-ca-9c57cc56f-kcsj5\" (UID: \"93359d96-ca07-4f0c-8b0a-a23f1635dcb1\") " pod="openshift-service-ca/service-ca-9c57cc56f-kcsj5" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.375134 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.394574 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.434303 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.455008 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.474683 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.494543 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.516460 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.534715 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.554808 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.575829 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.595473 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 31 09:03:26 crc 
kubenswrapper[4830]: I0131 09:03:26.614942 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.635535 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.655499 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.675994 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.695435 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.715397 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.735219 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.755168 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.794934 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qdxg\" (UniqueName: \"kubernetes.io/projected/00ab4f1c-2cc4-46b0-9e22-df58e5327352-kube-api-access-8qdxg\") pod \"authentication-operator-69f744f599-hkd74\" (UID: \"00ab4f1c-2cc4-46b0-9e22-df58e5327352\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.811513 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88jbt\" (UniqueName: \"kubernetes.io/projected/2af5c820-fefe-42fe-83da-0aeccb301182-kube-api-access-88jbt\") pod \"machine-approver-56656f9798-2p57l\" (UID: \"2af5c820-fefe-42fe-83da-0aeccb301182\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2p57l" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.817849 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.851697 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmvvr\" (UniqueName: \"kubernetes.io/projected/0268e3ae-370f-43f0-9528-ff84b5983dac-kube-api-access-xmvvr\") pod \"cluster-samples-operator-665b6dd947-9gw75\" (UID: \"0268e3ae-370f-43f0-9528-ff84b5983dac\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9gw75" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.855811 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.859897 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.875182 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.896380 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.916024 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.946918 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.964006 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.975146 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 09:03:26 crc kubenswrapper[4830]: I0131 09:03:26.995485 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.014686 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.029519 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2p57l" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.036240 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.039469 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hkd74"] Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.044370 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9gw75" Jan 31 09:03:27 crc kubenswrapper[4830]: W0131 09:03:27.049620 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00ab4f1c_2cc4_46b0_9e22_df58e5327352.slice/crio-6593b7539584220e3eafc6279a246dfb8fad6b341d20b8f1e04f5581e6d50d89 WatchSource:0}: Error finding container 6593b7539584220e3eafc6279a246dfb8fad6b341d20b8f1e04f5581e6d50d89: Status 404 returned error can't find the container with id 6593b7539584220e3eafc6279a246dfb8fad6b341d20b8f1e04f5581e6d50d89 Jan 31 09:03:27 crc kubenswrapper[4830]: W0131 09:03:27.050387 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2af5c820_fefe_42fe_83da_0aeccb301182.slice/crio-8ad022512797386ad20bc5536fabe05232b16cb172dfa089a6deb0b313e41d1b WatchSource:0}: Error finding container 8ad022512797386ad20bc5536fabe05232b16cb172dfa089a6deb0b313e41d1b: Status 404 returned error can't find the container with id 8ad022512797386ad20bc5536fabe05232b16cb172dfa089a6deb0b313e41d1b Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.055505 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.059850 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2p57l" event={"ID":"2af5c820-fefe-42fe-83da-0aeccb301182","Type":"ContainerStarted","Data":"8ad022512797386ad20bc5536fabe05232b16cb172dfa089a6deb0b313e41d1b"} Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.060888 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" event={"ID":"00ab4f1c-2cc4-46b0-9e22-df58e5327352","Type":"ContainerStarted","Data":"6593b7539584220e3eafc6279a246dfb8fad6b341d20b8f1e04f5581e6d50d89"} Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.093243 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvh66\" (UniqueName: \"kubernetes.io/projected/b0ebeb47-d72b-4d2f-b2e8-aee1f880da1e-kube-api-access-cvh66\") pod \"openshift-apiserver-operator-796bbdcf4f-lpktp\" (UID: \"b0ebeb47-d72b-4d2f-b2e8-aee1f880da1e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lpktp" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.113944 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lpktp" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.114592 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wwdq\" (UniqueName: \"kubernetes.io/projected/83cc5fe8-7965-46aa-b846-33d1b8d317f8-kube-api-access-2wwdq\") pod \"console-f9d7485db-gp4nv\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.145177 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9b4k\" (UniqueName: \"kubernetes.io/projected/d1346d7f-25da-4035-9c88-1f96c034d795-kube-api-access-j9b4k\") pod \"openshift-config-operator-7777fb866f-ttnrg\" (UID: \"d1346d7f-25da-4035-9c88-1f96c034d795\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.157014 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r6fj\" (UniqueName: \"kubernetes.io/projected/5a386557-0e05-4f84-b5fc-a389083d2743-kube-api-access-2r6fj\") pod \"openshift-controller-manager-operator-756b6f6bc6-n65sj\" (UID: \"5a386557-0e05-4f84-b5fc-a389083d2743\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n65sj" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.177702 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzsbm\" (UniqueName: \"kubernetes.io/projected/33210b82-c473-4bf8-b40d-a29b00833ea0-kube-api-access-rzsbm\") pod \"controller-manager-879f6c89f-rdzrw\" (UID: \"33210b82-c473-4bf8-b40d-a29b00833ea0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.194143 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sktbs\" (UniqueName: \"kubernetes.io/projected/0f4287bc-c7a7-4ee2-8212-3611b978e2e8-kube-api-access-sktbs\") pod \"route-controller-manager-6576b87f9c-knkww\" (UID: \"0f4287bc-c7a7-4ee2-8212-3611b978e2e8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.213677 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8dt8\" (UniqueName: \"kubernetes.io/projected/c61fa19c-7742-4ab1-b3ca-9607723fe94d-kube-api-access-k8dt8\") pod \"apiserver-7bbb656c7d-pwk76\" (UID: \"c61fa19c-7742-4ab1-b3ca-9607723fe94d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.233526 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww7vr\" (UniqueName: \"kubernetes.io/projected/a8d26ab0-33c3-4eb7-928b-ffba996579d9-kube-api-access-ww7vr\") pod \"downloads-7954f5f757-l8ckt\" (UID: \"a8d26ab0-33c3-4eb7-928b-ffba996579d9\") " pod="openshift-console/downloads-7954f5f757-l8ckt" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.234266 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.243687 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9gw75"] Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.255694 4830 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.279049 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.296496 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.315596 4830 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.328034 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-gp4nv"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.334908 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.335013 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.348089 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.351254 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.370256 4830 request.go:700] Waited for 1.849566239s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&limit=500&resourceVersion=0
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.370362 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-l8ckt"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.388848 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.389026 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.395578 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.415555 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.428165 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lpktp"]
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.436790 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.439685 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n65sj"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.455204 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.496912 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dwgd\" (UniqueName: \"kubernetes.io/projected/bf986437-9998-4cd1-90b8-b2e0716e8d37-kube-api-access-5dwgd\") pod \"router-default-5444994796-vbcgc\" (UID: \"bf986437-9998-4cd1-90b8-b2e0716e8d37\") " pod="openshift-ingress/router-default-5444994796-vbcgc"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.505784 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.532603 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrsbs\" (UniqueName: \"kubernetes.io/projected/2a94efc3-19bc-47ce-b48a-4f4b3351d955-kube-api-access-xrsbs\") pod \"apiserver-76f77b778f-htl5l\" (UID: \"2a94efc3-19bc-47ce-b48a-4f4b3351d955\") " pod="openshift-apiserver/apiserver-76f77b778f-htl5l"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.545128 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjv7v\" (UniqueName: \"kubernetes.io/projected/93359d96-ca07-4f0c-8b0a-a23f1635dcb1-kube-api-access-xjv7v\") pod \"service-ca-9c57cc56f-kcsj5\" (UID: \"93359d96-ca07-4f0c-8b0a-a23f1635dcb1\") " pod="openshift-service-ca/service-ca-9c57cc56f-kcsj5"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.562279 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq96w\" (UniqueName: \"kubernetes.io/projected/ce03ae75-703f-4d6a-b98a-e866689b08e3-kube-api-access-xq96w\") pod \"control-plane-machine-set-operator-78cbb6b69f-qtqdv\" (UID: \"ce03ae75-703f-4d6a-b98a-e866689b08e3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtqdv"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.590122 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bts25\" (UniqueName: \"kubernetes.io/projected/cef36034-4148-4107-9c32-4b75ac7046b5-kube-api-access-bts25\") pod \"ingress-operator-5b745b69d9-47wc2\" (UID: \"cef36034-4148-4107-9c32-4b75ac7046b5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47wc2"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.609671 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96ad696c-eaac-4e34-a986-d31a24d8d7bb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-n5blr\" (UID: \"96ad696c-eaac-4e34-a986-d31a24d8d7bb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n5blr"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.614104 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs9w7\" (UniqueName: \"kubernetes.io/projected/5fe5bd86-a665-4a73-8892-fd12a784463d-kube-api-access-cs9w7\") pod \"oauth-openshift-558db77b4-hzk7b\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.617084 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtqdv"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.618823 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rdzrw"]
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.655422 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cef36034-4148-4107-9c32-4b75ac7046b5-bound-sa-token\") pod \"ingress-operator-5b745b69d9-47wc2\" (UID: \"cef36034-4148-4107-9c32-4b75ac7046b5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47wc2"
Jan 31 09:03:27 crc kubenswrapper[4830]: W0131 09:03:27.664506 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33210b82_c473_4bf8_b40d_a29b00833ea0.slice/crio-73db3496020c41a6d9c43cdd7a272c7e7b198ce445c18806d1b24285a64d36df WatchSource:0}: Error finding container 73db3496020c41a6d9c43cdd7a272c7e7b198ce445c18806d1b24285a64d36df: Status 404 returned error can't find the container with id 73db3496020c41a6d9c43cdd7a272c7e7b198ce445c18806d1b24285a64d36df
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.672886 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww"]
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.693832 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg"]
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.696263 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-gp4nv"]
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.710398 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5e84919-6083-4967-aced-6e3e10b7e69d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5blhw\" (UID: \"d5e84919-6083-4967-aced-6e3e10b7e69d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5blhw"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.710556 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqr7j\" (UniqueName: \"kubernetes.io/projected/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-kube-api-access-zqr7j\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.710662 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc99ac19-2796-495d-82d4-6eda76879f40-config\") pod \"machine-api-operator-5694c8668f-8nn2k\" (UID: \"bc99ac19-2796-495d-82d4-6eda76879f40\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8nn2k"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.710934 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqlzk\" (UniqueName: \"kubernetes.io/projected/23da5bb2-d0a9-4b0d-8755-ea8e58234b18-kube-api-access-dqlzk\") pod \"migrator-59844c95c7-9d827\" (UID: \"23da5bb2-d0a9-4b0d-8755-ea8e58234b18\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9d827"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.711086 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrw5t\" (UniqueName: \"kubernetes.io/projected/691a8aff-6fcd-400a-ace9-fb3fa8778206-kube-api-access-lrw5t\") pod \"console-operator-58897d9998-pkx9p\" (UID: \"691a8aff-6fcd-400a-ace9-fb3fa8778206\") " pod="openshift-console-operator/console-operator-58897d9998-pkx9p"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.711122 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/90237d04-95e8-4523-bd3d-bc8cedfc0f5f-proxy-tls\") pod \"machine-config-operator-74547568cd-8wdp6\" (UID: \"90237d04-95e8-4523-bd3d-bc8cedfc0f5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8wdp6"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.711423 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12-serving-cert\") pod \"etcd-operator-b45778765-26msj\" (UID: \"7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12\") " pod="openshift-etcd-operator/etcd-operator-b45778765-26msj"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.711593 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlkzg\" (UniqueName: \"kubernetes.io/projected/21ee0584-e383-47cf-af98-48c65d9fba74-kube-api-access-jlkzg\") pod \"dns-operator-744455d44c-2klp9\" (UID: \"21ee0584-e383-47cf-af98-48c65d9fba74\") " pod="openshift-dns-operator/dns-operator-744455d44c-2klp9"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.711780 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bc99ac19-2796-495d-82d4-6eda76879f40-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-8nn2k\" (UID: \"bc99ac19-2796-495d-82d4-6eda76879f40\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8nn2k"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.711816 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-registry-tls\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.712045 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzjvd\" (UniqueName: \"kubernetes.io/projected/bc99ac19-2796-495d-82d4-6eda76879f40-kube-api-access-rzjvd\") pod \"machine-api-operator-5694c8668f-8nn2k\" (UID: \"bc99ac19-2796-495d-82d4-6eda76879f40\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8nn2k"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.712107 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/129959a3-2970-4a90-b833-20ae33af36ba-cert\") pod \"ingress-canary-9rz4w\" (UID: \"129959a3-2970-4a90-b833-20ae33af36ba\") " pod="openshift-ingress-canary/ingress-canary-9rz4w"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.712355 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/691a8aff-6fcd-400a-ace9-fb3fa8778206-config\") pod \"console-operator-58897d9998-pkx9p\" (UID: \"691a8aff-6fcd-400a-ace9-fb3fa8778206\") " pod="openshift-console-operator/console-operator-58897d9998-pkx9p"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.712544 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kslh\" (UniqueName: \"kubernetes.io/projected/90237d04-95e8-4523-bd3d-bc8cedfc0f5f-kube-api-access-8kslh\") pod \"machine-config-operator-74547568cd-8wdp6\" (UID: \"90237d04-95e8-4523-bd3d-bc8cedfc0f5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8wdp6"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.712583 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-trusted-ca\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.712639 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24s52\" (UniqueName: \"kubernetes.io/projected/a32cf0f1-bf23-4522-a619-71d1b1dab082-kube-api-access-24s52\") pod \"cluster-image-registry-operator-dc59b4c8b-ngd6n\" (UID: \"a32cf0f1-bf23-4522-a619-71d1b1dab082\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ngd6n"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.712672 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12-etcd-ca\") pod \"etcd-operator-b45778765-26msj\" (UID: \"7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12\") " pod="openshift-etcd-operator/etcd-operator-b45778765-26msj"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.712917 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90237d04-95e8-4523-bd3d-bc8cedfc0f5f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8wdp6\" (UID: \"90237d04-95e8-4523-bd3d-bc8cedfc0f5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8wdp6"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.712981 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-bound-sa-token\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.713242 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ab6816a-65ef-41b2-b416-60491d4423d9-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vjnc8\" (UID: \"2ab6816a-65ef-41b2-b416-60491d4423d9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vjnc8"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.713283 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z4j6\" (UniqueName: \"kubernetes.io/projected/d5e84919-6083-4967-aced-6e3e10b7e69d-kube-api-access-7z4j6\") pod \"kube-storage-version-migrator-operator-b67b599dd-5blhw\" (UID: \"d5e84919-6083-4967-aced-6e3e10b7e69d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5blhw"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.713330 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a32cf0f1-bf23-4522-a619-71d1b1dab082-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ngd6n\" (UID: \"a32cf0f1-bf23-4522-a619-71d1b1dab082\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ngd6n"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.713366 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z997\" (UniqueName: \"kubernetes.io/projected/129959a3-2970-4a90-b833-20ae33af36ba-kube-api-access-6z997\") pod \"ingress-canary-9rz4w\" (UID: \"129959a3-2970-4a90-b833-20ae33af36ba\") " pod="openshift-ingress-canary/ingress-canary-9rz4w"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.713514 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a32cf0f1-bf23-4522-a619-71d1b1dab082-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ngd6n\" (UID: \"a32cf0f1-bf23-4522-a619-71d1b1dab082\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ngd6n"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.713618 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.713654 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12-etcd-client\") pod \"etcd-operator-b45778765-26msj\" (UID: \"7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12\") " pod="openshift-etcd-operator/etcd-operator-b45778765-26msj"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.713802 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/691a8aff-6fcd-400a-ace9-fb3fa8778206-trusted-ca\") pod \"console-operator-58897d9998-pkx9p\" (UID: \"691a8aff-6fcd-400a-ace9-fb3fa8778206\") " pod="openshift-console-operator/console-operator-58897d9998-pkx9p"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.714017 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12-etcd-service-ca\") pod \"etcd-operator-b45778765-26msj\" (UID: \"7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12\") " pod="openshift-etcd-operator/etcd-operator-b45778765-26msj"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.714056 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppb98\" (UniqueName: \"kubernetes.io/projected/7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12-kube-api-access-ppb98\") pod \"etcd-operator-b45778765-26msj\" (UID: \"7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12\") " pod="openshift-etcd-operator/etcd-operator-b45778765-26msj"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.717062 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.717172 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-registry-certificates\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.717248 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/90237d04-95e8-4523-bd3d-bc8cedfc0f5f-images\") pod \"machine-config-operator-74547568cd-8wdp6\" (UID: \"90237d04-95e8-4523-bd3d-bc8cedfc0f5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8wdp6"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.717289 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ab6816a-65ef-41b2-b416-60491d4423d9-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vjnc8\" (UID: \"2ab6816a-65ef-41b2-b416-60491d4423d9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vjnc8"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.717585 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5e84919-6083-4967-aced-6e3e10b7e69d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5blhw\" (UID: \"d5e84919-6083-4967-aced-6e3e10b7e69d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5blhw"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.717645 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.717686 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12-config\") pod \"etcd-operator-b45778765-26msj\" (UID: \"7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12\") " pod="openshift-etcd-operator/etcd-operator-b45778765-26msj"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.718685 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a32cf0f1-bf23-4522-a619-71d1b1dab082-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ngd6n\" (UID: \"a32cf0f1-bf23-4522-a619-71d1b1dab082\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ngd6n"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.718900 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.722211 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/21ee0584-e383-47cf-af98-48c65d9fba74-metrics-tls\") pod \"dns-operator-744455d44c-2klp9\" (UID: \"21ee0584-e383-47cf-af98-48c65d9fba74\") " pod="openshift-dns-operator/dns-operator-744455d44c-2klp9"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.722249 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bc99ac19-2796-495d-82d4-6eda76879f40-images\") pod \"machine-api-operator-5694c8668f-8nn2k\" (UID: \"bc99ac19-2796-495d-82d4-6eda76879f40\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8nn2k"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.722539 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab6816a-65ef-41b2-b416-60491d4423d9-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vjnc8\" (UID: \"2ab6816a-65ef-41b2-b416-60491d4423d9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vjnc8"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.722598 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/691a8aff-6fcd-400a-ace9-fb3fa8778206-serving-cert\") pod \"console-operator-58897d9998-pkx9p\" (UID: \"691a8aff-6fcd-400a-ace9-fb3fa8778206\") " pod="openshift-console-operator/console-operator-58897d9998-pkx9p"
Jan 31 09:03:27 crc kubenswrapper[4830]: E0131 09:03:27.723227 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:28.223207981 +0000 UTC m=+152.716570423 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.738706 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-htl5l"
Jan 31 09:03:27 crc kubenswrapper[4830]: W0131 09:03:27.743677 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83cc5fe8_7965_46aa_b846_33d1b8d317f8.slice/crio-47b669fb814c76ba9b2b65afb80e4d3e6bf938ddb0db4eac8228e3a0dc714ec6 WatchSource:0}: Error finding container 47b669fb814c76ba9b2b65afb80e4d3e6bf938ddb0db4eac8228e3a0dc714ec6: Status 404 returned error can't find the container with id 47b669fb814c76ba9b2b65afb80e4d3e6bf938ddb0db4eac8228e3a0dc714ec6
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.743879 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-l8ckt"]
Jan 31 09:03:27 crc kubenswrapper[4830]: W0131 09:03:27.750034 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1346d7f_25da_4035_9c88_1f96c034d795.slice/crio-5ef50dc738b653bfd21350e73687236150116f82c7fd37a99b6090962ce4fcc6 WatchSource:0}: Error finding container 5ef50dc738b653bfd21350e73687236150116f82c7fd37a99b6090962ce4fcc6: Status 404 returned error can't find the container with id 5ef50dc738b653bfd21350e73687236150116f82c7fd37a99b6090962ce4fcc6
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.761867 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n5blr"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.767277 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n65sj"]
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.767717 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47wc2"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.787327 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-vbcgc"
Jan 31 09:03:27 crc kubenswrapper[4830]: W0131 09:03:27.804754 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a386557_0e05_4f84_b5fc_a389083d2743.slice/crio-41d92b38e9b6769989d71ee9a9c4ac96c87a1b4cc5b0530e48b9bcc3a3f4ece3 WatchSource:0}: Error finding container 41d92b38e9b6769989d71ee9a9c4ac96c87a1b4cc5b0530e48b9bcc3a3f4ece3: Status 404 returned error can't find the container with id 41d92b38e9b6769989d71ee9a9c4ac96c87a1b4cc5b0530e48b9bcc3a3f4ece3
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.821988 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-kcsj5"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.823480 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.823821 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc99ac19-2796-495d-82d4-6eda76879f40-config\") pod \"machine-api-operator-5694c8668f-8nn2k\" (UID: \"bc99ac19-2796-495d-82d4-6eda76879f40\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8nn2k"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.823867 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5nbz\" (UniqueName: \"kubernetes.io/projected/642f0efa-b4e9-45d2-a5c3-f53ff0fb7687-kube-api-access-k5nbz\") pod \"machine-config-server-bsxrt\" (UID: \"642f0efa-b4e9-45d2-a5c3-f53ff0fb7687\") " pod="openshift-machine-config-operator/machine-config-server-bsxrt"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.823926 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4phcv\" (UniqueName: \"kubernetes.io/projected/e80e8b17-711d-46d8-a240-4fa52e093545-kube-api-access-4phcv\") pod \"packageserver-d55dfcdfc-lp7ks\" (UID: \"e80e8b17-711d-46d8-a240-4fa52e093545\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.823956 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqlzk\" (UniqueName: \"kubernetes.io/projected/23da5bb2-d0a9-4b0d-8755-ea8e58234b18-kube-api-access-dqlzk\") pod \"migrator-59844c95c7-9d827\" (UID: \"23da5bb2-d0a9-4b0d-8755-ea8e58234b18\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9d827"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.823986 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cf057c5a-deef-4c01-bd58-f761ec86e2f4-srv-cert\") pod \"catalog-operator-68c6474976-n4rml\" (UID: \"cf057c5a-deef-4c01-bd58-f761ec86e2f4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.824043 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrw5t\" (UniqueName: \"kubernetes.io/projected/691a8aff-6fcd-400a-ace9-fb3fa8778206-kube-api-access-lrw5t\") pod \"console-operator-58897d9998-pkx9p\" (UID: \"691a8aff-6fcd-400a-ace9-fb3fa8778206\") " pod="openshift-console-operator/console-operator-58897d9998-pkx9p"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.824107 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/90237d04-95e8-4523-bd3d-bc8cedfc0f5f-proxy-tls\") pod \"machine-config-operator-74547568cd-8wdp6\" (UID: \"90237d04-95e8-4523-bd3d-bc8cedfc0f5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8wdp6"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.824153 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e33286d6-0e3e-47d4-bf68-11b642927bee-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-x8zjt\" (UID: \"e33286d6-0e3e-47d4-bf68-11b642927bee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x8zjt"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.824183 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e80e8b17-711d-46d8-a240-4fa52e093545-webhook-cert\") pod \"packageserver-d55dfcdfc-lp7ks\" (UID: \"e80e8b17-711d-46d8-a240-4fa52e093545\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks"
Jan 31 09:03:27 crc kubenswrapper[4830]: E0131 09:03:27.824802 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:28.324775451 +0000 UTC m=+152.818137893 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.825790 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12-serving-cert\") pod \"etcd-operator-b45778765-26msj\" (UID: \"7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12\") " pod="openshift-etcd-operator/etcd-operator-b45778765-26msj"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.825846 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlkzg\" (UniqueName: \"kubernetes.io/projected/21ee0584-e383-47cf-af98-48c65d9fba74-kube-api-access-jlkzg\") pod \"dns-operator-744455d44c-2klp9\" (UID: \"21ee0584-e383-47cf-af98-48c65d9fba74\") " pod="openshift-dns-operator/dns-operator-744455d44c-2klp9"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.825884 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qlkn\" (UniqueName: \"kubernetes.io/projected/ee16c0c8-3b38-4e29-b2dc-633b09648c2f-kube-api-access-7qlkn\") pod \"machine-config-controller-84d6567774-skqcc\" (UID: \"ee16c0c8-3b38-4e29-b2dc-633b09648c2f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-skqcc"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.825907 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc74377f-6986-4156-9c2b-7a003f07d6ff-config-volume\") pod \"collect-profiles-29497500-66dl8\" (UID: \"dc74377f-6986-4156-9c2b-7a003f07d6ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-66dl8"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.825945 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bc99ac19-2796-495d-82d4-6eda76879f40-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-8nn2k\" (UID: \"bc99ac19-2796-495d-82d4-6eda76879f40\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8nn2k"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.825968 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e80e8b17-711d-46d8-a240-4fa52e093545-tmpfs\") pod \"packageserver-d55dfcdfc-lp7ks\" (UID: \"e80e8b17-711d-46d8-a240-4fa52e093545\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.827462 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-registry-tls\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.828428 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cf057c5a-deef-4c01-bd58-f761ec86e2f4-profile-collector-cert\") pod \"catalog-operator-68c6474976-n4rml\" (UID: \"cf057c5a-deef-4c01-bd58-f761ec86e2f4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.828561 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzjvd\" (UniqueName: \"kubernetes.io/projected/bc99ac19-2796-495d-82d4-6eda76879f40-kube-api-access-rzjvd\") pod \"machine-api-operator-5694c8668f-8nn2k\" (UID: \"bc99ac19-2796-495d-82d4-6eda76879f40\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8nn2k"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.830114 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/30c7c034-9492-4051-9cc9-235a6d87bd03-socket-dir\") pod \"csi-hostpathplugin-rhvlq\" (UID: \"30c7c034-9492-4051-9cc9-235a6d87bd03\") " pod="hostpath-provisioner/csi-hostpathplugin-rhvlq"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.831602 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/129959a3-2970-4a90-b833-20ae33af36ba-cert\") pod \"ingress-canary-9rz4w\" (UID: \"129959a3-2970-4a90-b833-20ae33af36ba\") " pod="openshift-ingress-canary/ingress-canary-9rz4w"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.831647 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/691a8aff-6fcd-400a-ace9-fb3fa8778206-config\") pod \"console-operator-58897d9998-pkx9p\" (UID: \"691a8aff-6fcd-400a-ace9-fb3fa8778206\") " pod="openshift-console-operator/console-operator-58897d9998-pkx9p"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.831673 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kslh\" (UniqueName: \"kubernetes.io/projected/90237d04-95e8-4523-bd3d-bc8cedfc0f5f-kube-api-access-8kslh\") pod \"machine-config-operator-74547568cd-8wdp6\" (UID: \"90237d04-95e8-4523-bd3d-bc8cedfc0f5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8wdp6"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.831699 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnqcc\" (UniqueName: \"kubernetes.io/projected/dc74377f-6986-4156-9c2b-7a003f07d6ff-kube-api-access-vnqcc\") pod \"collect-profiles-29497500-66dl8\" (UID: \"dc74377f-6986-4156-9c2b-7a003f07d6ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-66dl8"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.831760 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-trusted-ca\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.831785 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-registry-tls\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.831787 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjbhq\" (UniqueName: \"kubernetes.io/projected/b9a73d68-2213-477c-a55d-91c86b7ce674-kube-api-access-sjbhq\") pod \"multus-admission-controller-857f4d67dd-xwn99\" (UID: \"b9a73d68-2213-477c-a55d-91c86b7ce674\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xwn99"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.831898 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12-etcd-ca\") pod \"etcd-operator-b45778765-26msj\" (UID: \"7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12\") " pod="openshift-etcd-operator/etcd-operator-b45778765-26msj"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.831940 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/30c7c034-9492-4051-9cc9-235a6d87bd03-registration-dir\") pod \"csi-hostpathplugin-rhvlq\" (UID: \"30c7c034-9492-4051-9cc9-235a6d87bd03\") " pod="hostpath-provisioner/csi-hostpathplugin-rhvlq"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.831990 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24s52\" (UniqueName: \"kubernetes.io/projected/a32cf0f1-bf23-4522-a619-71d1b1dab082-kube-api-access-24s52\") pod \"cluster-image-registry-operator-dc59b4c8b-ngd6n\" (UID: \"a32cf0f1-bf23-4522-a619-71d1b1dab082\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ngd6n"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.832032 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/30c7c034-9492-4051-9cc9-235a6d87bd03-csi-data-dir\") pod \"csi-hostpathplugin-rhvlq\" (UID: \"30c7c034-9492-4051-9cc9-235a6d87bd03\") " pod="hostpath-provisioner/csi-hostpathplugin-rhvlq"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.832055 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90237d04-95e8-4523-bd3d-bc8cedfc0f5f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8wdp6\" (UID: \"90237d04-95e8-4523-bd3d-bc8cedfc0f5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8wdp6"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.832076 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/642f0efa-b4e9-45d2-a5c3-f53ff0fb7687-certs\") pod \"machine-config-server-bsxrt\" (UID: \"642f0efa-b4e9-45d2-a5c3-f53ff0fb7687\") " pod="openshift-machine-config-operator/machine-config-server-bsxrt"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.832152 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/13f1c33b-cede-4fb1-9651-15d0dcd36173-srv-cert\") pod \"olm-operator-6b444d44fb-lb8hp\" (UID: \"13f1c33b-cede-4fb1-9651-15d0dcd36173\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.832194 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-bound-sa-token\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.832218 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ab6816a-65ef-41b2-b416-60491d4423d9-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vjnc8\" (UID: \"2ab6816a-65ef-41b2-b416-60491d4423d9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vjnc8"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.832239 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z4j6\" (UniqueName: \"kubernetes.io/projected/d5e84919-6083-4967-aced-6e3e10b7e69d-kube-api-access-7z4j6\") pod \"kube-storage-version-migrator-operator-b67b599dd-5blhw\" (UID: \"d5e84919-6083-4967-aced-6e3e10b7e69d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5blhw"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.832261 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36a7a51a-2662-4f3b-aa1d-d674cf676b9d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fnk7f\" (UID: \"36a7a51a-2662-4f3b-aa1d-d674cf676b9d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.832269 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc99ac19-2796-495d-82d4-6eda76879f40-config\") pod \"machine-api-operator-5694c8668f-8nn2k\" (UID: \"bc99ac19-2796-495d-82d4-6eda76879f40\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8nn2k"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.832285 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a32cf0f1-bf23-4522-a619-71d1b1dab082-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ngd6n\" (UID: \"a32cf0f1-bf23-4522-a619-71d1b1dab082\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ngd6n"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.832388 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/13f1c33b-cede-4fb1-9651-15d0dcd36173-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lb8hp\" (UID: \"13f1c33b-cede-4fb1-9651-15d0dcd36173\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.832456 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a32cf0f1-bf23-4522-a619-71d1b1dab082-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ngd6n\" (UID: \"a32cf0f1-bf23-4522-a619-71d1b1dab082\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ngd6n"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.832489 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12-etcd-client\") pod \"etcd-operator-b45778765-26msj\" (UID: \"7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12\") " pod="openshift-etcd-operator/etcd-operator-b45778765-26msj"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.832520 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z997\" (UniqueName: \"kubernetes.io/projected/129959a3-2970-4a90-b833-20ae33af36ba-kube-api-access-6z997\") pod \"ingress-canary-9rz4w\" (UID: \"129959a3-2970-4a90-b833-20ae33af36ba\") " pod="openshift-ingress-canary/ingress-canary-9rz4w"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.832565 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/691a8aff-6fcd-400a-ace9-fb3fa8778206-trusted-ca\") pod \"console-operator-58897d9998-pkx9p\" (UID: \"691a8aff-6fcd-400a-ace9-fb3fa8778206\") " pod="openshift-console-operator/console-operator-58897d9998-pkx9p"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.832598 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9wqg\" (UniqueName: \"kubernetes.io/projected/7db6391f-ccc4-41d2-82ff-aa58d3297625-kube-api-access-d9wqg\") pod \"service-ca-operator-777779d784-bs7f7\" (UID: \"7db6391f-ccc4-41d2-82ff-aa58d3297625\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bs7f7"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.832638 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12-etcd-service-ca\") pod \"etcd-operator-b45778765-26msj\" (UID: \"7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12\") " pod="openshift-etcd-operator/etcd-operator-b45778765-26msj"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.832663 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppb98\" (UniqueName: \"kubernetes.io/projected/7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12-kube-api-access-ppb98\") pod \"etcd-operator-b45778765-26msj\" (UID: \"7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12\") " pod="openshift-etcd-operator/etcd-operator-b45778765-26msj"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.832741 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.832772 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktx44\" (UniqueName: \"kubernetes.io/projected/13f1c33b-cede-4fb1-9651-15d0dcd36173-kube-api-access-ktx44\") pod \"olm-operator-6b444d44fb-lb8hp\" (UID: \"13f1c33b-cede-4fb1-9651-15d0dcd36173\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.832845 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7gjk\" (UniqueName: \"kubernetes.io/projected/cf057c5a-deef-4c01-bd58-f761ec86e2f4-kube-api-access-k7gjk\") pod \"catalog-operator-68c6474976-n4rml\" (UID: \"cf057c5a-deef-4c01-bd58-f761ec86e2f4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.832890 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-registry-certificates\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.832921 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhctp\" (UniqueName: \"kubernetes.io/projected/36a7a51a-2662-4f3b-aa1d-d674cf676b9d-kube-api-access-nhctp\") pod \"marketplace-operator-79b997595-fnk7f\" (UID: \"36a7a51a-2662-4f3b-aa1d-d674cf676b9d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.832982 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e33286d6-0e3e-47d4-bf68-11b642927bee-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-x8zjt\" (UID: \"e33286d6-0e3e-47d4-bf68-11b642927bee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x8zjt"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.833020 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/90237d04-95e8-4523-bd3d-bc8cedfc0f5f-images\") pod \"machine-config-operator-74547568cd-8wdp6\" (UID: \"90237d04-95e8-4523-bd3d-bc8cedfc0f5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8wdp6"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.833052 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ab6816a-65ef-41b2-b416-60491d4423d9-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vjnc8\" (UID: \"2ab6816a-65ef-41b2-b416-60491d4423d9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vjnc8"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.833074 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e80e8b17-711d-46d8-a240-4fa52e093545-apiservice-cert\") pod \"packageserver-d55dfcdfc-lp7ks\" (UID: \"e80e8b17-711d-46d8-a240-4fa52e093545\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.833133 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.833161 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5e84919-6083-4967-aced-6e3e10b7e69d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5blhw\" (UID: \"d5e84919-6083-4967-aced-6e3e10b7e69d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5blhw"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.833189 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee16c0c8-3b38-4e29-b2dc-633b09648c2f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-skqcc\" (UID: \"ee16c0c8-3b38-4e29-b2dc-633b09648c2f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-skqcc"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.833217 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ee16c0c8-3b38-4e29-b2dc-633b09648c2f-proxy-tls\") pod \"machine-config-controller-84d6567774-skqcc\" (UID: \"ee16c0c8-3b38-4e29-b2dc-633b09648c2f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-skqcc"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.833267 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12-config\") pod \"etcd-operator-b45778765-26msj\" (UID: \"7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12\") " pod="openshift-etcd-operator/etcd-operator-b45778765-26msj"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.833293 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7db6391f-ccc4-41d2-82ff-aa58d3297625-config\") pod \"service-ca-operator-777779d784-bs7f7\" (UID: \"7db6391f-ccc4-41d2-82ff-aa58d3297625\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bs7f7"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.833394 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b9a73d68-2213-477c-a55d-91c86b7ce674-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xwn99\" (UID: \"b9a73d68-2213-477c-a55d-91c86b7ce674\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xwn99"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.833442 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7db6391f-ccc4-41d2-82ff-aa58d3297625-serving-cert\") pod \"service-ca-operator-777779d784-bs7f7\" (UID: \"7db6391f-ccc4-41d2-82ff-aa58d3297625\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bs7f7"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.833465 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/30c7c034-9492-4051-9cc9-235a6d87bd03-mountpoint-dir\") pod \"csi-hostpathplugin-rhvlq\" (UID: \"30c7c034-9492-4051-9cc9-235a6d87bd03\") " pod="hostpath-provisioner/csi-hostpathplugin-rhvlq"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.833526 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a32cf0f1-bf23-4522-a619-71d1b1dab082-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ngd6n\" (UID: \"a32cf0f1-bf23-4522-a619-71d1b1dab082\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ngd6n"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.833577 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.833596 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/36a7a51a-2662-4f3b-aa1d-d674cf676b9d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fnk7f\" (UID: \"36a7a51a-2662-4f3b-aa1d-d674cf676b9d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.833617 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48cbaebb-5495-4965-a7fd-207d2d7ef0fc-config-volume\") pod \"dns-default-whjm4\" (UID: \"48cbaebb-5495-4965-a7fd-207d2d7ef0fc\") " pod="openshift-dns/dns-default-whjm4"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.833665 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bc99ac19-2796-495d-82d4-6eda76879f40-images\") pod \"machine-api-operator-5694c8668f-8nn2k\" (UID: \"bc99ac19-2796-495d-82d4-6eda76879f40\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8nn2k"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.833704 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/30c7c034-9492-4051-9cc9-235a6d87bd03-plugins-dir\") pod \"csi-hostpathplugin-rhvlq\" (UID: \"30c7c034-9492-4051-9cc9-235a6d87bd03\") " pod="hostpath-provisioner/csi-hostpathplugin-rhvlq"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.834391 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/691a8aff-6fcd-400a-ace9-fb3fa8778206-config\") pod \"console-operator-58897d9998-pkx9p\" (UID: \"691a8aff-6fcd-400a-ace9-fb3fa8778206\") " pod="openshift-console-operator/console-operator-58897d9998-pkx9p"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.833268 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12-etcd-ca\") pod \"etcd-operator-b45778765-26msj\" (UID: \"7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12\") " pod="openshift-etcd-operator/etcd-operator-b45778765-26msj"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.835525 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5e84919-6083-4967-aced-6e3e10b7e69d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5blhw\" (UID: \"d5e84919-6083-4967-aced-6e3e10b7e69d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5blhw"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.836231 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/90237d04-95e8-4523-bd3d-bc8cedfc0f5f-images\") pod \"machine-config-operator-74547568cd-8wdp6\" (UID: \"90237d04-95e8-4523-bd3d-bc8cedfc0f5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8wdp6"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.836860 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12-serving-cert\") pod \"etcd-operator-b45778765-26msj\" (UID: \"7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12\") " pod="openshift-etcd-operator/etcd-operator-b45778765-26msj"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.837083 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/691a8aff-6fcd-400a-ace9-fb3fa8778206-trusted-ca\") pod \"console-operator-58897d9998-pkx9p\" (UID: \"691a8aff-6fcd-400a-ace9-fb3fa8778206\") " pod="openshift-console-operator/console-operator-58897d9998-pkx9p"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.837112 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12-etcd-service-ca\") pod \"etcd-operator-b45778765-26msj\" (UID: \"7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12\") " pod="openshift-etcd-operator/etcd-operator-b45778765-26msj"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.837242 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.837771 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12-config\") pod \"etcd-operator-b45778765-26msj\" (UID: \"7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12\") " pod="openshift-etcd-operator/etcd-operator-b45778765-26msj"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.838100 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a32cf0f1-bf23-4522-a619-71d1b1dab082-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ngd6n\" (UID: \"a32cf0f1-bf23-4522-a619-71d1b1dab082\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ngd6n"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.834580 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/90237d04-95e8-4523-bd3d-bc8cedfc0f5f-proxy-tls\") pod \"machine-config-operator-74547568cd-8wdp6\" (UID: \"90237d04-95e8-4523-bd3d-bc8cedfc0f5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8wdp6"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.838309 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90237d04-95e8-4523-bd3d-bc8cedfc0f5f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8wdp6\" (UID: \"90237d04-95e8-4523-bd3d-bc8cedfc0f5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8wdp6"
Jan 31 09:03:27 crc kubenswrapper[4830]: E0131 09:03:27.838544 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:28.338524982 +0000 UTC m=+152.831887424 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.839365 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/21ee0584-e383-47cf-af98-48c65d9fba74-metrics-tls\") pod \"dns-operator-744455d44c-2klp9\" (UID: \"21ee0584-e383-47cf-af98-48c65d9fba74\") " pod="openshift-dns-operator/dns-operator-744455d44c-2klp9"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.840157 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e33286d6-0e3e-47d4-bf68-11b642927bee-config\") pod \"kube-apiserver-operator-766d6c64bb-x8zjt\" (UID: \"e33286d6-0e3e-47d4-bf68-11b642927bee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x8zjt"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.839648 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-registry-certificates\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.840095 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/129959a3-2970-4a90-b833-20ae33af36ba-cert\") pod \"ingress-canary-9rz4w\" (UID: \"129959a3-2970-4a90-b833-20ae33af36ba\") " pod="openshift-ingress-canary/ingress-canary-9rz4w"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.839509 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bc99ac19-2796-495d-82d4-6eda76879f40-images\") pod \"machine-api-operator-5694c8668f-8nn2k\" (UID: \"bc99ac19-2796-495d-82d4-6eda76879f40\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8nn2k"
Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.840553 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ab6816a-65ef-41b2-b416-60491d4423d9-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vjnc8\" (UID: \"2ab6816a-65ef-41b2-b416-60491d4423d9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vjnc8"
Jan 31 09:03:27 crc
kubenswrapper[4830]: I0131 09:03:27.840648 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab6816a-65ef-41b2-b416-60491d4423d9-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vjnc8\" (UID: \"2ab6816a-65ef-41b2-b416-60491d4423d9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vjnc8" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.840796 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/691a8aff-6fcd-400a-ace9-fb3fa8778206-serving-cert\") pod \"console-operator-58897d9998-pkx9p\" (UID: \"691a8aff-6fcd-400a-ace9-fb3fa8778206\") " pod="openshift-console-operator/console-operator-58897d9998-pkx9p" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.841607 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab6816a-65ef-41b2-b416-60491d4423d9-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vjnc8\" (UID: \"2ab6816a-65ef-41b2-b416-60491d4423d9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vjnc8" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.842261 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc74377f-6986-4156-9c2b-7a003f07d6ff-secret-volume\") pod \"collect-profiles-29497500-66dl8\" (UID: \"dc74377f-6986-4156-9c2b-7a003f07d6ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-66dl8" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.842632 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgrjx\" (UniqueName: \"kubernetes.io/projected/007a4117-0dfe-485e-85df-6bc68e0cee5e-kube-api-access-hgrjx\") pod \"package-server-manager-789f6589d5-ckvgq\" (UID: \"007a4117-0dfe-485e-85df-6bc68e0cee5e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.842745 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/48cbaebb-5495-4965-a7fd-207d2d7ef0fc-metrics-tls\") pod \"dns-default-whjm4\" (UID: \"48cbaebb-5495-4965-a7fd-207d2d7ef0fc\") " pod="openshift-dns/dns-default-whjm4" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.842840 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5e84919-6083-4967-aced-6e3e10b7e69d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5blhw\" (UID: \"d5e84919-6083-4967-aced-6e3e10b7e69d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5blhw" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.842880 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk5vf\" (UniqueName: \"kubernetes.io/projected/48cbaebb-5495-4965-a7fd-207d2d7ef0fc-kube-api-access-bk5vf\") pod \"dns-default-whjm4\" (UID: \"48cbaebb-5495-4965-a7fd-207d2d7ef0fc\") " pod="openshift-dns/dns-default-whjm4" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.842885 4830 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bc99ac19-2796-495d-82d4-6eda76879f40-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-8nn2k\" (UID: \"bc99ac19-2796-495d-82d4-6eda76879f40\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8nn2k" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.842914 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqr7j\" (UniqueName: \"kubernetes.io/projected/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-kube-api-access-zqr7j\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.842946 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/007a4117-0dfe-485e-85df-6bc68e0cee5e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ckvgq\" (UID: \"007a4117-0dfe-485e-85df-6bc68e0cee5e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.842965 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/642f0efa-b4e9-45d2-a5c3-f53ff0fb7687-node-bootstrap-token\") pod \"machine-config-server-bsxrt\" (UID: \"642f0efa-b4e9-45d2-a5c3-f53ff0fb7687\") " pod="openshift-machine-config-operator/machine-config-server-bsxrt" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.843029 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmrmn\" (UniqueName: \"kubernetes.io/projected/30c7c034-9492-4051-9cc9-235a6d87bd03-kube-api-access-dmrmn\") pod \"csi-hostpathplugin-rhvlq\" (UID: \"30c7c034-9492-4051-9cc9-235a6d87bd03\") " pod="hostpath-provisioner/csi-hostpathplugin-rhvlq" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.843097 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/21ee0584-e383-47cf-af98-48c65d9fba74-metrics-tls\") pod \"dns-operator-744455d44c-2klp9\" (UID: \"21ee0584-e383-47cf-af98-48c65d9fba74\") " pod="openshift-dns-operator/dns-operator-744455d44c-2klp9" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.843366 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12-etcd-client\") pod \"etcd-operator-b45778765-26msj\" (UID: \"7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12\") " pod="openshift-etcd-operator/etcd-operator-b45778765-26msj" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.838719 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-trusted-ca\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.846265 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-installation-pull-secrets\") pod 
\"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.847495 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a32cf0f1-bf23-4522-a619-71d1b1dab082-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ngd6n\" (UID: \"a32cf0f1-bf23-4522-a619-71d1b1dab082\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ngd6n" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.857140 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/691a8aff-6fcd-400a-ace9-fb3fa8778206-serving-cert\") pod \"console-operator-58897d9998-pkx9p\" (UID: \"691a8aff-6fcd-400a-ace9-fb3fa8778206\") " pod="openshift-console-operator/console-operator-58897d9998-pkx9p" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.858916 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrw5t\" (UniqueName: \"kubernetes.io/projected/691a8aff-6fcd-400a-ace9-fb3fa8778206-kube-api-access-lrw5t\") pod \"console-operator-58897d9998-pkx9p\" (UID: \"691a8aff-6fcd-400a-ace9-fb3fa8778206\") " pod="openshift-console-operator/console-operator-58897d9998-pkx9p" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.864265 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5e84919-6083-4967-aced-6e3e10b7e69d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5blhw\" (UID: \"d5e84919-6083-4967-aced-6e3e10b7e69d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5blhw" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.874962 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqlzk\" (UniqueName: \"kubernetes.io/projected/23da5bb2-d0a9-4b0d-8755-ea8e58234b18-kube-api-access-dqlzk\") pod \"migrator-59844c95c7-9d827\" (UID: \"23da5bb2-d0a9-4b0d-8755-ea8e58234b18\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9d827" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.892502 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76"] Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.893984 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlkzg\" (UniqueName: \"kubernetes.io/projected/21ee0584-e383-47cf-af98-48c65d9fba74-kube-api-access-jlkzg\") pod \"dns-operator-744455d44c-2klp9\" (UID: \"21ee0584-e383-47cf-af98-48c65d9fba74\") " pod="openshift-dns-operator/dns-operator-744455d44c-2klp9" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.944656 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.944891 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e33286d6-0e3e-47d4-bf68-11b642927bee-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-x8zjt\" (UID: \"e33286d6-0e3e-47d4-bf68-11b642927bee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x8zjt" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.944916 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e80e8b17-711d-46d8-a240-4fa52e093545-apiservice-cert\") pod \"packageserver-d55dfcdfc-lp7ks\" (UID: \"e80e8b17-711d-46d8-a240-4fa52e093545\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.944938 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee16c0c8-3b38-4e29-b2dc-633b09648c2f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-skqcc\" (UID: \"ee16c0c8-3b38-4e29-b2dc-633b09648c2f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-skqcc" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.944972 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ee16c0c8-3b38-4e29-b2dc-633b09648c2f-proxy-tls\") pod \"machine-config-controller-84d6567774-skqcc\" (UID: \"ee16c0c8-3b38-4e29-b2dc-633b09648c2f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-skqcc" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.944995 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7db6391f-ccc4-41d2-82ff-aa58d3297625-config\") pod \"service-ca-operator-777779d784-bs7f7\" (UID: \"7db6391f-ccc4-41d2-82ff-aa58d3297625\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bs7f7" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945013 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b9a73d68-2213-477c-a55d-91c86b7ce674-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xwn99\" (UID: \"b9a73d68-2213-477c-a55d-91c86b7ce674\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xwn99" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945030 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7db6391f-ccc4-41d2-82ff-aa58d3297625-serving-cert\") pod \"service-ca-operator-777779d784-bs7f7\" (UID: \"7db6391f-ccc4-41d2-82ff-aa58d3297625\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bs7f7" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945058 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/30c7c034-9492-4051-9cc9-235a6d87bd03-mountpoint-dir\") pod \"csi-hostpathplugin-rhvlq\" (UID: \"30c7c034-9492-4051-9cc9-235a6d87bd03\") " pod="hostpath-provisioner/csi-hostpathplugin-rhvlq" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945080 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48cbaebb-5495-4965-a7fd-207d2d7ef0fc-config-volume\") pod \"dns-default-whjm4\" (UID: \"48cbaebb-5495-4965-a7fd-207d2d7ef0fc\") " 
pod="openshift-dns/dns-default-whjm4" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945100 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/36a7a51a-2662-4f3b-aa1d-d674cf676b9d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fnk7f\" (UID: \"36a7a51a-2662-4f3b-aa1d-d674cf676b9d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945118 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/30c7c034-9492-4051-9cc9-235a6d87bd03-plugins-dir\") pod \"csi-hostpathplugin-rhvlq\" (UID: \"30c7c034-9492-4051-9cc9-235a6d87bd03\") " pod="hostpath-provisioner/csi-hostpathplugin-rhvlq" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945138 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e33286d6-0e3e-47d4-bf68-11b642927bee-config\") pod \"kube-apiserver-operator-766d6c64bb-x8zjt\" (UID: \"e33286d6-0e3e-47d4-bf68-11b642927bee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x8zjt" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945157 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc74377f-6986-4156-9c2b-7a003f07d6ff-secret-volume\") pod \"collect-profiles-29497500-66dl8\" (UID: \"dc74377f-6986-4156-9c2b-7a003f07d6ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-66dl8" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945176 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgrjx\" (UniqueName: \"kubernetes.io/projected/007a4117-0dfe-485e-85df-6bc68e0cee5e-kube-api-access-hgrjx\") pod \"package-server-manager-789f6589d5-ckvgq\" (UID: \"007a4117-0dfe-485e-85df-6bc68e0cee5e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945191 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/48cbaebb-5495-4965-a7fd-207d2d7ef0fc-metrics-tls\") pod \"dns-default-whjm4\" (UID: \"48cbaebb-5495-4965-a7fd-207d2d7ef0fc\") " pod="openshift-dns/dns-default-whjm4" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945211 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk5vf\" (UniqueName: \"kubernetes.io/projected/48cbaebb-5495-4965-a7fd-207d2d7ef0fc-kube-api-access-bk5vf\") pod \"dns-default-whjm4\" (UID: \"48cbaebb-5495-4965-a7fd-207d2d7ef0fc\") " pod="openshift-dns/dns-default-whjm4" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945231 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/642f0efa-b4e9-45d2-a5c3-f53ff0fb7687-node-bootstrap-token\") pod \"machine-config-server-bsxrt\" (UID: \"642f0efa-b4e9-45d2-a5c3-f53ff0fb7687\") " pod="openshift-machine-config-operator/machine-config-server-bsxrt" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945256 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/007a4117-0dfe-485e-85df-6bc68e0cee5e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ckvgq\" (UID: \"007a4117-0dfe-485e-85df-6bc68e0cee5e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945280 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmrmn\" (UniqueName: \"kubernetes.io/projected/30c7c034-9492-4051-9cc9-235a6d87bd03-kube-api-access-dmrmn\") pod \"csi-hostpathplugin-rhvlq\" (UID: \"30c7c034-9492-4051-9cc9-235a6d87bd03\") " pod="hostpath-provisioner/csi-hostpathplugin-rhvlq" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945297 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5nbz\" (UniqueName: \"kubernetes.io/projected/642f0efa-b4e9-45d2-a5c3-f53ff0fb7687-kube-api-access-k5nbz\") pod \"machine-config-server-bsxrt\" (UID: \"642f0efa-b4e9-45d2-a5c3-f53ff0fb7687\") " pod="openshift-machine-config-operator/machine-config-server-bsxrt" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945316 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4phcv\" (UniqueName: \"kubernetes.io/projected/e80e8b17-711d-46d8-a240-4fa52e093545-kube-api-access-4phcv\") pod \"packageserver-d55dfcdfc-lp7ks\" (UID: \"e80e8b17-711d-46d8-a240-4fa52e093545\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945333 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cf057c5a-deef-4c01-bd58-f761ec86e2f4-srv-cert\") pod \"catalog-operator-68c6474976-n4rml\" (UID: \"cf057c5a-deef-4c01-bd58-f761ec86e2f4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945355 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e33286d6-0e3e-47d4-bf68-11b642927bee-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-x8zjt\" (UID: \"e33286d6-0e3e-47d4-bf68-11b642927bee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x8zjt" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945371 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e80e8b17-711d-46d8-a240-4fa52e093545-webhook-cert\") pod \"packageserver-d55dfcdfc-lp7ks\" (UID: \"e80e8b17-711d-46d8-a240-4fa52e093545\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945391 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qlkn\" (UniqueName: \"kubernetes.io/projected/ee16c0c8-3b38-4e29-b2dc-633b09648c2f-kube-api-access-7qlkn\") pod \"machine-config-controller-84d6567774-skqcc\" (UID: \"ee16c0c8-3b38-4e29-b2dc-633b09648c2f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-skqcc" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945409 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc74377f-6986-4156-9c2b-7a003f07d6ff-config-volume\") pod \"collect-profiles-29497500-66dl8\" (UID: 
\"dc74377f-6986-4156-9c2b-7a003f07d6ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-66dl8" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945426 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e80e8b17-711d-46d8-a240-4fa52e093545-tmpfs\") pod \"packageserver-d55dfcdfc-lp7ks\" (UID: \"e80e8b17-711d-46d8-a240-4fa52e093545\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945445 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cf057c5a-deef-4c01-bd58-f761ec86e2f4-profile-collector-cert\") pod \"catalog-operator-68c6474976-n4rml\" (UID: \"cf057c5a-deef-4c01-bd58-f761ec86e2f4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945502 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/30c7c034-9492-4051-9cc9-235a6d87bd03-socket-dir\") pod \"csi-hostpathplugin-rhvlq\" (UID: \"30c7c034-9492-4051-9cc9-235a6d87bd03\") " pod="hostpath-provisioner/csi-hostpathplugin-rhvlq" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945527 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnqcc\" (UniqueName: \"kubernetes.io/projected/dc74377f-6986-4156-9c2b-7a003f07d6ff-kube-api-access-vnqcc\") pod \"collect-profiles-29497500-66dl8\" (UID: \"dc74377f-6986-4156-9c2b-7a003f07d6ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-66dl8" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945546 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjbhq\" (UniqueName: \"kubernetes.io/projected/b9a73d68-2213-477c-a55d-91c86b7ce674-kube-api-access-sjbhq\") pod \"multus-admission-controller-857f4d67dd-xwn99\" (UID: \"b9a73d68-2213-477c-a55d-91c86b7ce674\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xwn99" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945571 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/30c7c034-9492-4051-9cc9-235a6d87bd03-registration-dir\") pod \"csi-hostpathplugin-rhvlq\" (UID: \"30c7c034-9492-4051-9cc9-235a6d87bd03\") " pod="hostpath-provisioner/csi-hostpathplugin-rhvlq" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945587 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/30c7c034-9492-4051-9cc9-235a6d87bd03-csi-data-dir\") pod \"csi-hostpathplugin-rhvlq\" (UID: \"30c7c034-9492-4051-9cc9-235a6d87bd03\") " pod="hostpath-provisioner/csi-hostpathplugin-rhvlq" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945603 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/642f0efa-b4e9-45d2-a5c3-f53ff0fb7687-certs\") pod \"machine-config-server-bsxrt\" (UID: \"642f0efa-b4e9-45d2-a5c3-f53ff0fb7687\") " pod="openshift-machine-config-operator/machine-config-server-bsxrt" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945623 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" 
(UniqueName: \"kubernetes.io/secret/13f1c33b-cede-4fb1-9651-15d0dcd36173-srv-cert\") pod \"olm-operator-6b444d44fb-lb8hp\" (UID: \"13f1c33b-cede-4fb1-9651-15d0dcd36173\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945656 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36a7a51a-2662-4f3b-aa1d-d674cf676b9d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fnk7f\" (UID: \"36a7a51a-2662-4f3b-aa1d-d674cf676b9d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945684 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/13f1c33b-cede-4fb1-9651-15d0dcd36173-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lb8hp\" (UID: \"13f1c33b-cede-4fb1-9651-15d0dcd36173\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945710 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9wqg\" (UniqueName: \"kubernetes.io/projected/7db6391f-ccc4-41d2-82ff-aa58d3297625-kube-api-access-d9wqg\") pod \"service-ca-operator-777779d784-bs7f7\" (UID: \"7db6391f-ccc4-41d2-82ff-aa58d3297625\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bs7f7" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945773 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktx44\" (UniqueName: \"kubernetes.io/projected/13f1c33b-cede-4fb1-9651-15d0dcd36173-kube-api-access-ktx44\") pod \"olm-operator-6b444d44fb-lb8hp\" (UID: \"13f1c33b-cede-4fb1-9651-15d0dcd36173\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945802 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7gjk\" (UniqueName: \"kubernetes.io/projected/cf057c5a-deef-4c01-bd58-f761ec86e2f4-kube-api-access-k7gjk\") pod \"catalog-operator-68c6474976-n4rml\" (UID: \"cf057c5a-deef-4c01-bd58-f761ec86e2f4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.945818 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhctp\" (UniqueName: \"kubernetes.io/projected/36a7a51a-2662-4f3b-aa1d-d674cf676b9d-kube-api-access-nhctp\") pod \"marketplace-operator-79b997595-fnk7f\" (UID: \"36a7a51a-2662-4f3b-aa1d-d674cf676b9d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f" Jan 31 09:03:27 crc kubenswrapper[4830]: E0131 09:03:27.946081 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:28.446065637 +0000 UTC m=+152.939428079 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.950271 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e80e8b17-711d-46d8-a240-4fa52e093545-apiservice-cert\") pod \"packageserver-d55dfcdfc-lp7ks\" (UID: \"e80e8b17-711d-46d8-a240-4fa52e093545\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.950361 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee16c0c8-3b38-4e29-b2dc-633b09648c2f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-skqcc\" (UID: \"ee16c0c8-3b38-4e29-b2dc-633b09648c2f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-skqcc" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.951120 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a32cf0f1-bf23-4522-a619-71d1b1dab082-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ngd6n\" (UID: \"a32cf0f1-bf23-4522-a619-71d1b1dab082\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ngd6n" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.951466 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/30c7c034-9492-4051-9cc9-235a6d87bd03-mountpoint-dir\") pod \"csi-hostpathplugin-rhvlq\" (UID: \"30c7c034-9492-4051-9cc9-235a6d87bd03\") " pod="hostpath-provisioner/csi-hostpathplugin-rhvlq" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.951785 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e33286d6-0e3e-47d4-bf68-11b642927bee-config\") pod \"kube-apiserver-operator-766d6c64bb-x8zjt\" (UID: \"e33286d6-0e3e-47d4-bf68-11b642927bee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x8zjt" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.952072 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7db6391f-ccc4-41d2-82ff-aa58d3297625-config\") pod \"service-ca-operator-777779d784-bs7f7\" (UID: \"7db6391f-ccc4-41d2-82ff-aa58d3297625\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bs7f7" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.952327 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48cbaebb-5495-4965-a7fd-207d2d7ef0fc-config-volume\") pod \"dns-default-whjm4\" (UID: \"48cbaebb-5495-4965-a7fd-207d2d7ef0fc\") " pod="openshift-dns/dns-default-whjm4" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.954903 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/30c7c034-9492-4051-9cc9-235a6d87bd03-registration-dir\") pod 
\"csi-hostpathplugin-rhvlq\" (UID: \"30c7c034-9492-4051-9cc9-235a6d87bd03\") " pod="hostpath-provisioner/csi-hostpathplugin-rhvlq" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.957255 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cf057c5a-deef-4c01-bd58-f761ec86e2f4-srv-cert\") pod \"catalog-operator-68c6474976-n4rml\" (UID: \"cf057c5a-deef-4c01-bd58-f761ec86e2f4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.958021 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e80e8b17-711d-46d8-a240-4fa52e093545-tmpfs\") pod \"packageserver-d55dfcdfc-lp7ks\" (UID: \"e80e8b17-711d-46d8-a240-4fa52e093545\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.958410 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36a7a51a-2662-4f3b-aa1d-d674cf676b9d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fnk7f\" (UID: \"36a7a51a-2662-4f3b-aa1d-d674cf676b9d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.958507 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/30c7c034-9492-4051-9cc9-235a6d87bd03-socket-dir\") pod \"csi-hostpathplugin-rhvlq\" (UID: \"30c7c034-9492-4051-9cc9-235a6d87bd03\") " pod="hostpath-provisioner/csi-hostpathplugin-rhvlq" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.963579 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/30c7c034-9492-4051-9cc9-235a6d87bd03-csi-data-dir\") pod \"csi-hostpathplugin-rhvlq\" (UID: \"30c7c034-9492-4051-9cc9-235a6d87bd03\") " pod="hostpath-provisioner/csi-hostpathplugin-rhvlq" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.964100 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/30c7c034-9492-4051-9cc9-235a6d87bd03-plugins-dir\") pod \"csi-hostpathplugin-rhvlq\" (UID: \"30c7c034-9492-4051-9cc9-235a6d87bd03\") " pod="hostpath-provisioner/csi-hostpathplugin-rhvlq" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.965266 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc74377f-6986-4156-9c2b-7a003f07d6ff-config-volume\") pod \"collect-profiles-29497500-66dl8\" (UID: \"dc74377f-6986-4156-9c2b-7a003f07d6ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-66dl8" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.966138 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/36a7a51a-2662-4f3b-aa1d-d674cf676b9d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fnk7f\" (UID: \"36a7a51a-2662-4f3b-aa1d-d674cf676b9d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.967210 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/ee16c0c8-3b38-4e29-b2dc-633b09648c2f-proxy-tls\") pod \"machine-config-controller-84d6567774-skqcc\" (UID: \"ee16c0c8-3b38-4e29-b2dc-633b09648c2f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-skqcc" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.968022 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc74377f-6986-4156-9c2b-7a003f07d6ff-secret-volume\") pod \"collect-profiles-29497500-66dl8\" (UID: \"dc74377f-6986-4156-9c2b-7a003f07d6ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-66dl8" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.970311 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/642f0efa-b4e9-45d2-a5c3-f53ff0fb7687-certs\") pod \"machine-config-server-bsxrt\" (UID: \"642f0efa-b4e9-45d2-a5c3-f53ff0fb7687\") " pod="openshift-machine-config-operator/machine-config-server-bsxrt" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.970799 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/48cbaebb-5495-4965-a7fd-207d2d7ef0fc-metrics-tls\") pod \"dns-default-whjm4\" (UID: \"48cbaebb-5495-4965-a7fd-207d2d7ef0fc\") " pod="openshift-dns/dns-default-whjm4" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.971543 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/642f0efa-b4e9-45d2-a5c3-f53ff0fb7687-node-bootstrap-token\") pod \"machine-config-server-bsxrt\" (UID: \"642f0efa-b4e9-45d2-a5c3-f53ff0fb7687\") " pod="openshift-machine-config-operator/machine-config-server-bsxrt" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.972364 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/007a4117-0dfe-485e-85df-6bc68e0cee5e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ckvgq\" (UID: \"007a4117-0dfe-485e-85df-6bc68e0cee5e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.979625 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cf057c5a-deef-4c01-bd58-f761ec86e2f4-profile-collector-cert\") pod \"catalog-operator-68c6474976-n4rml\" (UID: \"cf057c5a-deef-4c01-bd58-f761ec86e2f4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.980044 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kslh\" (UniqueName: \"kubernetes.io/projected/90237d04-95e8-4523-bd3d-bc8cedfc0f5f-kube-api-access-8kslh\") pod \"machine-config-operator-74547568cd-8wdp6\" (UID: \"90237d04-95e8-4523-bd3d-bc8cedfc0f5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8wdp6" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.980503 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e33286d6-0e3e-47d4-bf68-11b642927bee-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-x8zjt\" (UID: \"e33286d6-0e3e-47d4-bf68-11b642927bee\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x8zjt" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.980953 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b9a73d68-2213-477c-a55d-91c86b7ce674-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xwn99\" (UID: \"b9a73d68-2213-477c-a55d-91c86b7ce674\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xwn99" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.984938 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/13f1c33b-cede-4fb1-9651-15d0dcd36173-srv-cert\") pod \"olm-operator-6b444d44fb-lb8hp\" (UID: \"13f1c33b-cede-4fb1-9651-15d0dcd36173\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.985380 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/13f1c33b-cede-4fb1-9651-15d0dcd36173-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lb8hp\" (UID: \"13f1c33b-cede-4fb1-9651-15d0dcd36173\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.985364 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24s52\" (UniqueName: \"kubernetes.io/projected/a32cf0f1-bf23-4522-a619-71d1b1dab082-kube-api-access-24s52\") pod \"cluster-image-registry-operator-dc59b4c8b-ngd6n\" (UID: \"a32cf0f1-bf23-4522-a619-71d1b1dab082\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ngd6n" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.989988 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-pkx9p" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.990451 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtqdv"] Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.991501 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e80e8b17-711d-46d8-a240-4fa52e093545-webhook-cert\") pod \"packageserver-d55dfcdfc-lp7ks\" (UID: \"e80e8b17-711d-46d8-a240-4fa52e093545\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" Jan 31 09:03:27 crc kubenswrapper[4830]: I0131 09:03:27.993325 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7db6391f-ccc4-41d2-82ff-aa58d3297625-serving-cert\") pod \"service-ca-operator-777779d784-bs7f7\" (UID: \"7db6391f-ccc4-41d2-82ff-aa58d3297625\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bs7f7" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:27.999894 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ngd6n" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.009228 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z4j6\" (UniqueName: \"kubernetes.io/projected/d5e84919-6083-4967-aced-6e3e10b7e69d-kube-api-access-7z4j6\") pod \"kube-storage-version-migrator-operator-b67b599dd-5blhw\" (UID: \"d5e84919-6083-4967-aced-6e3e10b7e69d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5blhw" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.026916 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-bound-sa-token\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.049516 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:28 crc kubenswrapper[4830]: E0131 09:03:28.049979 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:28.549964695 +0000 UTC m=+153.043327137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.050675 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2klp9" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.052204 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ab6816a-65ef-41b2-b416-60491d4423d9-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vjnc8\" (UID: \"2ab6816a-65ef-41b2-b416-60491d4423d9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vjnc8" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.062468 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z997\" (UniqueName: \"kubernetes.io/projected/129959a3-2970-4a90-b833-20ae33af36ba-kube-api-access-6z997\") pod \"ingress-canary-9rz4w\" (UID: \"129959a3-2970-4a90-b833-20ae33af36ba\") " pod="openshift-ingress-canary/ingress-canary-9rz4w" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.081522 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" event={"ID":"00ab4f1c-2cc4-46b0-9e22-df58e5327352","Type":"ContainerStarted","Data":"9bb0f1093a37424441fc8374c5fb71cb747c472d42f4f79a9b45c2da6c131ac0"} Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.095906 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppb98\" (UniqueName: \"kubernetes.io/projected/7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12-kube-api-access-ppb98\") pod \"etcd-operator-b45778765-26msj\" (UID: \"7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12\") " pod="openshift-etcd-operator/etcd-operator-b45778765-26msj" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.096465 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8wdp6" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.099326 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqr7j\" (UniqueName: \"kubernetes.io/projected/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-kube-api-access-zqr7j\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.101567 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" event={"ID":"c61fa19c-7742-4ab1-b3ca-9607723fe94d","Type":"ContainerStarted","Data":"46e2da18d5b684cc6c70a4df26d909e05530b054e0b4263921bb35bf82288f2d"} Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.110457 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtqdv" event={"ID":"ce03ae75-703f-4d6a-b98a-e866689b08e3","Type":"ContainerStarted","Data":"89490202a8f2a671e1fc137556d56499326111d9eb1fe931019e3fa913ff24a9"} Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.112266 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzjvd\" (UniqueName: \"kubernetes.io/projected/bc99ac19-2796-495d-82d4-6eda76879f40-kube-api-access-rzjvd\") pod \"machine-api-operator-5694c8668f-8nn2k\" (UID: \"bc99ac19-2796-495d-82d4-6eda76879f40\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8nn2k" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.125405 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9d827" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.126637 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" event={"ID":"d1346d7f-25da-4035-9c88-1f96c034d795","Type":"ContainerStarted","Data":"7a568f3811b401a78b2cb3b4c16a1582e19172add81be2af28bdd2f0d21f41d8"} Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.126934 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5blhw" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.133475 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" event={"ID":"d1346d7f-25da-4035-9c88-1f96c034d795","Type":"ContainerStarted","Data":"5ef50dc738b653bfd21350e73687236150116f82c7fd37a99b6090962ce4fcc6"} Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.134898 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9rz4w" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.148827 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw" event={"ID":"33210b82-c473-4bf8-b40d-a29b00833ea0","Type":"ContainerStarted","Data":"73db3496020c41a6d9c43cdd7a272c7e7b198ce445c18806d1b24285a64d36df"} Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.151914 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:28 crc kubenswrapper[4830]: E0131 09:03:28.153942 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:28.65268884 +0000 UTC m=+153.146051282 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.157082 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:28 crc kubenswrapper[4830]: E0131 09:03:28.158160 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 09:03:28.658143149 +0000 UTC m=+153.151505581 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.161954 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhctp\" (UniqueName: \"kubernetes.io/projected/36a7a51a-2662-4f3b-aa1d-d674cf676b9d-kube-api-access-nhctp\") pod \"marketplace-operator-79b997595-fnk7f\" (UID: \"36a7a51a-2662-4f3b-aa1d-d674cf676b9d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.162806 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hzk7b"] Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.177487 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e33286d6-0e3e-47d4-bf68-11b642927bee-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-x8zjt\" (UID: \"e33286d6-0e3e-47d4-bf68-11b642927bee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x8zjt" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.189769 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.200493 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gp4nv" event={"ID":"83cc5fe8-7965-46aa-b846-33d1b8d317f8","Type":"ContainerStarted","Data":"47b669fb814c76ba9b2b65afb80e4d3e6bf938ddb0db4eac8228e3a0dc714ec6"} Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.208686 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lpktp" event={"ID":"b0ebeb47-d72b-4d2f-b2e8-aee1f880da1e","Type":"ContainerStarted","Data":"7c6ca37323b1df2114075317fb4dd588b00513094ae9cb671a269b15d85bf0ce"} Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.208748 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lpktp" event={"ID":"b0ebeb47-d72b-4d2f-b2e8-aee1f880da1e","Type":"ContainerStarted","Data":"9e705abc12d7ef324f6b04dfed43158664007bf7677c5cb913e1fe4f44ff3c5a"} Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.220306 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-htl5l"] Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.226064 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4phcv\" (UniqueName: \"kubernetes.io/projected/e80e8b17-711d-46d8-a240-4fa52e093545-kube-api-access-4phcv\") pod \"packageserver-d55dfcdfc-lp7ks\" (UID: \"e80e8b17-711d-46d8-a240-4fa52e093545\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.231787 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-sjbhq\" (UniqueName: \"kubernetes.io/projected/b9a73d68-2213-477c-a55d-91c86b7ce674-kube-api-access-sjbhq\") pod \"multus-admission-controller-857f4d67dd-xwn99\" (UID: \"b9a73d68-2213-477c-a55d-91c86b7ce674\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xwn99" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.232977 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-l8ckt" event={"ID":"a8d26ab0-33c3-4eb7-928b-ffba996579d9","Type":"ContainerStarted","Data":"f02943a3f1a3a5b1d253da5c0e047618695dde6b04d0f0dd3648bd95fc345be9"} Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.239132 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9gw75" event={"ID":"0268e3ae-370f-43f0-9528-ff84b5983dac","Type":"ContainerStarted","Data":"a8f2206e65deb554d947a5b893851f94d4d0153dd249e5223818da7c75805707"} Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.239174 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9gw75" event={"ID":"0268e3ae-370f-43f0-9528-ff84b5983dac","Type":"ContainerStarted","Data":"9244d69940920231c3f19723fdc6dec86ae912dd54a790d943325b9e81b62be8"} Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.239186 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9gw75" event={"ID":"0268e3ae-370f-43f0-9528-ff84b5983dac","Type":"ContainerStarted","Data":"b8b2751a3bcaca58d552bede250a7baa5791678dd379f08bf22cc2291d0ea03b"} Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.246375 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktx44\" (UniqueName: \"kubernetes.io/projected/13f1c33b-cede-4fb1-9651-15d0dcd36173-kube-api-access-ktx44\") pod \"olm-operator-6b444d44fb-lb8hp\" (UID: \"13f1c33b-cede-4fb1-9651-15d0dcd36173\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.249577 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n5blr"] Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.258525 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:28 crc kubenswrapper[4830]: E0131 09:03:28.259833 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:28.75971606 +0000 UTC m=+153.253078492 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.269020 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2p57l" event={"ID":"2af5c820-fefe-42fe-83da-0aeccb301182","Type":"ContainerStarted","Data":"84f258f8a11a98e7c220d0433cfd56d89850cbd6cf0b4ea5015f00aa1b3eb112"} Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.269391 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2p57l" event={"ID":"2af5c820-fefe-42fe-83da-0aeccb301182","Type":"ContainerStarted","Data":"b838ac173153ffc263a8e3a823d879d8ee4dd82ee3d794876b172f4f44b82db2"} Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.271097 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-vbcgc" event={"ID":"bf986437-9998-4cd1-90b8-b2e0716e8d37","Type":"ContainerStarted","Data":"5ff37a7dafed9b2622c013ab02bddfafff9b96a9db90d79b6500bff539a33fca"} Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.273146 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7gjk\" (UniqueName: \"kubernetes.io/projected/cf057c5a-deef-4c01-bd58-f761ec86e2f4-kube-api-access-k7gjk\") pod \"catalog-operator-68c6474976-n4rml\" (UID: \"cf057c5a-deef-4c01-bd58-f761ec86e2f4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.278866 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n65sj" event={"ID":"5a386557-0e05-4f84-b5fc-a389083d2743","Type":"ContainerStarted","Data":"41d92b38e9b6769989d71ee9a9c4ac96c87a1b4cc5b0530e48b9bcc3a3f4ece3"} Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.280243 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9wqg\" (UniqueName: \"kubernetes.io/projected/7db6391f-ccc4-41d2-82ff-aa58d3297625-kube-api-access-d9wqg\") pod \"service-ca-operator-777779d784-bs7f7\" (UID: \"7db6391f-ccc4-41d2-82ff-aa58d3297625\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bs7f7" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.280416 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-47wc2"] Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.293734 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-8nn2k" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.295347 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmrmn\" (UniqueName: \"kubernetes.io/projected/30c7c034-9492-4051-9cc9-235a6d87bd03-kube-api-access-dmrmn\") pod \"csi-hostpathplugin-rhvlq\" (UID: \"30c7c034-9492-4051-9cc9-235a6d87bd03\") " pod="hostpath-provisioner/csi-hostpathplugin-rhvlq" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.307517 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww" event={"ID":"0f4287bc-c7a7-4ee2-8212-3611b978e2e8","Type":"ContainerStarted","Data":"0a78637f8b7343d66b6b94425540ddbe97c86fe7a3066716b0fa8f790875c0a2"} Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.308041 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.315320 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qlkn\" (UniqueName: \"kubernetes.io/projected/ee16c0c8-3b38-4e29-b2dc-633b09648c2f-kube-api-access-7qlkn\") pod \"machine-config-controller-84d6567774-skqcc\" (UID: \"ee16c0c8-3b38-4e29-b2dc-633b09648c2f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-skqcc" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.322563 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vjnc8" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.325867 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kcsj5"] Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.339804 4830 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-knkww container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.339877 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww" podUID="0f4287bc-c7a7-4ee2-8212-3611b978e2e8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.340557 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-26msj" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.349972 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5nbz\" (UniqueName: \"kubernetes.io/projected/642f0efa-b4e9-45d2-a5c3-f53ff0fb7687-kube-api-access-k5nbz\") pod \"machine-config-server-bsxrt\" (UID: \"642f0efa-b4e9-45d2-a5c3-f53ff0fb7687\") " pod="openshift-machine-config-operator/machine-config-server-bsxrt" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.351461 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk5vf\" (UniqueName: \"kubernetes.io/projected/48cbaebb-5495-4965-a7fd-207d2d7ef0fc-kube-api-access-bk5vf\") pod \"dns-default-whjm4\" (UID: \"48cbaebb-5495-4965-a7fd-207d2d7ef0fc\") " pod="openshift-dns/dns-default-whjm4" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.361897 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:28 crc kubenswrapper[4830]: E0131 09:03:28.363321 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:28.86329471 +0000 UTC m=+153.356657152 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.381453 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgrjx\" (UniqueName: \"kubernetes.io/projected/007a4117-0dfe-485e-85df-6bc68e0cee5e-kube-api-access-hgrjx\") pod \"package-server-manager-789f6589d5-ckvgq\" (UID: \"007a4117-0dfe-485e-85df-6bc68e0cee5e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.404037 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnqcc\" (UniqueName: \"kubernetes.io/projected/dc74377f-6986-4156-9c2b-7a003f07d6ff-kube-api-access-vnqcc\") pod \"collect-profiles-29497500-66dl8\" (UID: \"dc74377f-6986-4156-9c2b-7a003f07d6ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-66dl8" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.446524 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.463089 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:28 crc kubenswrapper[4830]: E0131 09:03:28.465137 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:28.965078887 +0000 UTC m=+153.458441569 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.466229 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-skqcc" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.466657 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-66dl8" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.479750 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x8zjt" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.489085 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.507128 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.517210 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.528973 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-xwn99" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.541015 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bs7f7" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.541840 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ngd6n"] Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.565037 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:28 crc kubenswrapper[4830]: E0131 09:03:28.565509 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:29.065490954 +0000 UTC m=+153.558853406 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.569300 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rhvlq" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.583234 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-bsxrt" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.593592 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-whjm4" Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.640568 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2klp9"] Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.675717 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:28 crc kubenswrapper[4830]: E0131 09:03:28.675886 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:29.175850651 +0000 UTC m=+153.669213093 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.676202 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:28 crc kubenswrapper[4830]: E0131 09:03:28.676614 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:29.176599153 +0000 UTC m=+153.669961595 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.779783 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:28 crc kubenswrapper[4830]: E0131 09:03:28.780008 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:29.279981087 +0000 UTC m=+153.773343529 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.780531 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:28 crc kubenswrapper[4830]: E0131 09:03:28.780971 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:29.280919964 +0000 UTC m=+153.774282396 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.853079 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8wdp6"] Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.895715 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-pkx9p"] Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.896832 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:28 crc kubenswrapper[4830]: E0131 09:03:28.897525 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:29.397503463 +0000 UTC m=+153.890865905 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:28 crc kubenswrapper[4830]: I0131 09:03:28.934620 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9d827"] Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.001002 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:29 crc kubenswrapper[4830]: E0131 09:03:29.001397 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:29.501382481 +0000 UTC m=+153.994744923 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.026985 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5blhw"] Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.096944 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9rz4w"] Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.102015 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:29 crc kubenswrapper[4830]: E0131 09:03:29.102780 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:29.602761147 +0000 UTC m=+154.096123579 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.168122 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq"] Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.204927 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:29 crc kubenswrapper[4830]: E0131 09:03:29.205376 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:29.705361488 +0000 UTC m=+154.198723930 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:29 crc kubenswrapper[4830]: W0131 09:03:29.223517 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod129959a3_2970_4a90_b833_20ae33af36ba.slice/crio-5535a24c62923dda443732ed2c114a2b3b46ddbbf106746f0812556cdb2c7ef9 WatchSource:0}: Error finding container 5535a24c62923dda443732ed2c114a2b3b46ddbbf106746f0812556cdb2c7ef9: Status 404 returned error can't find the container with id 5535a24c62923dda443732ed2c114a2b3b46ddbbf106746f0812556cdb2c7ef9 Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.228865 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" podStartSLOduration=127.228807722 podStartE2EDuration="2m7.228807722s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:29.204521674 +0000 UTC m=+153.697884116" watchObservedRunningTime="2026-01-31 09:03:29.228807722 +0000 UTC m=+153.722170164" Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.230946 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-26msj"] Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.239444 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-skqcc"] Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.256917 4830 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fnk7f"] Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.265169 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vjnc8"] Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.265287 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lpktp" podStartSLOduration=127.265256234 podStartE2EDuration="2m7.265256234s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:29.25070689 +0000 UTC m=+153.744069322" watchObservedRunningTime="2026-01-31 09:03:29.265256234 +0000 UTC m=+153.758618676" Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.287547 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-8nn2k"] Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.306569 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:29 crc kubenswrapper[4830]: E0131 09:03:29.307105 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:29.807084254 +0000 UTC m=+154.300446696 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.337023 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml"] Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.352041 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww" podStartSLOduration=127.352014403 podStartE2EDuration="2m7.352014403s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:29.351993783 +0000 UTC m=+153.845356225" watchObservedRunningTime="2026-01-31 09:03:29.352014403 +0000 UTC m=+153.845376835" Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.377259 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww" event={"ID":"0f4287bc-c7a7-4ee2-8212-3611b978e2e8","Type":"ContainerStarted","Data":"c93ef06b4cc611d689048f6986abcd84dc1de88007a083281962fb48d9fe17b4"} Jan 31 09:03:29 crc kubenswrapper[4830]: W0131 09:03:29.379513 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc99ac19_2796_495d_82d4_6eda76879f40.slice/crio-dcf246705ffdebd7a6a5a9424487bae3aa2ba8a4f2bf543c55f370aff017000a WatchSource:0}: Error finding container dcf246705ffdebd7a6a5a9424487bae3aa2ba8a4f2bf543c55f370aff017000a: Status 404 returned error can't find the container with id dcf246705ffdebd7a6a5a9424487bae3aa2ba8a4f2bf543c55f370aff017000a Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.390147 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-bsxrt" event={"ID":"642f0efa-b4e9-45d2-a5c3-f53ff0fb7687","Type":"ContainerStarted","Data":"31a4223f0a2f0c8ab2ea163885c67c012eb0a4377b3c1d72eb9f921a6bbdba68"} Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.392560 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-htl5l" event={"ID":"2a94efc3-19bc-47ce-b48a-4f4b3351d955","Type":"ContainerStarted","Data":"ea2f46d43d60d78e07ba114b10b86d41506644a9cd830a837d68c63cee5586eb"} Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.411399 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:29 crc kubenswrapper[4830]: E0131 09:03:29.413013 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 09:03:29.912985091 +0000 UTC m=+154.406347533 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.414470 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-pkx9p" event={"ID":"691a8aff-6fcd-400a-ace9-fb3fa8778206","Type":"ContainerStarted","Data":"619b1d1a023448169f2633472031ebea3f5678b03a37fbe2801e517a182778a0"} Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.428063 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw" event={"ID":"33210b82-c473-4bf8-b40d-a29b00833ea0","Type":"ContainerStarted","Data":"022ea8a18a302916854f6b760b83a358dccdbbcd5c291d9804b6a782c98e9a71"} Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.429256 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw" Jan 31 09:03:29 crc kubenswrapper[4830]: W0131 09:03:29.444476 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod007a4117_0dfe_485e_85df_6bc68e0cee5e.slice/crio-463bbd5a3868014cb4212f5a866f006df2f48b2b282a63e7f5fa9a2f72c2fa70 WatchSource:0}: Error finding container 463bbd5a3868014cb4212f5a866f006df2f48b2b282a63e7f5fa9a2f72c2fa70: Status 404 returned error can't find the container with id 463bbd5a3868014cb4212f5a866f006df2f48b2b282a63e7f5fa9a2f72c2fa70 Jan 31 09:03:29 crc kubenswrapper[4830]: W0131 09:03:29.447637 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ab6816a_65ef_41b2_b416_60491d4423d9.slice/crio-2b3c2f89eee4f082d89c7fbbbda67a9512343057678ea098aa76312915dc2e75 WatchSource:0}: Error finding container 2b3c2f89eee4f082d89c7fbbbda67a9512343057678ea098aa76312915dc2e75: Status 404 returned error can't find the container with id 2b3c2f89eee4f082d89c7fbbbda67a9512343057678ea098aa76312915dc2e75 Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.470816 4830 csr.go:261] certificate signing request csr-h9bqg is approved, waiting to be issued Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.477274 4830 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-rdzrw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.477356 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw" podUID="33210b82-c473-4bf8-b40d-a29b00833ea0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.483302 4830 csr.go:257] certificate signing request csr-h9bqg 
is issued Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.491829 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9d827" event={"ID":"23da5bb2-d0a9-4b0d-8755-ea8e58234b18","Type":"ContainerStarted","Data":"5c1eed5ffb35a79be2680bd4b153a96edaab6e84715576067645da7b45b4e519"} Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.501420 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n5blr" event={"ID":"96ad696c-eaac-4e34-a986-d31a24d8d7bb","Type":"ContainerStarted","Data":"f4b88df68d9547b26da53afee15ae6a4702d6c61f167be65d79274c978fea385"} Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.516835 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:29 crc kubenswrapper[4830]: E0131 09:03:29.517327 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.017288582 +0000 UTC m=+154.510651024 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.528801 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x8zjt"] Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.530715 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ngd6n" event={"ID":"a32cf0f1-bf23-4522-a619-71d1b1dab082","Type":"ContainerStarted","Data":"eab517fe2c57c5705be65522943deb5cfe52fd8a40bb113ab5e61915f2de8e72"} Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.555442 4830 generic.go:334] "Generic (PLEG): container finished" podID="d1346d7f-25da-4035-9c88-1f96c034d795" containerID="7a568f3811b401a78b2cb3b4c16a1582e19172add81be2af28bdd2f0d21f41d8" exitCode=0 Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.555528 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" event={"ID":"d1346d7f-25da-4035-9c88-1f96c034d795","Type":"ContainerDied","Data":"7a568f3811b401a78b2cb3b4c16a1582e19172add81be2af28bdd2f0d21f41d8"} Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.561464 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n65sj" event={"ID":"5a386557-0e05-4f84-b5fc-a389083d2743","Type":"ContainerStarted","Data":"a715b2420e56cededc54a04c2801f4015766544ae9eba932841f45c779a9852f"} Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.566738 4830 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" event={"ID":"5fe5bd86-a665-4a73-8892-fd12a784463d","Type":"ContainerStarted","Data":"8adab15d8c8c06af57609909ec54cc53623ecd57ee4d7656578ddfd785fa5321"} Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.590888 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtqdv" event={"ID":"ce03ae75-703f-4d6a-b98a-e866689b08e3","Type":"ContainerStarted","Data":"f0158abbe5636af2c23ae0f6983f7c03f011f2c8b6509852d6681ededc93458b"} Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.596126 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47wc2" event={"ID":"cef36034-4148-4107-9c32-4b75ac7046b5","Type":"ContainerStarted","Data":"119db29f9a3844a3490368ac01413d89e50bf5e3b9ffccec37d222ad403b528c"} Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.596188 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47wc2" event={"ID":"cef36034-4148-4107-9c32-4b75ac7046b5","Type":"ContainerStarted","Data":"a36715a16397eacdb2921874510beb5aef69d1badd2f8c94a504e96231a9c0cb"} Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.602026 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8wdp6" event={"ID":"90237d04-95e8-4523-bd3d-bc8cedfc0f5f","Type":"ContainerStarted","Data":"a6c086f40bf36473672628e5767898aab22a0d624b98406587af25fe75f3952d"} Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.604244 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2klp9" event={"ID":"21ee0584-e383-47cf-af98-48c65d9fba74","Type":"ContainerStarted","Data":"8bee4a142db7ec71ff4e78ae43f2a99ee8dc85df8f42bce89db523506d73f73c"} Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.605919 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gp4nv" event={"ID":"83cc5fe8-7965-46aa-b846-33d1b8d317f8","Type":"ContainerStarted","Data":"4d8a6a78e590a29565dc28a9b5bb611fc4a65cc7c4e41bb1ec1d59ce1b636727"} Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.609375 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-kcsj5" event={"ID":"93359d96-ca07-4f0c-8b0a-a23f1635dcb1","Type":"ContainerStarted","Data":"8a36421ff8552953bf63b75b8f06da5daa0ff455b0ce7e8cd25095f6ae42a4d7"} Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.609431 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-kcsj5" event={"ID":"93359d96-ca07-4f0c-8b0a-a23f1635dcb1","Type":"ContainerStarted","Data":"ac19940c4226456a5283335cf268c9760ec5a6861253f27a6e3a0bb82f980712"} Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.612640 4830 generic.go:334] "Generic (PLEG): container finished" podID="c61fa19c-7742-4ab1-b3ca-9607723fe94d" containerID="080322db4393f76ed95423aff847214c33b2f42609ca516c977299aed7d2d549" exitCode=0 Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.613749 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" event={"ID":"c61fa19c-7742-4ab1-b3ca-9607723fe94d","Type":"ContainerDied","Data":"080322db4393f76ed95423aff847214c33b2f42609ca516c977299aed7d2d549"} Jan 31 09:03:29 crc 
kubenswrapper[4830]: I0131 09:03:29.619475 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:29 crc kubenswrapper[4830]: E0131 09:03:29.619994 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.119972995 +0000 UTC m=+154.613335437 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.631454 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-l8ckt" event={"ID":"a8d26ab0-33c3-4eb7-928b-ffba996579d9","Type":"ContainerStarted","Data":"663300a1eec888f0c1315103a2cb4760fc9ed1d0e7eb16f88381ae83cf26de31"} Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.632769 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-l8ckt" Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.638274 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.638352 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.650571 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9rz4w" event={"ID":"129959a3-2970-4a90-b833-20ae33af36ba","Type":"ContainerStarted","Data":"5535a24c62923dda443732ed2c114a2b3b46ddbbf106746f0812556cdb2c7ef9"} Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.663052 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-vbcgc" event={"ID":"bf986437-9998-4cd1-90b8-b2e0716e8d37","Type":"ContainerStarted","Data":"80f837c980bbb2106b85f0e8ae5ce486b89cde72328711691a5e7a58dca33a3f"} Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.673523 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5blhw" event={"ID":"d5e84919-6083-4967-aced-6e3e10b7e69d","Type":"ContainerStarted","Data":"fd63e57144feaec4419f1b815f71aaf1b0c46ce4799330a54dfc22675f0c9742"} Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.724995 4830 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:29 crc kubenswrapper[4830]: E0131 09:03:29.725286 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.225229494 +0000 UTC m=+154.718591936 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.727138 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:29 crc kubenswrapper[4830]: E0131 09:03:29.736104 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.2360771 +0000 UTC m=+154.729439542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.788933 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-vbcgc"
Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.806769 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 09:03:29 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld
Jan 31 09:03:29 crc kubenswrapper[4830]: [+]process-running ok
Jan 31 09:03:29 crc kubenswrapper[4830]: healthz check failed
Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.806848 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.827769 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2p57l" podStartSLOduration=127.827692981 podStartE2EDuration="2m7.827692981s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:29.827262038 +0000 UTC m=+154.320624480" watchObservedRunningTime="2026-01-31 09:03:29.827692981 +0000 UTC m=+154.321055433"
Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.831378 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww"
Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.838321 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:29 crc kubenswrapper[4830]: E0131 09:03:29.839228 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.339195296 +0000 UTC m=+154.832557908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.940217 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:29 crc kubenswrapper[4830]: E0131 09:03:29.940607 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.440593872 +0000 UTC m=+154.933956304 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:29 crc kubenswrapper[4830]: I0131 09:03:29.989200 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp"]
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.046019 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:30 crc kubenswrapper[4830]: E0131 09:03:30.046479 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.546447928 +0000 UTC m=+155.039810370 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.046745 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:30 crc kubenswrapper[4830]: E0131 09:03:30.047160 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.547145058 +0000 UTC m=+155.040507500 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.094715 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xwn99"]
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.109857 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9gw75" podStartSLOduration=128.109837896 podStartE2EDuration="2m8.109837896s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:30.103760669 +0000 UTC m=+154.597123121" watchObservedRunningTime="2026-01-31 09:03:30.109837896 +0000 UTC m=+154.603200338"
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.112847 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497500-66dl8"]
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.146692 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-kcsj5" podStartSLOduration=128.146665379 podStartE2EDuration="2m8.146665379s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:30.138397898 +0000 UTC m=+154.631760341" watchObservedRunningTime="2026-01-31 09:03:30.146665379 +0000 UTC m=+154.640027811"
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.163095 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:30 crc kubenswrapper[4830]: E0131 09:03:30.168720 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.668683201 +0000 UTC m=+155.162045643 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.246339 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bs7f7"]
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.256298 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-gp4nv" podStartSLOduration=128.256265785 podStartE2EDuration="2m8.256265785s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:30.233340546 +0000 UTC m=+154.726702988" watchObservedRunningTime="2026-01-31 09:03:30.256265785 +0000 UTC m=+154.749628227"
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.270193 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:30 crc kubenswrapper[4830]: E0131 09:03:30.271618 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.771599202 +0000 UTC m=+155.264961644 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.272670 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-vbcgc" podStartSLOduration=128.272639772 podStartE2EDuration="2m8.272639772s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:30.269214472 +0000 UTC m=+154.762576924" watchObservedRunningTime="2026-01-31 09:03:30.272639772 +0000 UTC m=+154.766002214"
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.306745 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtqdv" podStartSLOduration=128.306702875 podStartE2EDuration="2m8.306702875s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:30.293066217 +0000 UTC m=+154.786428659" watchObservedRunningTime="2026-01-31 09:03:30.306702875 +0000 UTC m=+154.800065327"
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.370073 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n65sj" podStartSLOduration=128.370050132 podStartE2EDuration="2m8.370050132s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:30.333232908 +0000 UTC m=+154.826595350" watchObservedRunningTime="2026-01-31 09:03:30.370050132 +0000 UTC m=+154.863412574"
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.372541 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:30 crc kubenswrapper[4830]: E0131 09:03:30.373017 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.872994348 +0000 UTC m=+155.366356790 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.435561 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks"]
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.439452 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rhvlq"]
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.462245 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw" podStartSLOduration=128.462217769 podStartE2EDuration="2m8.462217769s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:30.452293779 +0000 UTC m=+154.945656231" watchObservedRunningTime="2026-01-31 09:03:30.462217769 +0000 UTC m=+154.955580211"
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.465463 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-whjm4"]
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.474162 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:30 crc kubenswrapper[4830]: E0131 09:03:30.475340 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.975251059 +0000 UTC m=+155.468613501 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.482698 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-l8ckt" podStartSLOduration=128.482668815 podStartE2EDuration="2m8.482668815s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:30.480580534 +0000 UTC m=+154.973942976" watchObservedRunningTime="2026-01-31 09:03:30.482668815 +0000 UTC m=+154.976031247"
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.487081 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-31 08:58:29 +0000 UTC, rotation deadline is 2026-10-26 20:50:32.806170054 +0000 UTC
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.487145 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6443h47m2.319027449s for next certificate rotation
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.577108 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:30 crc kubenswrapper[4830]: E0131 09:03:30.577280 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.077239742 +0000 UTC m=+155.570602194 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.577535 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:30 crc kubenswrapper[4830]: E0131 09:03:30.578021 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.078011294 +0000 UTC m=+155.571373736 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:30 crc kubenswrapper[4830]: W0131 09:03:30.581269 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48cbaebb_5495_4965_a7fd_207d2d7ef0fc.slice/crio-3aa23da5042b6b3a5c7a48cecac3a34888658c054a20e8113105faf720e2ca47 WatchSource:0}: Error finding container 3aa23da5042b6b3a5c7a48cecac3a34888658c054a20e8113105faf720e2ca47: Status 404 returned error can't find the container with id 3aa23da5042b6b3a5c7a48cecac3a34888658c054a20e8113105faf720e2ca47
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.680742 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:30 crc kubenswrapper[4830]: E0131 09:03:30.680934 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.180895114 +0000 UTC m=+155.674257556 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.681530 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:30 crc kubenswrapper[4830]: E0131 09:03:30.681912 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.181897633 +0000 UTC m=+155.675260065 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.755241 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ngd6n" event={"ID":"a32cf0f1-bf23-4522-a619-71d1b1dab082","Type":"ContainerStarted","Data":"1baf6744d88e325d3a05ae5b1a58ce4627dfb93f9f5e0be12ea0a13efd7c337e"}
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.782213 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:30 crc kubenswrapper[4830]: E0131 09:03:30.782616 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.282589658 +0000 UTC m=+155.775952100 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.794184 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 09:03:30 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld
Jan 31 09:03:30 crc kubenswrapper[4830]: [+]process-running ok
Jan 31 09:03:30 crc kubenswrapper[4830]: healthz check failed
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.794297 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.826098 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2klp9" event={"ID":"21ee0584-e383-47cf-af98-48c65d9fba74","Type":"ContainerStarted","Data":"9657ee876e4cec7621415a469ac4e3b6bc20f21c952b7bcb4ac1c8401e3b4a40"}
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.845956 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" event={"ID":"e80e8b17-711d-46d8-a240-4fa52e093545","Type":"ContainerStarted","Data":"68003ee84da13367c5b51bd43eb3ffb040db211cd4e4dd0a3ad44f0a099d5b35"}
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.888647 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:30 crc kubenswrapper[4830]: E0131 09:03:30.890214 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.390188535 +0000 UTC m=+155.883550977 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.894440 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9rz4w" event={"ID":"129959a3-2970-4a90-b833-20ae33af36ba","Type":"ContainerStarted","Data":"789ba7f8610289b4f22a9b1efd232937942ce7e4102e8e1b6f79ee841358a9c1"}
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.929158 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47wc2" event={"ID":"cef36034-4148-4107-9c32-4b75ac7046b5","Type":"ContainerStarted","Data":"84759c59733f58523dcddc0c08bb379c98c790ab1e6b0307b3058f99ed4b8c53"}
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.955348 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xwn99" event={"ID":"b9a73d68-2213-477c-a55d-91c86b7ce674","Type":"ContainerStarted","Data":"0c99c56de485f06d8ebb44e158598fe2a2f13ea1195ea395adaffdc84b6e1c93"}
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.989935 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:30 crc kubenswrapper[4830]: E0131 09:03:30.992149 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.492123947 +0000 UTC m=+155.985486389 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:30 crc kubenswrapper[4830]: I0131 09:03:30.999351 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n5blr" event={"ID":"96ad696c-eaac-4e34-a986-d31a24d8d7bb","Type":"ContainerStarted","Data":"69f92400a79ee2cbadf28f0c5f619b79f1a86c4a0a68d0a53d4f056c952b155e"}
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.018391 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-skqcc" event={"ID":"ee16c0c8-3b38-4e29-b2dc-633b09648c2f","Type":"ContainerStarted","Data":"02aceb077d6049ccee01b940c73bcf75dfd47412e53ada62071743b21e2e4deb"}
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.039306 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-htl5l" event={"ID":"2a94efc3-19bc-47ce-b48a-4f4b3351d955","Type":"ContainerDied","Data":"66b5dd4fc9dd6831ca546f7b5d5de9d7b719344f93d79a57ba7efd26d503336d"}
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.039158 4830 generic.go:334] "Generic (PLEG): container finished" podID="2a94efc3-19bc-47ce-b48a-4f4b3351d955" containerID="66b5dd4fc9dd6831ca546f7b5d5de9d7b719344f93d79a57ba7efd26d503336d" exitCode=0
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.058355 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bs7f7" event={"ID":"7db6391f-ccc4-41d2-82ff-aa58d3297625","Type":"ContainerStarted","Data":"d5e61a0a5e465ae46380172419acd829c18618fe8393e2ea51e683ed05400406"}
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.087843 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-8nn2k" event={"ID":"bc99ac19-2796-495d-82d4-6eda76879f40","Type":"ContainerStarted","Data":"dcf246705ffdebd7a6a5a9424487bae3aa2ba8a4f2bf543c55f370aff017000a"}
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.089804 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" event={"ID":"5fe5bd86-a665-4a73-8892-fd12a784463d","Type":"ContainerStarted","Data":"bb377573acca1cadcbbd0e2208ca9329c7f68ae0060779b2e74b9b113b146b89"}
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.090957 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b"
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.091637 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:31 crc kubenswrapper[4830]: E0131 09:03:31.093682 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.593666187 +0000 UTC m=+156.087028639 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.097498 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rhvlq" event={"ID":"30c7c034-9492-4051-9cc9-235a6d87bd03","Type":"ContainerStarted","Data":"d973c8d349ae9b093c24aacd064d686b0e6fa62a037b9d1b6d93904bd5219e5b"}
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.102037 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" event={"ID":"13f1c33b-cede-4fb1-9651-15d0dcd36173","Type":"ContainerStarted","Data":"19ddf6c8f7783b724d0e024ca90f47b017e6743f2b65910ada0561bfeafd06db"}
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.104280 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47wc2" podStartSLOduration=129.104264296 podStartE2EDuration="2m9.104264296s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:31.10029372 +0000 UTC m=+155.593656172" watchObservedRunningTime="2026-01-31 09:03:31.104264296 +0000 UTC m=+155.597626738"
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.105306 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" event={"ID":"cf057c5a-deef-4c01-bd58-f761ec86e2f4","Type":"ContainerStarted","Data":"17032142b5df05d2213336871163a93fd4eab3ea50f98e1d78e6922aa9a503bc"}
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.106046 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml"
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.117907 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8wdp6" event={"ID":"90237d04-95e8-4523-bd3d-bc8cedfc0f5f","Type":"ContainerStarted","Data":"83915c2f8a2c7b7a16cd6a30ff746f867eccb32f69d6daf063cd4e7e1e97c583"}
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.119744 4830 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n4rml container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body=
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.119828 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" podUID="cf057c5a-deef-4c01-bd58-f761ec86e2f4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused"
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.122918 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vjnc8" event={"ID":"2ab6816a-65ef-41b2-b416-60491d4423d9","Type":"ContainerStarted","Data":"2b3c2f89eee4f082d89c7fbbbda67a9512343057678ea098aa76312915dc2e75"}
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.124529 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-66dl8" event={"ID":"dc74377f-6986-4156-9c2b-7a003f07d6ff","Type":"ContainerStarted","Data":"fb9fc9e13aa13e5035e4d31402f5be1f1c8c1ae07e5be38ea76a827d38e986f8"}
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.128676 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-whjm4" event={"ID":"48cbaebb-5495-4965-a7fd-207d2d7ef0fc","Type":"ContainerStarted","Data":"3aa23da5042b6b3a5c7a48cecac3a34888658c054a20e8113105faf720e2ca47"}
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.131937 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-pkx9p" event={"ID":"691a8aff-6fcd-400a-ace9-fb3fa8778206","Type":"ContainerStarted","Data":"497622e31559cfebe662e6932b434973f3b3c9ada6b4f06670330d37ab8d06cb"}
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.133031 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-pkx9p"
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.138059 4830 patch_prober.go:28] interesting pod/console-operator-58897d9998-pkx9p container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.138125 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-pkx9p" podUID="691a8aff-6fcd-400a-ace9-fb3fa8778206" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.139174 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n5blr" podStartSLOduration=129.139145513 podStartE2EDuration="2m9.139145513s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:31.138454333 +0000 UTC m=+155.631816775" watchObservedRunningTime="2026-01-31 09:03:31.139145513 +0000 UTC m=+155.632507955"
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.153649 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f" event={"ID":"36a7a51a-2662-4f3b-aa1d-d674cf676b9d","Type":"ContainerStarted","Data":"e333f126646e33e3be1b2d0c1dda0c5012c306f8b9919b39797eb66da8e04c59"}
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.154388 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f"
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.158695 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5blhw" event={"ID":"d5e84919-6083-4967-aced-6e3e10b7e69d","Type":"ContainerStarted","Data":"8a5890f18ca7544311496bd67138f24565dfa1261425045d0afb918bc5ef9472"}
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.161383 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-bsxrt" event={"ID":"642f0efa-b4e9-45d2-a5c3-f53ff0fb7687","Type":"ContainerStarted","Data":"ed5ade38de3bf6a7d04880e7ccddcfd1c1194881e2644d076e37aa6ebfe81b49"}
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.162434 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x8zjt" event={"ID":"e33286d6-0e3e-47d4-bf68-11b642927bee","Type":"ContainerStarted","Data":"3afac8c1ecbf69fcce74b74560e459ba59727bf70dc8213f73ff08ed6a467318"}
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.163367 4830 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fnk7f container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body=
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.163402 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f" podUID="36a7a51a-2662-4f3b-aa1d-d674cf676b9d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused"
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.164414 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" event={"ID":"007a4117-0dfe-485e-85df-6bc68e0cee5e","Type":"ContainerStarted","Data":"463bbd5a3868014cb4212f5a866f006df2f48b2b282a63e7f5fa9a2f72c2fa70"}
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.166693 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9d827" event={"ID":"23da5bb2-d0a9-4b0d-8755-ea8e58234b18","Type":"ContainerStarted","Data":"a8942b831a77d4cdaf3b53ec031aee510faa0d45120f6ebf14027acbf7fe0689"}
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.168872 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-26msj" event={"ID":"7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12","Type":"ContainerStarted","Data":"3a27d789b1f171261b89d85f254745e12a8b5680a1490858b0303d0c8aaf4603"}
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.170855 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.170892 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.176000 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ngd6n" podStartSLOduration=129.175964596 podStartE2EDuration="2m9.175964596s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:31.173391661 +0000 UTC m=+155.666754103" watchObservedRunningTime="2026-01-31 09:03:31.175964596 +0000 UTC m=+155.669327038"
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.184201 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw"
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.193495 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:31 crc kubenswrapper[4830]: E0131 09:03:31.195459 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.695428764 +0000 UTC m=+156.188791206 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.300613 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-9rz4w" podStartSLOduration=6.300584299 podStartE2EDuration="6.300584299s" podCreationTimestamp="2026-01-31 09:03:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:31.279270738 +0000 UTC m=+155.772633190" watchObservedRunningTime="2026-01-31 09:03:31.300584299 +0000 UTC m=+155.793946751"
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.308449 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:31 crc kubenswrapper[4830]: E0131 09:03:31.342710 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.842676656 +0000 UTC m=+156.336039098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.412055 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:31 crc kubenswrapper[4830]: E0131 09:03:31.413817 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.913779509 +0000 UTC m=+156.407141961 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.442098 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" podStartSLOduration=129.442073114 podStartE2EDuration="2m9.442073114s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:31.380386916 +0000 UTC m=+155.873749358" watchObservedRunningTime="2026-01-31 09:03:31.442073114 +0000 UTC m=+155.935435556"
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.532787 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:31 crc kubenswrapper[4830]: E0131 09:03:31.533888 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:32.03387523 +0000 UTC m=+156.527237672 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.551103 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" podStartSLOduration=129.551079022 podStartE2EDuration="2m9.551079022s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:31.550778373 +0000 UTC m=+156.044140835" watchObservedRunningTime="2026-01-31 09:03:31.551079022 +0000 UTC m=+156.044441464"
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.553109 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f" podStartSLOduration=129.553103391 podStartE2EDuration="2m9.553103391s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:31.447637716 +0000 UTC m=+155.941000158" watchObservedRunningTime="2026-01-31 09:03:31.553103391 +0000 UTC m=+156.046465833"
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.634647 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:31 crc kubenswrapper[4830]: E0131 09:03:31.635270 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:32.135250105 +0000 UTC m=+156.628612547 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.737502 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:31 crc kubenswrapper[4830]: E0131 09:03:31.752042 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:32.252010198 +0000 UTC m=+156.745372640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.753613 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-pkx9p" podStartSLOduration=129.753593065 podStartE2EDuration="2m9.753593065s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:31.701945769 +0000 UTC m=+156.195308211" watchObservedRunningTime="2026-01-31 09:03:31.753593065 +0000 UTC m=+156.246955517"
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.754302 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-bsxrt" podStartSLOduration=6.754296045 podStartE2EDuration="6.754296045s" podCreationTimestamp="2026-01-31 09:03:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:31.75309376 +0000 UTC m=+156.246456222" watchObservedRunningTime="2026-01-31 09:03:31.754296045 +0000 UTC m=+156.247658487"
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.793469 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5blhw" podStartSLOduration=129.793450796 podStartE2EDuration="2m9.793450796s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:31.792968992 +0000 UTC m=+156.286331434" watchObservedRunningTime="2026-01-31 09:03:31.793450796 +0000 UTC m=+156.286813239"
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.826449 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 09:03:31 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld
Jan 31 09:03:31 crc kubenswrapper[4830]: [+]process-running ok
Jan 31 09:03:31 crc kubenswrapper[4830]: healthz check failed
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.826527 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.854231 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:31 crc kubenswrapper[4830]: E0131 09:03:31.854796 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:32.354773164 +0000 UTC m=+156.848135606 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:31 crc kubenswrapper[4830]: I0131 09:03:31.957578 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:31 crc kubenswrapper[4830]: E0131 09:03:31.958388 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:32.458371534 +0000 UTC m=+156.951733976 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.058664 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:32 crc kubenswrapper[4830]: E0131 09:03:32.058980 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:32.558943626 +0000 UTC m=+157.052306068 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.059310 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:32 crc kubenswrapper[4830]: E0131 09:03:32.059711 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:32.559694228 +0000 UTC m=+157.053056660 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.091425 4830 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-hzk7b container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.14:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.091527 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" podUID="5fe5bd86-a665-4a73-8892-fd12a784463d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.14:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.159861 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:32 crc kubenswrapper[4830]: E0131 09:03:32.160137 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:32.660118546 +0000 UTC m=+157.153480988 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.185177 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" event={"ID":"13f1c33b-cede-4fb1-9651-15d0dcd36173","Type":"ContainerStarted","Data":"e0cb3249c4e74782086ada27cb6cdcdf73644dbc41e394c8950ad3621a48b54d"} Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.187383 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.189110 4830 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lb8hp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.189186 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" podUID="13f1c33b-cede-4fb1-9651-15d0dcd36173" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.206411 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" event={"ID":"d1346d7f-25da-4035-9c88-1f96c034d795","Type":"ContainerStarted","Data":"85d3d5001bb1210574c9fdb22694fa1d3ee858ab7e8b183782ae2dc18e10a849"} Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.207287 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.220826 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" event={"ID":"007a4117-0dfe-485e-85df-6bc68e0cee5e","Type":"ContainerStarted","Data":"d90adb47121b9222b981576066f0df9e3cafccb2f5b0004e261272503fa48a5d"} Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.220917 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" event={"ID":"007a4117-0dfe-485e-85df-6bc68e0cee5e","Type":"ContainerStarted","Data":"f2a1cf5c1fea0ee962f9168bbb050711e978d014295d796c81c53270551275e0"} Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.221289 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.242471 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" event={"ID":"c61fa19c-7742-4ab1-b3ca-9607723fe94d","Type":"ContainerStarted","Data":"995f8f19915f66c235fa4292fdc68319a3418f6ca92d59c66d96a815ab0dc176"} 
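
The block above is one full cycle of the failure loop that dominates this window: the kubelet's volume manager cannot find a CSI driver named kubevirt.io.hostpath-provisioner among its registered plugins, so every MountVolume.MountDevice for image-registry-697d97f7c8-7m8b7 and every UnmountVolume.TearDown for pod 8f668bae-612b-4b75-9490-919e737c6a3b fails immediately, and nestedpendingoperations schedules the next attempt half a second later (the "durationBeforeRetry 500ms" in each error). A node-level CSI driver only becomes visible to the kubelet after its plugin pod registers over the kubelet's plugin-registration socket, and the csi-hostpathplugin-rhvlq pod that provides this driver is only observed starting at 09:03:33 further down, so the loop is self-healing rather than fatal. Below is a minimal sketch of how one could watch for that registration from the API side, assuming client-go and a reachable kubeconfig; the node name crc and the driver name are taken from this log, everything else is illustrative and not the kubelet's own code:

package main

// Sketch: poll the cluster-side CSINode record for node "crc" until the
// kubevirt.io.hostpath-provisioner driver appears. Node and driver names
// come from the log above; the 500ms poll mirrors the kubelet's
// durationBeforeRetry. Assumes KUBECONFIG points at a usable kubeconfig.

import (
	"context"
	"fmt"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
		if err == nil {
			for _, d := range csiNode.Spec.Drivers {
				if d.Name == "kubevirt.io.hostpath-provisioner" {
					fmt.Println("driver registered; mounts should start succeeding")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // same cadence as the kubelet retries above
	}
}

Equivalently, `oc get csinode crc -o yaml` (or kubectl) lists the drivers the kubelet has accepted; the mount/unmount errors should stop within the same retry interval in which the driver appears there.
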
Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.261269 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:32 crc kubenswrapper[4830]: E0131 09:03:32.261792 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:32.761776359 +0000 UTC m=+157.255138801 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.277198 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xwn99" event={"ID":"b9a73d68-2213-477c-a55d-91c86b7ce674","Type":"ContainerStarted","Data":"c9bebf81f74e6ca95e2b5d86ed00845748fa4ce6b8fdccfe0d9b514a74ad2d65"} Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.282526 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" podStartSLOduration=130.282505004 podStartE2EDuration="2m10.282505004s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:32.229765926 +0000 UTC m=+156.723128378" watchObservedRunningTime="2026-01-31 09:03:32.282505004 +0000 UTC m=+156.775867446" Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.292774 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" podStartSLOduration=130.292748762 podStartE2EDuration="2m10.292748762s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:32.283028849 +0000 UTC m=+156.776391291" watchObservedRunningTime="2026-01-31 09:03:32.292748762 +0000 UTC m=+156.786111204" Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.305500 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-66dl8" event={"ID":"dc74377f-6986-4156-9c2b-7a003f07d6ff","Type":"ContainerStarted","Data":"2325efe2d12a50cd38de3263bca44ba166a7c07c01d65a0da7aaab18fd9d7718"} Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.307472 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-skqcc" event={"ID":"ee16c0c8-3b38-4e29-b2dc-633b09648c2f","Type":"ContainerStarted","Data":"35f05ba948d30bffde342227e7ed1293a208a164ac8bd2248c80af1d60327a6d"} Jan 31 09:03:32 crc kubenswrapper[4830]: 
I0131 09:03:32.307505 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-skqcc" event={"ID":"ee16c0c8-3b38-4e29-b2dc-633b09648c2f","Type":"ContainerStarted","Data":"937208af548ee940404965f5914435c6fefc5b59eebdc1f2c58c7ebdead3b6f3"} Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.319573 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-26msj" event={"ID":"7d1b2f20-d886-4b8d-8cb3-fcfc83ac4c12","Type":"ContainerStarted","Data":"f048bc7441c3f316a80a12d87c7a84bac7b5f5bff340ac7ed2eab7ee26328205"} Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.351763 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" event={"ID":"cf057c5a-deef-4c01-bd58-f761ec86e2f4","Type":"ContainerStarted","Data":"4ee9412f00cc39ee85a53e00735952960dbf6826e8a88f21b12231d990adad8a"} Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.352049 4830 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n4rml container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.352136 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" podUID="cf057c5a-deef-4c01-bd58-f761ec86e2f4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.362282 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:32 crc kubenswrapper[4830]: E0131 09:03:32.363596 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:32.863572707 +0000 UTC m=+157.356935149 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.366330 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8wdp6" event={"ID":"90237d04-95e8-4523-bd3d-bc8cedfc0f5f","Type":"ContainerStarted","Data":"7009ab8a2536f1d83b054eb2428945dd272b1b7bca18d4cfc912105950d32b6a"} Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.394250 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" podStartSLOduration=130.394228941 podStartE2EDuration="2m10.394228941s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:32.335261852 +0000 UTC m=+156.828624294" watchObservedRunningTime="2026-01-31 09:03:32.394228941 +0000 UTC m=+156.887591383" Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.394920 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" event={"ID":"e80e8b17-711d-46d8-a240-4fa52e093545","Type":"ContainerStarted","Data":"4bb4b393d788389636a749f9855b6b5af59603d34816a47e960c64dbe48662c7"} Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.396554 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.402977 4830 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lp7ks container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.403061 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" podUID="e80e8b17-711d-46d8-a240-4fa52e093545" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.410841 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vjnc8" event={"ID":"2ab6816a-65ef-41b2-b416-60491d4423d9","Type":"ContainerStarted","Data":"bccb00cbb119f87a133b9c7d789282c581149845d510ffa1de0ca1472d85bfd2"} Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.438558 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" podStartSLOduration=130.438529762 podStartE2EDuration="2m10.438529762s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:32.393817639 
+0000 UTC m=+156.887180101" watchObservedRunningTime="2026-01-31 09:03:32.438529762 +0000 UTC m=+156.931892204" Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.464748 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x8zjt" event={"ID":"e33286d6-0e3e-47d4-bf68-11b642927bee","Type":"ContainerStarted","Data":"d3b4ff43e121648a770d4b85d4820c894bd01a3594130d3dd2ee98adbae1380c"} Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.470784 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bs7f7" event={"ID":"7db6391f-ccc4-41d2-82ff-aa58d3297625","Type":"ContainerStarted","Data":"a7734edb4a7dbba8591e78d32682b13988f40feb5fae0f09f7d79f018e1353ed"} Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.475918 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f" event={"ID":"36a7a51a-2662-4f3b-aa1d-d674cf676b9d","Type":"ContainerStarted","Data":"b90565efd448c3a205961e4d926bf471147c2a338b39eef1471085e2888f47a0"} Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.481645 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.486957 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-26msj" podStartSLOduration=130.486930833 podStartE2EDuration="2m10.486930833s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:32.440002515 +0000 UTC m=+156.933364957" watchObservedRunningTime="2026-01-31 09:03:32.486930833 +0000 UTC m=+156.980293275" Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.488424 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-66dl8" podStartSLOduration=130.488415797 podStartE2EDuration="2m10.488415797s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:32.485420709 +0000 UTC m=+156.978783161" watchObservedRunningTime="2026-01-31 09:03:32.488415797 +0000 UTC m=+156.981778239" Jan 31 09:03:32 crc kubenswrapper[4830]: E0131 09:03:32.499703 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:32.999671585 +0000 UTC m=+157.493034027 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.511269 4830 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fnk7f container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.511379 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f" podUID="36a7a51a-2662-4f3b-aa1d-d674cf676b9d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.518572 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.519155 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.524549 4830 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-pwk76 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.22:8443/livez\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.524653 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" podUID="c61fa19c-7742-4ab1-b3ca-9607723fe94d" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.22:8443/livez\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.557566 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9d827" event={"ID":"23da5bb2-d0a9-4b0d-8755-ea8e58234b18","Type":"ContainerStarted","Data":"fac5c5ed385cd6a8edd09b68c7af39517ab5847f11c9a98ecc59b002a258f819"} Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.573214 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-skqcc" podStartSLOduration=130.573181698 podStartE2EDuration="2m10.573181698s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:32.562364582 +0000 UTC m=+157.055727034" watchObservedRunningTime="2026-01-31 09:03:32.573181698 +0000 UTC m=+157.066544140" Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.592431 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:32 crc kubenswrapper[4830]: E0131 09:03:32.594826 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:33.094785088 +0000 UTC m=+157.588147530 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.612932 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-8nn2k" event={"ID":"bc99ac19-2796-495d-82d4-6eda76879f40","Type":"ContainerStarted","Data":"9ad85c840ae0a24ccea015fb46faaf041d996b75ddee5998362d97ad3191dbde"} Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.612988 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-8nn2k" event={"ID":"bc99ac19-2796-495d-82d4-6eda76879f40","Type":"ContainerStarted","Data":"42d8c67b08125b7ab5b91a95710ccbf8ff2e05fc0a698ad8b842b3c8c758face"} Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.619203 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.619314 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.638949 4830 patch_prober.go:28] interesting pod/console-operator-58897d9998-pkx9p container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.639082 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-pkx9p" podUID="691a8aff-6fcd-400a-ace9-fb3fa8778206" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.701436 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:32 crc kubenswrapper[4830]: E0131 09:03:32.722959 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:33.222932183 +0000 UTC m=+157.716294625 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.752002 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8wdp6" podStartSLOduration=130.75197736 podStartE2EDuration="2m10.75197736s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:32.642068966 +0000 UTC m=+157.135431408" watchObservedRunningTime="2026-01-31 09:03:32.75197736 +0000 UTC m=+157.245339802" Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.810484 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.810540 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 09:03:32 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Jan 31 09:03:32 crc kubenswrapper[4830]: [+]process-running ok Jan 31 09:03:32 crc kubenswrapper[4830]: healthz check failed Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.811244 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 09:03:32 crc kubenswrapper[4830]: E0131 09:03:32.810812 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:33.310693602 +0000 UTC m=+157.804056044 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.811653 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:32 crc kubenswrapper[4830]: E0131 09:03:32.817919 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:33.317897192 +0000 UTC m=+157.811259634 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.834848 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vjnc8" podStartSLOduration=130.834824275 podStartE2EDuration="2m10.834824275s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:32.773099116 +0000 UTC m=+157.266461578" watchObservedRunningTime="2026-01-31 09:03:32.834824275 +0000 UTC m=+157.328186717" Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.853667 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x8zjt" podStartSLOduration=130.853635164 podStartE2EDuration="2m10.853635164s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:32.834480825 +0000 UTC m=+157.327843277" watchObservedRunningTime="2026-01-31 09:03:32.853635164 +0000 UTC m=+157.346997606" Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.915355 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:32 crc kubenswrapper[4830]: E0131 09:03:32.915838 4830 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:33.415815336 +0000 UTC m=+157.909177778 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.935641 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bs7f7" podStartSLOduration=130.935617394 podStartE2EDuration="2m10.935617394s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:32.894203556 +0000 UTC m=+157.387565998" watchObservedRunningTime="2026-01-31 09:03:32.935617394 +0000 UTC m=+157.428979836" Jan 31 09:03:32 crc kubenswrapper[4830]: I0131 09:03:32.978916 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9d827" podStartSLOduration=130.978891955 podStartE2EDuration="2m10.978891955s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:32.937779967 +0000 UTC m=+157.431142409" watchObservedRunningTime="2026-01-31 09:03:32.978891955 +0000 UTC m=+157.472254397" Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.005997 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.021675 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:33 crc kubenswrapper[4830]: E0131 09:03:33.022096 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:33.522080864 +0000 UTC m=+158.015443306 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.084274 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" podStartSLOduration=131.084254377 podStartE2EDuration="2m11.084254377s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:32.980749349 +0000 UTC m=+157.474111791" watchObservedRunningTime="2026-01-31 09:03:33.084254377 +0000 UTC m=+157.577616819" Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.123499 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:33 crc kubenswrapper[4830]: E0131 09:03:33.127005 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:33.626968662 +0000 UTC m=+158.120331104 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.131833 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:33 crc kubenswrapper[4830]: E0131 09:03:33.132408 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:33.63239335 +0000 UTC m=+158.125755792 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.166978 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-8nn2k" podStartSLOduration=131.166958978 podStartE2EDuration="2m11.166958978s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:33.086833182 +0000 UTC m=+157.580195624" watchObservedRunningTime="2026-01-31 09:03:33.166958978 +0000 UTC m=+157.660321420" Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.240495 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:33 crc kubenswrapper[4830]: E0131 09:03:33.241261 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:33.741238673 +0000 UTC m=+158.234601115 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.342992 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:33 crc kubenswrapper[4830]: E0131 09:03:33.343359 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:33.84334546 +0000 UTC m=+158.336707902 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.445378 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:33 crc kubenswrapper[4830]: E0131 09:03:33.445632 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:33.945594731 +0000 UTC m=+158.438957173 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.445708 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:33 crc kubenswrapper[4830]: E0131 09:03:33.446168 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:33.946159077 +0000 UTC m=+158.439521519 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.547077 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:33 crc kubenswrapper[4830]: E0131 09:03:33.547507 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:34.047483251 +0000 UTC m=+158.540845693 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.620268 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xwn99" event={"ID":"b9a73d68-2213-477c-a55d-91c86b7ce674","Type":"ContainerStarted","Data":"2ddaa85b0a8e05d2e4f3df6c8d5e2b10f28d320789435ba85142b0201990f3cc"} Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.622599 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2klp9" event={"ID":"21ee0584-e383-47cf-af98-48c65d9fba74","Type":"ContainerStarted","Data":"93ccda21e3b00432e4c5a9ba0199d8bb0f18fe9f9702491e455262a265c1cb8f"} Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.623915 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rhvlq" event={"ID":"30c7c034-9492-4051-9cc9-235a6d87bd03","Type":"ContainerStarted","Data":"861e5fb60b79def2c1dfcc957378f0d56b4c59a0c5eb05ef901be4e32e8ae112"} Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.624993 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-whjm4" event={"ID":"48cbaebb-5495-4965-a7fd-207d2d7ef0fc","Type":"ContainerStarted","Data":"4d3e8915861f88d4710f57fc17293fcb57f6892c3f4da31bad6a5210537f6d14"} Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.625018 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-whjm4" event={"ID":"48cbaebb-5495-4965-a7fd-207d2d7ef0fc","Type":"ContainerStarted","Data":"c7394f0a8b03361f2b2bf93384acddc765da037391482119e8d636e2502ec08e"} Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.625390 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-whjm4" Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.627887 4830 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-htl5l" event={"ID":"2a94efc3-19bc-47ce-b48a-4f4b3351d955","Type":"ContainerStarted","Data":"5b3dffbf1f639360b64c75218da065285e15da02656f9a0935eae2060353abc4"} Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.627912 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-htl5l" event={"ID":"2a94efc3-19bc-47ce-b48a-4f4b3351d955","Type":"ContainerStarted","Data":"bd993b46bbcad25fd809e6aeb623b2ee0f9179f56e3a4fd46f0302cd74d73a58"} Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.632004 4830 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fnk7f container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.632046 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f" podUID="36a7a51a-2662-4f3b-aa1d-d674cf676b9d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.634152 4830 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lb8hp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.634259 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" podUID="13f1c33b-cede-4fb1-9651-15d0dcd36173" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.647089 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.649136 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:33 crc kubenswrapper[4830]: E0131 09:03:33.649632 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:34.149616199 +0000 UTC m=+158.642978641 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.682641 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-xwn99" podStartSLOduration=131.682615411 podStartE2EDuration="2m11.682615411s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:33.679483199 +0000 UTC m=+158.172845641" watchObservedRunningTime="2026-01-31 09:03:33.682615411 +0000 UTC m=+158.175977853"
Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.750851 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:33 crc kubenswrapper[4830]: E0131 09:03:33.751102 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:34.251049896 +0000 UTC m=+158.744412338 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.752285 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:33 crc kubenswrapper[4830]: E0131 09:03:33.764764 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:34.264735695 +0000 UTC m=+158.758098137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.800205 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 09:03:33 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld
Jan 31 09:03:33 crc kubenswrapper[4830]: [+]process-running ok
Jan 31 09:03:33 crc kubenswrapper[4830]: healthz check failed
Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.800292 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.856540 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:33 crc kubenswrapper[4830]: E0131 09:03:33.857035 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:34.357018085 +0000 UTC m=+158.850380527 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.885528 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-htl5l" podStartSLOduration=131.885509706 podStartE2EDuration="2m11.885509706s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:33.82047501 +0000 UTC m=+158.313837452" watchObservedRunningTime="2026-01-31 09:03:33.885509706 +0000 UTC m=+158.378872138"
Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.925939 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-2klp9" podStartSLOduration=131.925918474 podStartE2EDuration="2m11.925918474s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:33.887258336 +0000 UTC m=+158.380620768" watchObservedRunningTime="2026-01-31 09:03:33.925918474 +0000 UTC m=+158.419280916"
Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.958931 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:33 crc kubenswrapper[4830]: E0131 09:03:33.964531 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:34.464511739 +0000 UTC m=+158.957874181 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:33 crc kubenswrapper[4830]: I0131 09:03:33.994050 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-whjm4" podStartSLOduration=8.994028189 podStartE2EDuration="8.994028189s" podCreationTimestamp="2026-01-31 09:03:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:33.990709012 +0000 UTC m=+158.484071454" watchObservedRunningTime="2026-01-31 09:03:33.994028189 +0000 UTC m=+158.487390631"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.063473 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:34 crc kubenswrapper[4830]: E0131 09:03:34.063906 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:34.563860925 +0000 UTC m=+159.057223377 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.165784 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:34 crc kubenswrapper[4830]: E0131 09:03:34.166282 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:34.66626043 +0000 UTC m=+159.159622872 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.266475 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:34 crc kubenswrapper[4830]: E0131 09:03:34.266679 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:34.766644487 +0000 UTC m=+159.260006929 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.267349 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:34 crc kubenswrapper[4830]: E0131 09:03:34.267754 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:34.767744439 +0000 UTC m=+159.261106881 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.368205 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:34 crc kubenswrapper[4830]: E0131 09:03:34.368476 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:34.868432944 +0000 UTC m=+159.361795386 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.368626 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:34 crc kubenswrapper[4830]: E0131 09:03:34.369086 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:34.869070123 +0000 UTC m=+159.362432565 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.377294 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q8t9t"]
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.378527 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q8t9t"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.381548 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.410253 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q8t9t"]
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.470074 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:34 crc kubenswrapper[4830]: E0131 09:03:34.470445 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:34.970428117 +0000 UTC m=+159.463790549 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.486016 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-pkx9p"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.560067 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2ssr8"]
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.561307 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ssr8"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.571451 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhrcb\" (UniqueName: \"kubernetes.io/projected/db7a137a-b7f9-4446-85f6-ea0d2f0caedd-kube-api-access-bhrcb\") pod \"community-operators-q8t9t\" (UID: \"db7a137a-b7f9-4446-85f6-ea0d2f0caedd\") " pod="openshift-marketplace/community-operators-q8t9t"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.571500 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.571582 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db7a137a-b7f9-4446-85f6-ea0d2f0caedd-catalog-content\") pod \"community-operators-q8t9t\" (UID: \"db7a137a-b7f9-4446-85f6-ea0d2f0caedd\") " pod="openshift-marketplace/community-operators-q8t9t"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.572014 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.572066 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db7a137a-b7f9-4446-85f6-ea0d2f0caedd-utilities\") pod \"community-operators-q8t9t\" (UID: \"db7a137a-b7f9-4446-85f6-ea0d2f0caedd\") " pod="openshift-marketplace/community-operators-q8t9t"
Jan 31 09:03:34 crc kubenswrapper[4830]: E0131 09:03:34.572445 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:35.072430231 +0000 UTC m=+159.565792673 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.592285 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2ssr8"]
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.633520 4830 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lp7ks container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.633603 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" podUID="e80e8b17-711d-46d8-a240-4fa52e093545" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.673036 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.673307 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e020928-b063-4d3c-8992-e712fe3d1b1d-utilities\") pod \"certified-operators-2ssr8\" (UID: \"3e020928-b063-4d3c-8992-e712fe3d1b1d\") " pod="openshift-marketplace/certified-operators-2ssr8"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.673349 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db7a137a-b7f9-4446-85f6-ea0d2f0caedd-utilities\") pod \"community-operators-q8t9t\" (UID: \"db7a137a-b7f9-4446-85f6-ea0d2f0caedd\") " pod="openshift-marketplace/community-operators-q8t9t"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.673396 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zts58\" (UniqueName: \"kubernetes.io/projected/3e020928-b063-4d3c-8992-e712fe3d1b1d-kube-api-access-zts58\") pod \"certified-operators-2ssr8\" (UID: \"3e020928-b063-4d3c-8992-e712fe3d1b1d\") " pod="openshift-marketplace/certified-operators-2ssr8"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.673435 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhrcb\" (UniqueName: \"kubernetes.io/projected/db7a137a-b7f9-4446-85f6-ea0d2f0caedd-kube-api-access-bhrcb\") pod \"community-operators-q8t9t\" (UID: \"db7a137a-b7f9-4446-85f6-ea0d2f0caedd\") " pod="openshift-marketplace/community-operators-q8t9t"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.673498 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db7a137a-b7f9-4446-85f6-ea0d2f0caedd-catalog-content\") pod \"community-operators-q8t9t\" (UID: \"db7a137a-b7f9-4446-85f6-ea0d2f0caedd\") " pod="openshift-marketplace/community-operators-q8t9t"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.673525 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e020928-b063-4d3c-8992-e712fe3d1b1d-catalog-content\") pod \"certified-operators-2ssr8\" (UID: \"3e020928-b063-4d3c-8992-e712fe3d1b1d\") " pod="openshift-marketplace/certified-operators-2ssr8"
Jan 31 09:03:34 crc kubenswrapper[4830]: E0131 09:03:34.673652 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:35.173628301 +0000 UTC m=+159.666990743 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.674192 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db7a137a-b7f9-4446-85f6-ea0d2f0caedd-utilities\") pod \"community-operators-q8t9t\" (UID: \"db7a137a-b7f9-4446-85f6-ea0d2f0caedd\") " pod="openshift-marketplace/community-operators-q8t9t"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.675096 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db7a137a-b7f9-4446-85f6-ea0d2f0caedd-catalog-content\") pod \"community-operators-q8t9t\" (UID: \"db7a137a-b7f9-4446-85f6-ea0d2f0caedd\") " pod="openshift-marketplace/community-operators-q8t9t"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.679340 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.683736 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.706241 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhrcb\" (UniqueName: \"kubernetes.io/projected/db7a137a-b7f9-4446-85f6-ea0d2f0caedd-kube-api-access-bhrcb\") pod \"community-operators-q8t9t\" (UID: \"db7a137a-b7f9-4446-85f6-ea0d2f0caedd\") " pod="openshift-marketplace/community-operators-q8t9t"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.762763 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dcmsg"]
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.763818 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dcmsg"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.775048 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zts58\" (UniqueName: \"kubernetes.io/projected/3e020928-b063-4d3c-8992-e712fe3d1b1d-kube-api-access-zts58\") pod \"certified-operators-2ssr8\" (UID: \"3e020928-b063-4d3c-8992-e712fe3d1b1d\") " pod="openshift-marketplace/certified-operators-2ssr8"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.775260 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e020928-b063-4d3c-8992-e712fe3d1b1d-catalog-content\") pod \"certified-operators-2ssr8\" (UID: \"3e020928-b063-4d3c-8992-e712fe3d1b1d\") " pod="openshift-marketplace/certified-operators-2ssr8"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.775427 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.775516 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e020928-b063-4d3c-8992-e712fe3d1b1d-utilities\") pod \"certified-operators-2ssr8\" (UID: \"3e020928-b063-4d3c-8992-e712fe3d1b1d\") " pod="openshift-marketplace/certified-operators-2ssr8"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.777411 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e020928-b063-4d3c-8992-e712fe3d1b1d-utilities\") pod \"certified-operators-2ssr8\" (UID: \"3e020928-b063-4d3c-8992-e712fe3d1b1d\") " pod="openshift-marketplace/certified-operators-2ssr8"
Jan 31 09:03:34 crc kubenswrapper[4830]: E0131 09:03:34.787979 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:35.287951694 +0000 UTC m=+159.781314136 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.797696 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e020928-b063-4d3c-8992-e712fe3d1b1d-catalog-content\") pod \"certified-operators-2ssr8\" (UID: \"3e020928-b063-4d3c-8992-e712fe3d1b1d\") " pod="openshift-marketplace/certified-operators-2ssr8"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.804276 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dcmsg"]
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.808416 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 09:03:34 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld
Jan 31 09:03:34 crc kubenswrapper[4830]: [+]process-running ok
Jan 31 09:03:34 crc kubenswrapper[4830]: healthz check failed
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.838848 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.871693 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zts58\" (UniqueName: \"kubernetes.io/projected/3e020928-b063-4d3c-8992-e712fe3d1b1d-kube-api-access-zts58\") pod \"certified-operators-2ssr8\" (UID: \"3e020928-b063-4d3c-8992-e712fe3d1b1d\") " pod="openshift-marketplace/certified-operators-2ssr8"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.878444 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.878746 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33-utilities\") pod \"community-operators-dcmsg\" (UID: \"a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33\") " pod="openshift-marketplace/community-operators-dcmsg"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.878834 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33-catalog-content\") pod \"community-operators-dcmsg\" (UID: \"a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33\") " pod="openshift-marketplace/community-operators-dcmsg"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.878872 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8s8x\" (UniqueName: \"kubernetes.io/projected/a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33-kube-api-access-j8s8x\") pod \"community-operators-dcmsg\" (UID: \"a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33\") " pod="openshift-marketplace/community-operators-dcmsg"
Jan 31 09:03:34 crc kubenswrapper[4830]: E0131 09:03:34.878994 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:35.378975468 +0000 UTC m=+159.872337910 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.884189 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ssr8"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.972670 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bqmr6"]
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.974052 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bqmr6"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.986808 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33-catalog-content\") pod \"community-operators-dcmsg\" (UID: \"a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33\") " pod="openshift-marketplace/community-operators-dcmsg"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.986866 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8s8x\" (UniqueName: \"kubernetes.io/projected/a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33-kube-api-access-j8s8x\") pod \"community-operators-dcmsg\" (UID: \"a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33\") " pod="openshift-marketplace/community-operators-dcmsg"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.986899 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33-utilities\") pod \"community-operators-dcmsg\" (UID: \"a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33\") " pod="openshift-marketplace/community-operators-dcmsg"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.986929 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.987491 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33-catalog-content\") pod \"community-operators-dcmsg\" (UID: \"a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33\") " pod="openshift-marketplace/community-operators-dcmsg"
Jan 31 09:03:34 crc kubenswrapper[4830]: E0131 09:03:34.987859 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:35.487818841 +0000 UTC m=+159.981181283 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:34 crc kubenswrapper[4830]: I0131 09:03:34.987950 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33-utilities\") pod \"community-operators-dcmsg\" (UID: \"a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33\") " pod="openshift-marketplace/community-operators-dcmsg"
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.004136 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q8t9t"
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.004862 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bqmr6"]
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.046244 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks"
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.048131 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8s8x\" (UniqueName: \"kubernetes.io/projected/a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33-kube-api-access-j8s8x\") pod \"community-operators-dcmsg\" (UID: \"a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33\") " pod="openshift-marketplace/community-operators-dcmsg"
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.088155 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.088425 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea666a92-d7aa-4e9b-8c54-88ad8ae517aa-catalog-content\") pod \"certified-operators-bqmr6\" (UID: \"ea666a92-d7aa-4e9b-8c54-88ad8ae517aa\") " pod="openshift-marketplace/certified-operators-bqmr6"
Jan 31 09:03:35 crc kubenswrapper[4830]: E0131 09:03:35.088586 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:35.588562198 +0000 UTC m=+160.081924640 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.088477 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7tpr\" (UniqueName: \"kubernetes.io/projected/ea666a92-d7aa-4e9b-8c54-88ad8ae517aa-kube-api-access-m7tpr\") pod \"certified-operators-bqmr6\" (UID: \"ea666a92-d7aa-4e9b-8c54-88ad8ae517aa\") " pod="openshift-marketplace/certified-operators-bqmr6"
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.088645 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea666a92-d7aa-4e9b-8c54-88ad8ae517aa-utilities\") pod \"certified-operators-bqmr6\" (UID: \"ea666a92-d7aa-4e9b-8c54-88ad8ae517aa\") " pod="openshift-marketplace/certified-operators-bqmr6"
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.109277 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dcmsg"
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.194425 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea666a92-d7aa-4e9b-8c54-88ad8ae517aa-catalog-content\") pod \"certified-operators-bqmr6\" (UID: \"ea666a92-d7aa-4e9b-8c54-88ad8ae517aa\") " pod="openshift-marketplace/certified-operators-bqmr6"
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.194505 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.194540 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7tpr\" (UniqueName: \"kubernetes.io/projected/ea666a92-d7aa-4e9b-8c54-88ad8ae517aa-kube-api-access-m7tpr\") pod \"certified-operators-bqmr6\" (UID: \"ea666a92-d7aa-4e9b-8c54-88ad8ae517aa\") " pod="openshift-marketplace/certified-operators-bqmr6"
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.194564 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea666a92-d7aa-4e9b-8c54-88ad8ae517aa-utilities\") pod \"certified-operators-bqmr6\" (UID: \"ea666a92-d7aa-4e9b-8c54-88ad8ae517aa\") " pod="openshift-marketplace/certified-operators-bqmr6"
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.195136 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea666a92-d7aa-4e9b-8c54-88ad8ae517aa-catalog-content\") pod \"certified-operators-bqmr6\" (UID: \"ea666a92-d7aa-4e9b-8c54-88ad8ae517aa\") " pod="openshift-marketplace/certified-operators-bqmr6"
Jan 31 09:03:35 crc kubenswrapper[4830]: E0131 09:03:35.195229 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:35.695205986 +0000 UTC m=+160.188568428 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.195704 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea666a92-d7aa-4e9b-8c54-88ad8ae517aa-utilities\") pod \"certified-operators-bqmr6\" (UID: \"ea666a92-d7aa-4e9b-8c54-88ad8ae517aa\") " pod="openshift-marketplace/certified-operators-bqmr6"
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.239625 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7tpr\" (UniqueName: \"kubernetes.io/projected/ea666a92-d7aa-4e9b-8c54-88ad8ae517aa-kube-api-access-m7tpr\") pod \"certified-operators-bqmr6\" (UID: \"ea666a92-d7aa-4e9b-8c54-88ad8ae517aa\") " pod="openshift-marketplace/certified-operators-bqmr6"
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.252384 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.253228 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.271370 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.271610 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.296638 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:35 crc kubenswrapper[4830]: E0131 09:03:35.297094 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:35.797073575 +0000 UTC m=+160.290436017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.308702 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.314198 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bqmr6"
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.399332 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b19967e-b79f-42d5-b37b-2711aa675ac2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8b19967e-b79f-42d5-b37b-2711aa675ac2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.399403 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b19967e-b79f-42d5-b37b-2711aa675ac2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8b19967e-b79f-42d5-b37b-2711aa675ac2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.399481 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:35 crc kubenswrapper[4830]: E0131 09:03:35.399907 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:35.899890713 +0000 UTC m=+160.393253155 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.501978 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.503717 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b19967e-b79f-42d5-b37b-2711aa675ac2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8b19967e-b79f-42d5-b37b-2711aa675ac2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.503866 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b19967e-b79f-42d5-b37b-2711aa675ac2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8b19967e-b79f-42d5-b37b-2711aa675ac2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 31 09:03:35 crc kubenswrapper[4830]: E0131 09:03:35.504821 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:36.004803361 +0000 UTC m=+160.498165803 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.504862 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b19967e-b79f-42d5-b37b-2711aa675ac2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8b19967e-b79f-42d5-b37b-2711aa675ac2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.556908 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b19967e-b79f-42d5-b37b-2711aa675ac2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8b19967e-b79f-42d5-b37b-2711aa675ac2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.597226 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.605510 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:35 crc kubenswrapper[4830]: E0131 09:03:35.605961 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:36.10594491 +0000 UTC m=+160.599307352 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.693044 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rhvlq" event={"ID":"30c7c034-9492-4051-9cc9-235a6d87bd03","Type":"ContainerStarted","Data":"5472b2a90ca6ed6ea47bdbf8d850c28e38abfdda34ecf7824a580b2a723b83f6"}
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.710173 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:35 crc kubenswrapper[4830]: E0131 09:03:35.710604 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:36.21058443 +0000 UTC m=+160.703946872 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.815046 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:35 crc kubenswrapper[4830]: E0131 09:03:35.815190 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:36.315169879 +0000 UTC m=+160.808532321 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.821968 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 09:03:35 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld
Jan 31 09:03:35 crc kubenswrapper[4830]: [+]process-running ok
Jan 31 09:03:35 crc kubenswrapper[4830]: healthz check failed
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.822035 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.916710 4830 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.916773 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:35 crc kubenswrapper[4830]: E0131 09:03:35.916860 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:36.416837613 +0000 UTC m=+160.910200055 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.917908 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:35 crc kubenswrapper[4830]: E0131 09:03:35.918284 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:36.418269045 +0000 UTC m=+160.911631487 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:35 crc kubenswrapper[4830]: I0131 09:03:35.969383 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2ssr8"]
Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.016145 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bqmr6"]
Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.019543 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:36 crc kubenswrapper[4830]: E0131 09:03:36.019982 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:36.519961709 +0000 UTC m=+161.013324151 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.106479 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.126129 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:36 crc kubenswrapper[4830]: E0131 09:03:36.126831 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:36.626809134 +0000 UTC m=+161.120171576 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.154169 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q8t9t"]
Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.156193 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dcmsg"]
Jan 31 09:03:36 crc kubenswrapper[4830]: W0131 09:03:36.187971 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1aef8b4_46c8_4ca1_87d9_0bdc6a53ca33.slice/crio-03b507624453911af4ea42a682692fe3a6f9d62abbbfc9481bb1c38237367306 WatchSource:0}: Error finding container 03b507624453911af4ea42a682692fe3a6f9d62abbbfc9481bb1c38237367306: Status 404 returned error can't find the container with id 03b507624453911af4ea42a682692fe3a6f9d62abbbfc9481bb1c38237367306
Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.229809 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:36 crc kubenswrapper[4830]: E0131 09:03:36.230099 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:36.730007333 +0000 UTC m=+161.223369785 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.230341 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7"
Jan 31 09:03:36 crc kubenswrapper[4830]: E0131 09:03:36.230814 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:36.730803886 +0000 UTC m=+161.224166328 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.334044 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:36 crc kubenswrapper[4830]: E0131 09:03:36.334392 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:36.834375615 +0000 UTC m=+161.327738057 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.361640 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sxn8r"]
Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.366323 4830 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxn8r" Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.368843 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.372491 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxn8r"] Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.436229 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:36 crc kubenswrapper[4830]: E0131 09:03:36.437271 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:36.936897814 +0000 UTC m=+161.430260266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m8b7" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.453932 4830 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-31T09:03:35.917115821Z","Handler":null,"Name":""} Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.467944 4830 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.467998 4830 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.537625 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.537858 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhc4h\" (UniqueName: \"kubernetes.io/projected/3868f465-887b-4580-8c17-293665785251-kube-api-access-rhc4h\") pod \"redhat-marketplace-sxn8r\" (UID: \"3868f465-887b-4580-8c17-293665785251\") " pod="openshift-marketplace/redhat-marketplace-sxn8r" Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.537926 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3868f465-887b-4580-8c17-293665785251-utilities\") pod \"redhat-marketplace-sxn8r\" (UID: \"3868f465-887b-4580-8c17-293665785251\") " pod="openshift-marketplace/redhat-marketplace-sxn8r" Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.537957 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3868f465-887b-4580-8c17-293665785251-catalog-content\") pod \"redhat-marketplace-sxn8r\" (UID: \"3868f465-887b-4580-8c17-293665785251\") " pod="openshift-marketplace/redhat-marketplace-sxn8r" Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.544215 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.639459 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhc4h\" (UniqueName: \"kubernetes.io/projected/3868f465-887b-4580-8c17-293665785251-kube-api-access-rhc4h\") pod \"redhat-marketplace-sxn8r\" (UID: \"3868f465-887b-4580-8c17-293665785251\") " pod="openshift-marketplace/redhat-marketplace-sxn8r" Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.639575 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.639640 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3868f465-887b-4580-8c17-293665785251-utilities\") pod \"redhat-marketplace-sxn8r\" (UID: \"3868f465-887b-4580-8c17-293665785251\") " pod="openshift-marketplace/redhat-marketplace-sxn8r" Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.639675 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3868f465-887b-4580-8c17-293665785251-catalog-content\") pod \"redhat-marketplace-sxn8r\" (UID: \"3868f465-887b-4580-8c17-293665785251\") " pod="openshift-marketplace/redhat-marketplace-sxn8r" Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.640204 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3868f465-887b-4580-8c17-293665785251-utilities\") pod \"redhat-marketplace-sxn8r\" (UID: \"3868f465-887b-4580-8c17-293665785251\") " pod="openshift-marketplace/redhat-marketplace-sxn8r" Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.640340 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3868f465-887b-4580-8c17-293665785251-catalog-content\") pod \"redhat-marketplace-sxn8r\" (UID: \"3868f465-887b-4580-8c17-293665785251\") " pod="openshift-marketplace/redhat-marketplace-sxn8r" Jan 31 09:03:36 crc 
kubenswrapper[4830]: I0131 09:03:36.647317 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.647374 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.664199 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhc4h\" (UniqueName: \"kubernetes.io/projected/3868f465-887b-4580-8c17-293665785251-kube-api-access-rhc4h\") pod \"redhat-marketplace-sxn8r\" (UID: \"3868f465-887b-4580-8c17-293665785251\") " pod="openshift-marketplace/redhat-marketplace-sxn8r" Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.684938 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m8b7\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.701331 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8b19967e-b79f-42d5-b37b-2711aa675ac2","Type":"ContainerStarted","Data":"beca7de0fdbc170c31ba5a2976b46fc31c78077e99de730ce2500d5a147d8b1b"} Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.703712 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rhvlq" event={"ID":"30c7c034-9492-4051-9cc9-235a6d87bd03","Type":"ContainerStarted","Data":"c5b1f67903f877dc31711b8c45d9e372604b2e07fae7dcad7526149c456df699"} Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.703764 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rhvlq" event={"ID":"30c7c034-9492-4051-9cc9-235a6d87bd03","Type":"ContainerStarted","Data":"2f5371c59312c9a6b0e6f045a5d21ac2ab8b154ee74f8ab1cb8a9d25b788eee3"} Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.710705 4830 generic.go:334] "Generic (PLEG): container finished" podID="db7a137a-b7f9-4446-85f6-ea0d2f0caedd" containerID="b1da15b1cfd5f09f4f82e796703792e2bfc71f61de85bcc01295357613e9d7f0" exitCode=0 Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.710831 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8t9t" event={"ID":"db7a137a-b7f9-4446-85f6-ea0d2f0caedd","Type":"ContainerDied","Data":"b1da15b1cfd5f09f4f82e796703792e2bfc71f61de85bcc01295357613e9d7f0"} Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.710874 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8t9t" event={"ID":"db7a137a-b7f9-4446-85f6-ea0d2f0caedd","Type":"ContainerStarted","Data":"3408accb1abcb9f45ad912603142df63d056931c7e91eaaa009bb4bd10e4c29d"} Jan 31 09:03:36 crc 
kubenswrapper[4830]: I0131 09:03:36.712847 4830 generic.go:334] "Generic (PLEG): container finished" podID="ea666a92-d7aa-4e9b-8c54-88ad8ae517aa" containerID="e1c344810985a78a6f2463f53077155a5f741fa2978808c0cb856ec7bb4cd54c" exitCode=0 Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.712940 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bqmr6" event={"ID":"ea666a92-d7aa-4e9b-8c54-88ad8ae517aa","Type":"ContainerDied","Data":"e1c344810985a78a6f2463f53077155a5f741fa2978808c0cb856ec7bb4cd54c"} Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.712971 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bqmr6" event={"ID":"ea666a92-d7aa-4e9b-8c54-88ad8ae517aa","Type":"ContainerStarted","Data":"7c980da832516b94ea92d1a2e598952b5112b8f3ebaa11375103b5cbe79592ba"} Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.713932 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.719269 4830 generic.go:334] "Generic (PLEG): container finished" podID="3e020928-b063-4d3c-8992-e712fe3d1b1d" containerID="87e5360931c06df89cc5a321b5a6e533de79e0a177545667824500166052980a" exitCode=0 Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.719349 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ssr8" event={"ID":"3e020928-b063-4d3c-8992-e712fe3d1b1d","Type":"ContainerDied","Data":"87e5360931c06df89cc5a321b5a6e533de79e0a177545667824500166052980a"} Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.719377 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ssr8" event={"ID":"3e020928-b063-4d3c-8992-e712fe3d1b1d","Type":"ContainerStarted","Data":"6570560ae4c56864c000a86f57a5dbed953675349c6784122415e645d9d9067d"} Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.726786 4830 generic.go:334] "Generic (PLEG): container finished" podID="dc74377f-6986-4156-9c2b-7a003f07d6ff" containerID="2325efe2d12a50cd38de3263bca44ba166a7c07c01d65a0da7aaab18fd9d7718" exitCode=0 Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.726851 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-66dl8" event={"ID":"dc74377f-6986-4156-9c2b-7a003f07d6ff","Type":"ContainerDied","Data":"2325efe2d12a50cd38de3263bca44ba166a7c07c01d65a0da7aaab18fd9d7718"} Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.730294 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-rhvlq" podStartSLOduration=11.730278717000001 podStartE2EDuration="11.730278717s" podCreationTimestamp="2026-01-31 09:03:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:36.727541027 +0000 UTC m=+161.220903489" watchObservedRunningTime="2026-01-31 09:03:36.730278717 +0000 UTC m=+161.223641159" Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.737988 4830 generic.go:334] "Generic (PLEG): container finished" podID="a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33" containerID="0c2a8b7249d284d0603796fd9cb83b9256f84058c98bacc657c84b6ea6f3eb8d" exitCode=0 Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.738500 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dcmsg" 
event={"ID":"a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33","Type":"ContainerDied","Data":"0c2a8b7249d284d0603796fd9cb83b9256f84058c98bacc657c84b6ea6f3eb8d"} Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.738535 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dcmsg" event={"ID":"a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33","Type":"ContainerStarted","Data":"03b507624453911af4ea42a682692fe3a6f9d62abbbfc9481bb1c38237367306"} Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.745798 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qgnfw"] Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.747000 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qgnfw" Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.766415 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qgnfw"] Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.776778 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.800563 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 09:03:36 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Jan 31 09:03:36 crc kubenswrapper[4830]: [+]process-running ok Jan 31 09:03:36 crc kubenswrapper[4830]: healthz check failed Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.800663 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.876015 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxn8r" Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.942774 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3477b1ed-ccf0-4f60-9505-ff0e417750af-catalog-content\") pod \"redhat-marketplace-qgnfw\" (UID: \"3477b1ed-ccf0-4f60-9505-ff0e417750af\") " pod="openshift-marketplace/redhat-marketplace-qgnfw" Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.942890 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lqw5\" (UniqueName: \"kubernetes.io/projected/3477b1ed-ccf0-4f60-9505-ff0e417750af-kube-api-access-8lqw5\") pod \"redhat-marketplace-qgnfw\" (UID: \"3477b1ed-ccf0-4f60-9505-ff0e417750af\") " pod="openshift-marketplace/redhat-marketplace-qgnfw" Jan 31 09:03:36 crc kubenswrapper[4830]: I0131 09:03:36.942972 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3477b1ed-ccf0-4f60-9505-ff0e417750af-utilities\") pod \"redhat-marketplace-qgnfw\" (UID: \"3477b1ed-ccf0-4f60-9505-ff0e417750af\") " pod="openshift-marketplace/redhat-marketplace-qgnfw" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.044782 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3477b1ed-ccf0-4f60-9505-ff0e417750af-utilities\") pod \"redhat-marketplace-qgnfw\" (UID: \"3477b1ed-ccf0-4f60-9505-ff0e417750af\") " pod="openshift-marketplace/redhat-marketplace-qgnfw" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.044850 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3477b1ed-ccf0-4f60-9505-ff0e417750af-catalog-content\") pod \"redhat-marketplace-qgnfw\" (UID: \"3477b1ed-ccf0-4f60-9505-ff0e417750af\") " pod="openshift-marketplace/redhat-marketplace-qgnfw" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.044915 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lqw5\" (UniqueName: \"kubernetes.io/projected/3477b1ed-ccf0-4f60-9505-ff0e417750af-kube-api-access-8lqw5\") pod \"redhat-marketplace-qgnfw\" (UID: \"3477b1ed-ccf0-4f60-9505-ff0e417750af\") " pod="openshift-marketplace/redhat-marketplace-qgnfw" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.046182 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3477b1ed-ccf0-4f60-9505-ff0e417750af-catalog-content\") pod \"redhat-marketplace-qgnfw\" (UID: \"3477b1ed-ccf0-4f60-9505-ff0e417750af\") " pod="openshift-marketplace/redhat-marketplace-qgnfw" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.046210 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3477b1ed-ccf0-4f60-9505-ff0e417750af-utilities\") pod \"redhat-marketplace-qgnfw\" (UID: \"3477b1ed-ccf0-4f60-9505-ff0e417750af\") " pod="openshift-marketplace/redhat-marketplace-qgnfw" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.069704 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lqw5\" (UniqueName: \"kubernetes.io/projected/3477b1ed-ccf0-4f60-9505-ff0e417750af-kube-api-access-8lqw5\") pod 
\"redhat-marketplace-qgnfw\" (UID: \"3477b1ed-ccf0-4f60-9505-ff0e417750af\") " pod="openshift-marketplace/redhat-marketplace-qgnfw" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.108146 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxn8r"] Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.171546 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qgnfw" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.268352 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7m8b7"] Jan 31 09:03:37 crc kubenswrapper[4830]: E0131 09:03:37.297660 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3868f465_887b_4580_8c17_293665785251.slice/crio-conmon-ec29cbde38fa1dedfebe665bf9d3311a37fb95608bde46d7ef2f495d3c9c0134.scope\": RecentStats: unable to find data in memory cache]" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.335186 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.335783 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.339923 4830 patch_prober.go:28] interesting pod/console-f9d7485db-gp4nv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.339988 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-gp4nv" podUID="83cc5fe8-7965-46aa-b846-33d1b8d317f8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 31 09:03:37 crc kubenswrapper[4830]: W0131 09:03:37.359137 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacf2d685_5b8b_41ab_b91d_2e3b58b8b584.slice/crio-47e7e8e13e8a20edf26db4ddea741c5d988d3f9c28954abc786b95c66dda7131 WatchSource:0}: Error finding container 47e7e8e13e8a20edf26db4ddea741c5d988d3f9c28954abc786b95c66dda7131: Status 404 returned error can't find the container with id 47e7e8e13e8a20edf26db4ddea741c5d988d3f9c28954abc786b95c66dda7131 Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.371875 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.372129 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.371966 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server 
namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.372931 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.402782 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qgnfw"] Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.517534 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.524109 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.743159 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.743643 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.750421 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gs9bg"] Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.751762 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gs9bg" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.753551 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.771412 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gs9bg"] Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.790267 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-vbcgc" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.800757 4830 patch_prober.go:28] interesting pod/apiserver-76f77b778f-htl5l container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 31 09:03:37 crc kubenswrapper[4830]: [+]log ok Jan 31 09:03:37 crc kubenswrapper[4830]: [+]etcd ok Jan 31 09:03:37 crc kubenswrapper[4830]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 31 09:03:37 crc kubenswrapper[4830]: [+]poststarthook/generic-apiserver-start-informers ok Jan 31 09:03:37 crc kubenswrapper[4830]: [+]poststarthook/max-in-flight-filter ok Jan 31 09:03:37 crc kubenswrapper[4830]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 31 09:03:37 crc kubenswrapper[4830]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 31 09:03:37 crc kubenswrapper[4830]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 31 09:03:37 crc kubenswrapper[4830]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 31 09:03:37 crc kubenswrapper[4830]: 
[+]poststarthook/project.openshift.io-projectcache ok Jan 31 09:03:37 crc kubenswrapper[4830]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 31 09:03:37 crc kubenswrapper[4830]: [+]poststarthook/openshift.io-startinformers ok Jan 31 09:03:37 crc kubenswrapper[4830]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 31 09:03:37 crc kubenswrapper[4830]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 31 09:03:37 crc kubenswrapper[4830]: livez check failed Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.800877 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-htl5l" podUID="2a94efc3-19bc-47ce-b48a-4f4b3351d955" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.804468 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 09:03:37 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Jan 31 09:03:37 crc kubenswrapper[4830]: [+]process-running ok Jan 31 09:03:37 crc kubenswrapper[4830]: healthz check failed Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.804587 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.866372 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz2g4\" (UniqueName: \"kubernetes.io/projected/ca8a4bb5-67d6-4e50-905f-95e0a15e376a-kube-api-access-lz2g4\") pod \"redhat-operators-gs9bg\" (UID: \"ca8a4bb5-67d6-4e50-905f-95e0a15e376a\") " pod="openshift-marketplace/redhat-operators-gs9bg" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.866502 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca8a4bb5-67d6-4e50-905f-95e0a15e376a-utilities\") pod \"redhat-operators-gs9bg\" (UID: \"ca8a4bb5-67d6-4e50-905f-95e0a15e376a\") " pod="openshift-marketplace/redhat-operators-gs9bg" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.866552 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca8a4bb5-67d6-4e50-905f-95e0a15e376a-catalog-content\") pod \"redhat-operators-gs9bg\" (UID: \"ca8a4bb5-67d6-4e50-905f-95e0a15e376a\") " pod="openshift-marketplace/redhat-operators-gs9bg" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.872662 4830 generic.go:334] "Generic (PLEG): container finished" podID="3868f465-887b-4580-8c17-293665785251" containerID="ec29cbde38fa1dedfebe665bf9d3311a37fb95608bde46d7ef2f495d3c9c0134" exitCode=0 Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.872983 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxn8r" event={"ID":"3868f465-887b-4580-8c17-293665785251","Type":"ContainerDied","Data":"ec29cbde38fa1dedfebe665bf9d3311a37fb95608bde46d7ef2f495d3c9c0134"} Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.873039 4830 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxn8r" event={"ID":"3868f465-887b-4580-8c17-293665785251","Type":"ContainerStarted","Data":"b683a829a6a3da3790fa672b91fd2612d9b3d5a07c3c91411c1193079494cd22"} Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.876379 4830 generic.go:334] "Generic (PLEG): container finished" podID="3477b1ed-ccf0-4f60-9505-ff0e417750af" containerID="87b619b5a59514b1c47359f60c98d55167cbb85978c7e83f62465fd000b9daed" exitCode=0 Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.876491 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgnfw" event={"ID":"3477b1ed-ccf0-4f60-9505-ff0e417750af","Type":"ContainerDied","Data":"87b619b5a59514b1c47359f60c98d55167cbb85978c7e83f62465fd000b9daed"} Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.876525 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgnfw" event={"ID":"3477b1ed-ccf0-4f60-9505-ff0e417750af","Type":"ContainerStarted","Data":"ec0891a1c75b8364d05386edd7e0cfd7348e9f5e32abd9db811037f9fc973bf0"} Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.882925 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" event={"ID":"acf2d685-5b8b-41ab-b91d-2e3b58b8b584","Type":"ContainerStarted","Data":"f0b1a633cecf0b8973545b62836919c01720cefd12d03a417cdf2625965668c4"} Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.883009 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" event={"ID":"acf2d685-5b8b-41ab-b91d-2e3b58b8b584","Type":"ContainerStarted","Data":"47e7e8e13e8a20edf26db4ddea741c5d988d3f9c28954abc786b95c66dda7131"} Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.883251 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.888592 4830 generic.go:334] "Generic (PLEG): container finished" podID="8b19967e-b79f-42d5-b37b-2711aa675ac2" containerID="10dece3fcee834b421276de4475f8e891a70a452ddec3487f9ac4e34e09a7666" exitCode=0 Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.888824 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8b19967e-b79f-42d5-b37b-2711aa675ac2","Type":"ContainerDied","Data":"10dece3fcee834b421276de4475f8e891a70a452ddec3487f9ac4e34e09a7666"} Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.922849 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" podStartSLOduration=135.922790472 podStartE2EDuration="2m15.922790472s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:37.918923379 +0000 UTC m=+162.412285821" watchObservedRunningTime="2026-01-31 09:03:37.922790472 +0000 UTC m=+162.416152934" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.968440 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca8a4bb5-67d6-4e50-905f-95e0a15e376a-utilities\") pod \"redhat-operators-gs9bg\" (UID: \"ca8a4bb5-67d6-4e50-905f-95e0a15e376a\") " pod="openshift-marketplace/redhat-operators-gs9bg" 
Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.968535 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca8a4bb5-67d6-4e50-905f-95e0a15e376a-catalog-content\") pod \"redhat-operators-gs9bg\" (UID: \"ca8a4bb5-67d6-4e50-905f-95e0a15e376a\") " pod="openshift-marketplace/redhat-operators-gs9bg" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.968691 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz2g4\" (UniqueName: \"kubernetes.io/projected/ca8a4bb5-67d6-4e50-905f-95e0a15e376a-kube-api-access-lz2g4\") pod \"redhat-operators-gs9bg\" (UID: \"ca8a4bb5-67d6-4e50-905f-95e0a15e376a\") " pod="openshift-marketplace/redhat-operators-gs9bg" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.969862 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca8a4bb5-67d6-4e50-905f-95e0a15e376a-utilities\") pod \"redhat-operators-gs9bg\" (UID: \"ca8a4bb5-67d6-4e50-905f-95e0a15e376a\") " pod="openshift-marketplace/redhat-operators-gs9bg" Jan 31 09:03:37 crc kubenswrapper[4830]: I0131 09:03:37.969988 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca8a4bb5-67d6-4e50-905f-95e0a15e376a-catalog-content\") pod \"redhat-operators-gs9bg\" (UID: \"ca8a4bb5-67d6-4e50-905f-95e0a15e376a\") " pod="openshift-marketplace/redhat-operators-gs9bg" Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.031078 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz2g4\" (UniqueName: \"kubernetes.io/projected/ca8a4bb5-67d6-4e50-905f-95e0a15e376a-kube-api-access-lz2g4\") pod \"redhat-operators-gs9bg\" (UID: \"ca8a4bb5-67d6-4e50-905f-95e0a15e376a\") " pod="openshift-marketplace/redhat-operators-gs9bg" Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.097709 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gs9bg" Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.153160 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h79tz"] Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.155933 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h79tz" Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.163572 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h79tz"] Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.198863 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f" Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.244225 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-66dl8" Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.277550 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn6ll\" (UniqueName: \"kubernetes.io/projected/716e78b2-1856-45e6-a3fa-73538be51a97-kube-api-access-xn6ll\") pod \"redhat-operators-h79tz\" (UID: \"716e78b2-1856-45e6-a3fa-73538be51a97\") " pod="openshift-marketplace/redhat-operators-h79tz" Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.277650 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/716e78b2-1856-45e6-a3fa-73538be51a97-catalog-content\") pod \"redhat-operators-h79tz\" (UID: \"716e78b2-1856-45e6-a3fa-73538be51a97\") " pod="openshift-marketplace/redhat-operators-h79tz" Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.277705 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/716e78b2-1856-45e6-a3fa-73538be51a97-utilities\") pod \"redhat-operators-h79tz\" (UID: \"716e78b2-1856-45e6-a3fa-73538be51a97\") " pod="openshift-marketplace/redhat-operators-h79tz" Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.293030 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.380897 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc74377f-6986-4156-9c2b-7a003f07d6ff-config-volume\") pod \"dc74377f-6986-4156-9c2b-7a003f07d6ff\" (UID: \"dc74377f-6986-4156-9c2b-7a003f07d6ff\") " Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.386952 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnqcc\" (UniqueName: \"kubernetes.io/projected/dc74377f-6986-4156-9c2b-7a003f07d6ff-kube-api-access-vnqcc\") pod \"dc74377f-6986-4156-9c2b-7a003f07d6ff\" (UID: \"dc74377f-6986-4156-9c2b-7a003f07d6ff\") " Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.387128 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc74377f-6986-4156-9c2b-7a003f07d6ff-secret-volume\") pod \"dc74377f-6986-4156-9c2b-7a003f07d6ff\" (UID: \"dc74377f-6986-4156-9c2b-7a003f07d6ff\") " Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.391256 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/716e78b2-1856-45e6-a3fa-73538be51a97-utilities\") pod \"redhat-operators-h79tz\" (UID: \"716e78b2-1856-45e6-a3fa-73538be51a97\") " pod="openshift-marketplace/redhat-operators-h79tz" Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.391499 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn6ll\" (UniqueName: \"kubernetes.io/projected/716e78b2-1856-45e6-a3fa-73538be51a97-kube-api-access-xn6ll\") pod \"redhat-operators-h79tz\" (UID: \"716e78b2-1856-45e6-a3fa-73538be51a97\") " pod="openshift-marketplace/redhat-operators-h79tz" Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.391627 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/716e78b2-1856-45e6-a3fa-73538be51a97-catalog-content\") pod \"redhat-operators-h79tz\" (UID: \"716e78b2-1856-45e6-a3fa-73538be51a97\") " pod="openshift-marketplace/redhat-operators-h79tz" Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.393113 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/716e78b2-1856-45e6-a3fa-73538be51a97-catalog-content\") pod \"redhat-operators-h79tz\" (UID: \"716e78b2-1856-45e6-a3fa-73538be51a97\") " pod="openshift-marketplace/redhat-operators-h79tz" Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.394130 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc74377f-6986-4156-9c2b-7a003f07d6ff-config-volume" (OuterVolumeSpecName: "config-volume") pod "dc74377f-6986-4156-9c2b-7a003f07d6ff" (UID: "dc74377f-6986-4156-9c2b-7a003f07d6ff"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.396390 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/716e78b2-1856-45e6-a3fa-73538be51a97-utilities\") pod \"redhat-operators-h79tz\" (UID: \"716e78b2-1856-45e6-a3fa-73538be51a97\") " pod="openshift-marketplace/redhat-operators-h79tz" Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.408287 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc74377f-6986-4156-9c2b-7a003f07d6ff-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dc74377f-6986-4156-9c2b-7a003f07d6ff" (UID: "dc74377f-6986-4156-9c2b-7a003f07d6ff"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.408699 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc74377f-6986-4156-9c2b-7a003f07d6ff-kube-api-access-vnqcc" (OuterVolumeSpecName: "kube-api-access-vnqcc") pod "dc74377f-6986-4156-9c2b-7a003f07d6ff" (UID: "dc74377f-6986-4156-9c2b-7a003f07d6ff"). InnerVolumeSpecName "kube-api-access-vnqcc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.420744 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn6ll\" (UniqueName: \"kubernetes.io/projected/716e78b2-1856-45e6-a3fa-73538be51a97-kube-api-access-xn6ll\") pod \"redhat-operators-h79tz\" (UID: \"716e78b2-1856-45e6-a3fa-73538be51a97\") " pod="openshift-marketplace/redhat-operators-h79tz" Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.489552 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gs9bg"] Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.492796 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnqcc\" (UniqueName: \"kubernetes.io/projected/dc74377f-6986-4156-9c2b-7a003f07d6ff-kube-api-access-vnqcc\") on node \"crc\" DevicePath \"\"" Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.492847 4830 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc74377f-6986-4156-9c2b-7a003f07d6ff-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.492859 4830 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc74377f-6986-4156-9c2b-7a003f07d6ff-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 09:03:38 crc kubenswrapper[4830]: W0131 09:03:38.505629 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca8a4bb5_67d6_4e50_905f_95e0a15e376a.slice/crio-5b72ea10c07f3eeb0d0496ad7fa8736e50cce6e4d3b3d9cf616facb5858ec6f6 WatchSource:0}: Error finding container 5b72ea10c07f3eeb0d0496ad7fa8736e50cce6e4d3b3d9cf616facb5858ec6f6: Status 404 returned error can't find the container with id 5b72ea10c07f3eeb0d0496ad7fa8736e50cce6e4d3b3d9cf616facb5858ec6f6 Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.537156 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h79tz" Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.796277 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 09:03:38 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Jan 31 09:03:38 crc kubenswrapper[4830]: [+]process-running ok Jan 31 09:03:38 crc kubenswrapper[4830]: healthz check failed Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.797024 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.944569 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gs9bg" event={"ID":"ca8a4bb5-67d6-4e50-905f-95e0a15e376a","Type":"ContainerStarted","Data":"fbd87bac36d49f4b8412548086f5c4c860691e3ab80d714c1a6347d9329db56b"} Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.944695 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gs9bg" event={"ID":"ca8a4bb5-67d6-4e50-905f-95e0a15e376a","Type":"ContainerStarted","Data":"5b72ea10c07f3eeb0d0496ad7fa8736e50cce6e4d3b3d9cf616facb5858ec6f6"} Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.975870 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h79tz"] Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.985350 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-66dl8" event={"ID":"dc74377f-6986-4156-9c2b-7a003f07d6ff","Type":"ContainerDied","Data":"fb9fc9e13aa13e5035e4d31402f5be1f1c8c1ae07e5be38ea76a827d38e986f8"} Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.985410 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb9fc9e13aa13e5035e4d31402f5be1f1c8c1ae07e5be38ea76a827d38e986f8" Jan 31 09:03:38 crc kubenswrapper[4830]: I0131 09:03:38.985553 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-66dl8" Jan 31 09:03:39 crc kubenswrapper[4830]: W0131 09:03:39.019014 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod716e78b2_1856_45e6_a3fa_73538be51a97.slice/crio-fecb01cc01339eb2dd352a556c8d4f7aa672fd20154a957eb74a6077a3ec4bdd WatchSource:0}: Error finding container fecb01cc01339eb2dd352a556c8d4f7aa672fd20154a957eb74a6077a3ec4bdd: Status 404 returned error can't find the container with id fecb01cc01339eb2dd352a556c8d4f7aa672fd20154a957eb74a6077a3ec4bdd Jan 31 09:03:39 crc kubenswrapper[4830]: I0131 09:03:39.409676 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 09:03:39 crc kubenswrapper[4830]: I0131 09:03:39.516376 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b19967e-b79f-42d5-b37b-2711aa675ac2-kubelet-dir\") pod \"8b19967e-b79f-42d5-b37b-2711aa675ac2\" (UID: \"8b19967e-b79f-42d5-b37b-2711aa675ac2\") " Jan 31 09:03:39 crc kubenswrapper[4830]: I0131 09:03:39.516580 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b19967e-b79f-42d5-b37b-2711aa675ac2-kube-api-access\") pod \"8b19967e-b79f-42d5-b37b-2711aa675ac2\" (UID: \"8b19967e-b79f-42d5-b37b-2711aa675ac2\") " Jan 31 09:03:39 crc kubenswrapper[4830]: I0131 09:03:39.518278 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b19967e-b79f-42d5-b37b-2711aa675ac2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8b19967e-b79f-42d5-b37b-2711aa675ac2" (UID: "8b19967e-b79f-42d5-b37b-2711aa675ac2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:03:39 crc kubenswrapper[4830]: I0131 09:03:39.527595 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b19967e-b79f-42d5-b37b-2711aa675ac2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8b19967e-b79f-42d5-b37b-2711aa675ac2" (UID: "8b19967e-b79f-42d5-b37b-2711aa675ac2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:03:39 crc kubenswrapper[4830]: I0131 09:03:39.619444 4830 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b19967e-b79f-42d5-b37b-2711aa675ac2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 09:03:39 crc kubenswrapper[4830]: I0131 09:03:39.619492 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b19967e-b79f-42d5-b37b-2711aa675ac2-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 09:03:39 crc kubenswrapper[4830]: I0131 09:03:39.804290 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 09:03:39 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Jan 31 09:03:39 crc kubenswrapper[4830]: [+]process-running ok Jan 31 09:03:39 crc kubenswrapper[4830]: healthz check failed Jan 31 09:03:39 crc kubenswrapper[4830]: I0131 09:03:39.805914 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 09:03:39 crc kubenswrapper[4830]: I0131 09:03:39.998635 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h79tz" event={"ID":"716e78b2-1856-45e6-a3fa-73538be51a97","Type":"ContainerStarted","Data":"fecb01cc01339eb2dd352a556c8d4f7aa672fd20154a957eb74a6077a3ec4bdd"} Jan 31 09:03:40 crc kubenswrapper[4830]: I0131 09:03:40.006263 4830 generic.go:334] "Generic (PLEG): container finished" podID="ca8a4bb5-67d6-4e50-905f-95e0a15e376a" 
containerID="fbd87bac36d49f4b8412548086f5c4c860691e3ab80d714c1a6347d9329db56b" exitCode=0 Jan 31 09:03:40 crc kubenswrapper[4830]: I0131 09:03:40.006349 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gs9bg" event={"ID":"ca8a4bb5-67d6-4e50-905f-95e0a15e376a","Type":"ContainerDied","Data":"fbd87bac36d49f4b8412548086f5c4c860691e3ab80d714c1a6347d9329db56b"} Jan 31 09:03:40 crc kubenswrapper[4830]: I0131 09:03:40.013606 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8b19967e-b79f-42d5-b37b-2711aa675ac2","Type":"ContainerDied","Data":"beca7de0fdbc170c31ba5a2976b46fc31c78077e99de730ce2500d5a147d8b1b"} Jan 31 09:03:40 crc kubenswrapper[4830]: I0131 09:03:40.013689 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="beca7de0fdbc170c31ba5a2976b46fc31c78077e99de730ce2500d5a147d8b1b" Jan 31 09:03:40 crc kubenswrapper[4830]: I0131 09:03:40.013706 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 09:03:40 crc kubenswrapper[4830]: I0131 09:03:40.791544 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 09:03:40 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Jan 31 09:03:40 crc kubenswrapper[4830]: [+]process-running ok Jan 31 09:03:40 crc kubenswrapper[4830]: healthz check failed Jan 31 09:03:40 crc kubenswrapper[4830]: I0131 09:03:40.792033 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 09:03:41 crc kubenswrapper[4830]: I0131 09:03:41.076050 4830 generic.go:334] "Generic (PLEG): container finished" podID="716e78b2-1856-45e6-a3fa-73538be51a97" containerID="90dad789e9ad829c314d34749f00d201a341bf01fe0232fbfa2e2ec5b40f6917" exitCode=0 Jan 31 09:03:41 crc kubenswrapper[4830]: I0131 09:03:41.076119 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h79tz" event={"ID":"716e78b2-1856-45e6-a3fa-73538be51a97","Type":"ContainerDied","Data":"90dad789e9ad829c314d34749f00d201a341bf01fe0232fbfa2e2ec5b40f6917"} Jan 31 09:03:41 crc kubenswrapper[4830]: I0131 09:03:41.216443 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 09:03:41 crc kubenswrapper[4830]: E0131 09:03:41.216862 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b19967e-b79f-42d5-b37b-2711aa675ac2" containerName="pruner" Jan 31 09:03:41 crc kubenswrapper[4830]: I0131 09:03:41.216881 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b19967e-b79f-42d5-b37b-2711aa675ac2" containerName="pruner" Jan 31 09:03:41 crc kubenswrapper[4830]: E0131 09:03:41.216894 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc74377f-6986-4156-9c2b-7a003f07d6ff" containerName="collect-profiles" Jan 31 09:03:41 crc kubenswrapper[4830]: I0131 09:03:41.216902 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc74377f-6986-4156-9c2b-7a003f07d6ff" containerName="collect-profiles" Jan 31 09:03:41 crc kubenswrapper[4830]: 
I0131 09:03:41.217042 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc74377f-6986-4156-9c2b-7a003f07d6ff" containerName="collect-profiles" Jan 31 09:03:41 crc kubenswrapper[4830]: I0131 09:03:41.217072 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b19967e-b79f-42d5-b37b-2711aa675ac2" containerName="pruner" Jan 31 09:03:41 crc kubenswrapper[4830]: I0131 09:03:41.217748 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 09:03:41 crc kubenswrapper[4830]: I0131 09:03:41.218429 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 09:03:41 crc kubenswrapper[4830]: I0131 09:03:41.225800 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 09:03:41 crc kubenswrapper[4830]: I0131 09:03:41.226032 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 09:03:41 crc kubenswrapper[4830]: I0131 09:03:41.360995 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e57f6250-bb98-487f-9d49-b9ed02c3db41-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e57f6250-bb98-487f-9d49-b9ed02c3db41\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 09:03:41 crc kubenswrapper[4830]: I0131 09:03:41.361064 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e57f6250-bb98-487f-9d49-b9ed02c3db41-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e57f6250-bb98-487f-9d49-b9ed02c3db41\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 09:03:41 crc kubenswrapper[4830]: I0131 09:03:41.462888 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e57f6250-bb98-487f-9d49-b9ed02c3db41-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e57f6250-bb98-487f-9d49-b9ed02c3db41\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 09:03:41 crc kubenswrapper[4830]: I0131 09:03:41.463098 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e57f6250-bb98-487f-9d49-b9ed02c3db41-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e57f6250-bb98-487f-9d49-b9ed02c3db41\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 09:03:41 crc kubenswrapper[4830]: I0131 09:03:41.463189 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e57f6250-bb98-487f-9d49-b9ed02c3db41-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e57f6250-bb98-487f-9d49-b9ed02c3db41\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 09:03:41 crc kubenswrapper[4830]: I0131 09:03:41.489751 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e57f6250-bb98-487f-9d49-b9ed02c3db41-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e57f6250-bb98-487f-9d49-b9ed02c3db41\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 09:03:41 crc kubenswrapper[4830]: I0131 09:03:41.575100 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 09:03:41 crc kubenswrapper[4830]: I0131 09:03:41.793148 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 09:03:41 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Jan 31 09:03:41 crc kubenswrapper[4830]: [+]process-running ok Jan 31 09:03:41 crc kubenswrapper[4830]: healthz check failed Jan 31 09:03:41 crc kubenswrapper[4830]: I0131 09:03:41.793251 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 09:03:42 crc kubenswrapper[4830]: I0131 09:03:42.747080 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:42 crc kubenswrapper[4830]: I0131 09:03:42.752489 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-htl5l" Jan 31 09:03:42 crc kubenswrapper[4830]: I0131 09:03:42.801425 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 09:03:42 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Jan 31 09:03:42 crc kubenswrapper[4830]: [+]process-running ok Jan 31 09:03:42 crc kubenswrapper[4830]: healthz check failed Jan 31 09:03:42 crc kubenswrapper[4830]: I0131 09:03:42.801502 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 09:03:43 crc kubenswrapper[4830]: I0131 09:03:43.005144 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:03:43 crc kubenswrapper[4830]: I0131 09:03:43.606000 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-whjm4" Jan 31 09:03:43 crc kubenswrapper[4830]: I0131 09:03:43.791816 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 09:03:43 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Jan 31 09:03:43 crc kubenswrapper[4830]: [+]process-running ok Jan 31 09:03:43 crc kubenswrapper[4830]: healthz check failed Jan 31 09:03:43 crc kubenswrapper[4830]: I0131 09:03:43.791885 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 09:03:44 crc kubenswrapper[4830]: I0131 09:03:44.352992 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:03:44 crc kubenswrapper[4830]: I0131 09:03:44.353092 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:03:44 crc kubenswrapper[4830]: I0131 09:03:44.791115 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 09:03:44 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Jan 31 09:03:44 crc kubenswrapper[4830]: [+]process-running ok Jan 31 09:03:44 crc kubenswrapper[4830]: healthz check failed Jan 31 09:03:44 crc kubenswrapper[4830]: I0131 09:03:44.791232 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 09:03:44 crc kubenswrapper[4830]: I0131 09:03:44.949173 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs\") pod \"network-metrics-daemon-5kl8z\" (UID: \"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\") " pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:03:44 crc kubenswrapper[4830]: I0131 09:03:44.956932 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1fa30e4-0c03-43ab-9c37-f7ec86153b27-metrics-certs\") pod \"network-metrics-daemon-5kl8z\" (UID: \"c1fa30e4-0c03-43ab-9c37-f7ec86153b27\") " pod="openshift-multus/network-metrics-daemon-5kl8z" Jan 31 09:03:45 crc kubenswrapper[4830]: I0131 09:03:45.168242 4830 util.go:30] "No sandbox for pod can be found. 
Jan 31 09:03:45 crc kubenswrapper[4830]: I0131 09:03:45.168242 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5kl8z"
Jan 31 09:03:45 crc kubenswrapper[4830]: I0131 09:03:45.791951 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 09:03:45 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld
Jan 31 09:03:45 crc kubenswrapper[4830]: [+]process-running ok
Jan 31 09:03:45 crc kubenswrapper[4830]: healthz check failed
Jan 31 09:03:45 crc kubenswrapper[4830]: I0131 09:03:45.792044 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 09:03:46 crc kubenswrapper[4830]: I0131 09:03:46.798375 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 09:03:46 crc kubenswrapper[4830]: [+]has-synced ok
Jan 31 09:03:46 crc kubenswrapper[4830]: [+]process-running ok
Jan 31 09:03:46 crc kubenswrapper[4830]: healthz check failed
Jan 31 09:03:46 crc kubenswrapper[4830]: I0131 09:03:46.798439 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 09:03:47 crc kubenswrapper[4830]: I0131 09:03:47.334746 4830 patch_prober.go:28] interesting pod/console-f9d7485db-gp4nv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Jan 31 09:03:47 crc kubenswrapper[4830]: I0131 09:03:47.334930 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-gp4nv" podUID="83cc5fe8-7965-46aa-b846-33d1b8d317f8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused"
Jan 31 09:03:47 crc kubenswrapper[4830]: I0131 09:03:47.411570 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-l8ckt"
Jan 31 09:03:47 crc kubenswrapper[4830]: I0131 09:03:47.791740 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-vbcgc"
Jan 31 09:03:47 crc kubenswrapper[4830]: I0131 09:03:47.795452 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-vbcgc"
Jan 31 09:03:49 crc kubenswrapper[4830]: I0131 09:03:49.162049 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5kl8z"]
Jan 31 09:03:49 crc kubenswrapper[4830]: W0131 09:03:49.170574 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1fa30e4_0c03_43ab_9c37_f7ec86153b27.slice/crio-0f9cda267c12948cd1ace1e42ad6959e5d3626fd943e6f573c60be2c97a768ae WatchSource:0}: Error finding container
0f9cda267c12948cd1ace1e42ad6959e5d3626fd943e6f573c60be2c97a768ae: Status 404 returned error can't find the container with id 0f9cda267c12948cd1ace1e42ad6959e5d3626fd943e6f573c60be2c97a768ae Jan 31 09:03:49 crc kubenswrapper[4830]: I0131 09:03:49.191598 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 09:03:49 crc kubenswrapper[4830]: W0131 09:03:49.199916 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode57f6250_bb98_487f_9d49_b9ed02c3db41.slice/crio-6ca85121d47a1de200075173baf591cb389e4484d8209f5a8c1306738a400b58 WatchSource:0}: Error finding container 6ca85121d47a1de200075173baf591cb389e4484d8209f5a8c1306738a400b58: Status 404 returned error can't find the container with id 6ca85121d47a1de200075173baf591cb389e4484d8209f5a8c1306738a400b58 Jan 31 09:03:50 crc kubenswrapper[4830]: I0131 09:03:50.167367 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e57f6250-bb98-487f-9d49-b9ed02c3db41","Type":"ContainerStarted","Data":"4cca2c260e130123414ab85a55683d93d25996629e80aac6ab3c331b5210cefc"} Jan 31 09:03:50 crc kubenswrapper[4830]: I0131 09:03:50.168139 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e57f6250-bb98-487f-9d49-b9ed02c3db41","Type":"ContainerStarted","Data":"6ca85121d47a1de200075173baf591cb389e4484d8209f5a8c1306738a400b58"} Jan 31 09:03:50 crc kubenswrapper[4830]: I0131 09:03:50.169231 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5kl8z" event={"ID":"c1fa30e4-0c03-43ab-9c37-f7ec86153b27","Type":"ContainerStarted","Data":"643981b9d2920c6fec00e1c1c2106b9fd861957f7a863b7e67bc858f044ba287"} Jan 31 09:03:50 crc kubenswrapper[4830]: I0131 09:03:50.169255 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5kl8z" event={"ID":"c1fa30e4-0c03-43ab-9c37-f7ec86153b27","Type":"ContainerStarted","Data":"0f9cda267c12948cd1ace1e42ad6959e5d3626fd943e6f573c60be2c97a768ae"} Jan 31 09:03:50 crc kubenswrapper[4830]: I0131 09:03:50.222716 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=9.222695861 podStartE2EDuration="9.222695861s" podCreationTimestamp="2026-01-31 09:03:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:50.214826502 +0000 UTC m=+174.708188944" watchObservedRunningTime="2026-01-31 09:03:50.222695861 +0000 UTC m=+174.716058303" Jan 31 09:03:50 crc kubenswrapper[4830]: I0131 09:03:50.947777 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rdzrw"] Jan 31 09:03:50 crc kubenswrapper[4830]: I0131 09:03:50.948078 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw" podUID="33210b82-c473-4bf8-b40d-a29b00833ea0" containerName="controller-manager" containerID="cri-o://022ea8a18a302916854f6b760b83a358dccdbbcd5c291d9804b6a782c98e9a71" gracePeriod=30 Jan 31 09:03:50 crc kubenswrapper[4830]: I0131 09:03:50.975780 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww"] Jan 31 09:03:50 crc kubenswrapper[4830]: 
I0131 09:03:50.976109 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww" podUID="0f4287bc-c7a7-4ee2-8212-3611b978e2e8" containerName="route-controller-manager" containerID="cri-o://c93ef06b4cc611d689048f6986abcd84dc1de88007a083281962fb48d9fe17b4" gracePeriod=30 Jan 31 09:03:51 crc kubenswrapper[4830]: I0131 09:03:51.178172 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5kl8z" event={"ID":"c1fa30e4-0c03-43ab-9c37-f7ec86153b27","Type":"ContainerStarted","Data":"b9bee477979ce98548554a505c002398a6b1423d94776f77913b33256ca8e87a"} Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.185322 4830 generic.go:334] "Generic (PLEG): container finished" podID="e57f6250-bb98-487f-9d49-b9ed02c3db41" containerID="4cca2c260e130123414ab85a55683d93d25996629e80aac6ab3c331b5210cefc" exitCode=0 Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.185728 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e57f6250-bb98-487f-9d49-b9ed02c3db41","Type":"ContainerDied","Data":"4cca2c260e130123414ab85a55683d93d25996629e80aac6ab3c331b5210cefc"} Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.187486 4830 generic.go:334] "Generic (PLEG): container finished" podID="0f4287bc-c7a7-4ee2-8212-3611b978e2e8" containerID="c93ef06b4cc611d689048f6986abcd84dc1de88007a083281962fb48d9fe17b4" exitCode=0 Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.187527 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww" event={"ID":"0f4287bc-c7a7-4ee2-8212-3611b978e2e8","Type":"ContainerDied","Data":"c93ef06b4cc611d689048f6986abcd84dc1de88007a083281962fb48d9fe17b4"} Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.188472 4830 generic.go:334] "Generic (PLEG): container finished" podID="33210b82-c473-4bf8-b40d-a29b00833ea0" containerID="022ea8a18a302916854f6b760b83a358dccdbbcd5c291d9804b6a782c98e9a71" exitCode=0 Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.189181 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw" event={"ID":"33210b82-c473-4bf8-b40d-a29b00833ea0","Type":"ContainerDied","Data":"022ea8a18a302916854f6b760b83a358dccdbbcd5c291d9804b6a782c98e9a71"} Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.253545 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-5kl8z" podStartSLOduration=150.253509735 podStartE2EDuration="2m30.253509735s" podCreationTimestamp="2026-01-31 09:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:52.245355937 +0000 UTC m=+176.738718389" watchObservedRunningTime="2026-01-31 09:03:52.253509735 +0000 UTC m=+176.746872177" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.264249 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.302265 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d"] Jan 31 09:03:52 crc kubenswrapper[4830]: E0131 09:03:52.302585 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f4287bc-c7a7-4ee2-8212-3611b978e2e8" containerName="route-controller-manager" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.302601 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f4287bc-c7a7-4ee2-8212-3611b978e2e8" containerName="route-controller-manager" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.303176 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f4287bc-c7a7-4ee2-8212-3611b978e2e8" containerName="route-controller-manager" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.303897 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.321154 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d"] Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.377811 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.386218 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f4287bc-c7a7-4ee2-8212-3611b978e2e8-client-ca\") pod \"0f4287bc-c7a7-4ee2-8212-3611b978e2e8\" (UID: \"0f4287bc-c7a7-4ee2-8212-3611b978e2e8\") " Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.386331 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f4287bc-c7a7-4ee2-8212-3611b978e2e8-serving-cert\") pod \"0f4287bc-c7a7-4ee2-8212-3611b978e2e8\" (UID: \"0f4287bc-c7a7-4ee2-8212-3611b978e2e8\") " Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.386411 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f4287bc-c7a7-4ee2-8212-3611b978e2e8-config\") pod \"0f4287bc-c7a7-4ee2-8212-3611b978e2e8\" (UID: \"0f4287bc-c7a7-4ee2-8212-3611b978e2e8\") " Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.386441 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sktbs\" (UniqueName: \"kubernetes.io/projected/0f4287bc-c7a7-4ee2-8212-3611b978e2e8-kube-api-access-sktbs\") pod \"0f4287bc-c7a7-4ee2-8212-3611b978e2e8\" (UID: \"0f4287bc-c7a7-4ee2-8212-3611b978e2e8\") " Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.386694 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qznw8\" (UniqueName: \"kubernetes.io/projected/9b68ddd6-bb0f-45ad-86e5-0c30bd513905-kube-api-access-qznw8\") pod \"route-controller-manager-7bd8785496-4t72d\" (UID: \"9b68ddd6-bb0f-45ad-86e5-0c30bd513905\") " pod="openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.386794 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b68ddd6-bb0f-45ad-86e5-0c30bd513905-serving-cert\") pod \"route-controller-manager-7bd8785496-4t72d\" (UID: \"9b68ddd6-bb0f-45ad-86e5-0c30bd513905\") " pod="openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.386822 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b68ddd6-bb0f-45ad-86e5-0c30bd513905-config\") pod \"route-controller-manager-7bd8785496-4t72d\" (UID: \"9b68ddd6-bb0f-45ad-86e5-0c30bd513905\") " pod="openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.386846 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b68ddd6-bb0f-45ad-86e5-0c30bd513905-client-ca\") pod \"route-controller-manager-7bd8785496-4t72d\" (UID: \"9b68ddd6-bb0f-45ad-86e5-0c30bd513905\") " pod="openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.387656 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f4287bc-c7a7-4ee2-8212-3611b978e2e8-config" (OuterVolumeSpecName: "config") pod "0f4287bc-c7a7-4ee2-8212-3611b978e2e8" (UID: "0f4287bc-c7a7-4ee2-8212-3611b978e2e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.387759 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f4287bc-c7a7-4ee2-8212-3611b978e2e8-client-ca" (OuterVolumeSpecName: "client-ca") pod "0f4287bc-c7a7-4ee2-8212-3611b978e2e8" (UID: "0f4287bc-c7a7-4ee2-8212-3611b978e2e8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.394234 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f4287bc-c7a7-4ee2-8212-3611b978e2e8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0f4287bc-c7a7-4ee2-8212-3611b978e2e8" (UID: "0f4287bc-c7a7-4ee2-8212-3611b978e2e8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.395200 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f4287bc-c7a7-4ee2-8212-3611b978e2e8-kube-api-access-sktbs" (OuterVolumeSpecName: "kube-api-access-sktbs") pod "0f4287bc-c7a7-4ee2-8212-3611b978e2e8" (UID: "0f4287bc-c7a7-4ee2-8212-3611b978e2e8"). InnerVolumeSpecName "kube-api-access-sktbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.488048 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33210b82-c473-4bf8-b40d-a29b00833ea0-config\") pod \"33210b82-c473-4bf8-b40d-a29b00833ea0\" (UID: \"33210b82-c473-4bf8-b40d-a29b00833ea0\") " Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.488187 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33210b82-c473-4bf8-b40d-a29b00833ea0-serving-cert\") pod \"33210b82-c473-4bf8-b40d-a29b00833ea0\" (UID: \"33210b82-c473-4bf8-b40d-a29b00833ea0\") " Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.488236 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzsbm\" (UniqueName: \"kubernetes.io/projected/33210b82-c473-4bf8-b40d-a29b00833ea0-kube-api-access-rzsbm\") pod \"33210b82-c473-4bf8-b40d-a29b00833ea0\" (UID: \"33210b82-c473-4bf8-b40d-a29b00833ea0\") " Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.488378 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/33210b82-c473-4bf8-b40d-a29b00833ea0-proxy-ca-bundles\") pod \"33210b82-c473-4bf8-b40d-a29b00833ea0\" (UID: \"33210b82-c473-4bf8-b40d-a29b00833ea0\") " Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.488429 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33210b82-c473-4bf8-b40d-a29b00833ea0-client-ca\") pod \"33210b82-c473-4bf8-b40d-a29b00833ea0\" (UID: \"33210b82-c473-4bf8-b40d-a29b00833ea0\") " Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.488690 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qznw8\" (UniqueName: \"kubernetes.io/projected/9b68ddd6-bb0f-45ad-86e5-0c30bd513905-kube-api-access-qznw8\") pod \"route-controller-manager-7bd8785496-4t72d\" (UID: \"9b68ddd6-bb0f-45ad-86e5-0c30bd513905\") " pod="openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.488766 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b68ddd6-bb0f-45ad-86e5-0c30bd513905-serving-cert\") pod \"route-controller-manager-7bd8785496-4t72d\" (UID: \"9b68ddd6-bb0f-45ad-86e5-0c30bd513905\") " pod="openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.488797 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b68ddd6-bb0f-45ad-86e5-0c30bd513905-config\") pod \"route-controller-manager-7bd8785496-4t72d\" (UID: \"9b68ddd6-bb0f-45ad-86e5-0c30bd513905\") " pod="openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.488827 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b68ddd6-bb0f-45ad-86e5-0c30bd513905-client-ca\") pod \"route-controller-manager-7bd8785496-4t72d\" (UID: \"9b68ddd6-bb0f-45ad-86e5-0c30bd513905\") " pod="openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d" 
Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.488886 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f4287bc-c7a7-4ee2-8212-3611b978e2e8-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.488902 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f4287bc-c7a7-4ee2-8212-3611b978e2e8-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.488916 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f4287bc-c7a7-4ee2-8212-3611b978e2e8-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.488933 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sktbs\" (UniqueName: \"kubernetes.io/projected/0f4287bc-c7a7-4ee2-8212-3611b978e2e8-kube-api-access-sktbs\") on node \"crc\" DevicePath \"\"" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.489542 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33210b82-c473-4bf8-b40d-a29b00833ea0-client-ca" (OuterVolumeSpecName: "client-ca") pod "33210b82-c473-4bf8-b40d-a29b00833ea0" (UID: "33210b82-c473-4bf8-b40d-a29b00833ea0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.489919 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33210b82-c473-4bf8-b40d-a29b00833ea0-config" (OuterVolumeSpecName: "config") pod "33210b82-c473-4bf8-b40d-a29b00833ea0" (UID: "33210b82-c473-4bf8-b40d-a29b00833ea0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.489628 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33210b82-c473-4bf8-b40d-a29b00833ea0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "33210b82-c473-4bf8-b40d-a29b00833ea0" (UID: "33210b82-c473-4bf8-b40d-a29b00833ea0"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.490642 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b68ddd6-bb0f-45ad-86e5-0c30bd513905-client-ca\") pod \"route-controller-manager-7bd8785496-4t72d\" (UID: \"9b68ddd6-bb0f-45ad-86e5-0c30bd513905\") " pod="openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.490961 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b68ddd6-bb0f-45ad-86e5-0c30bd513905-config\") pod \"route-controller-manager-7bd8785496-4t72d\" (UID: \"9b68ddd6-bb0f-45ad-86e5-0c30bd513905\") " pod="openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.493285 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b68ddd6-bb0f-45ad-86e5-0c30bd513905-serving-cert\") pod \"route-controller-manager-7bd8785496-4t72d\" (UID: \"9b68ddd6-bb0f-45ad-86e5-0c30bd513905\") " pod="openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.493948 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33210b82-c473-4bf8-b40d-a29b00833ea0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "33210b82-c473-4bf8-b40d-a29b00833ea0" (UID: "33210b82-c473-4bf8-b40d-a29b00833ea0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.494670 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33210b82-c473-4bf8-b40d-a29b00833ea0-kube-api-access-rzsbm" (OuterVolumeSpecName: "kube-api-access-rzsbm") pod "33210b82-c473-4bf8-b40d-a29b00833ea0" (UID: "33210b82-c473-4bf8-b40d-a29b00833ea0"). InnerVolumeSpecName "kube-api-access-rzsbm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.505798 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qznw8\" (UniqueName: \"kubernetes.io/projected/9b68ddd6-bb0f-45ad-86e5-0c30bd513905-kube-api-access-qznw8\") pod \"route-controller-manager-7bd8785496-4t72d\" (UID: \"9b68ddd6-bb0f-45ad-86e5-0c30bd513905\") " pod="openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.591078 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33210b82-c473-4bf8-b40d-a29b00833ea0-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.591139 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33210b82-c473-4bf8-b40d-a29b00833ea0-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.591155 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33210b82-c473-4bf8-b40d-a29b00833ea0-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.591167 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzsbm\" (UniqueName: \"kubernetes.io/projected/33210b82-c473-4bf8-b40d-a29b00833ea0-kube-api-access-rzsbm\") on node \"crc\" DevicePath \"\"" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.591178 4830 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/33210b82-c473-4bf8-b40d-a29b00833ea0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 09:03:52 crc kubenswrapper[4830]: I0131 09:03:52.674185 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d" Jan 31 09:03:53 crc kubenswrapper[4830]: I0131 09:03:53.201044 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww" event={"ID":"0f4287bc-c7a7-4ee2-8212-3611b978e2e8","Type":"ContainerDied","Data":"0a78637f8b7343d66b6b94425540ddbe97c86fe7a3066716b0fa8f790875c0a2"} Jan 31 09:03:53 crc kubenswrapper[4830]: I0131 09:03:53.201118 4830 scope.go:117] "RemoveContainer" containerID="c93ef06b4cc611d689048f6986abcd84dc1de88007a083281962fb48d9fe17b4" Jan 31 09:03:53 crc kubenswrapper[4830]: I0131 09:03:53.201163 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww" Jan 31 09:03:53 crc kubenswrapper[4830]: I0131 09:03:53.209873 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw" event={"ID":"33210b82-c473-4bf8-b40d-a29b00833ea0","Type":"ContainerDied","Data":"73db3496020c41a6d9c43cdd7a272c7e7b198ce445c18806d1b24285a64d36df"} Jan 31 09:03:53 crc kubenswrapper[4830]: I0131 09:03:53.210075 4830 util.go:48] "No ready sandbox for pod can be found. 
Jan 31 09:03:53 crc kubenswrapper[4830]: I0131 09:03:53.210075 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rdzrw"
Jan 31 09:03:53 crc kubenswrapper[4830]: I0131 09:03:53.266238 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww"]
Jan 31 09:03:53 crc kubenswrapper[4830]: I0131 09:03:53.275385 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-knkww"]
Jan 31 09:03:53 crc kubenswrapper[4830]: I0131 09:03:53.279429 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rdzrw"]
Jan 31 09:03:53 crc kubenswrapper[4830]: I0131 09:03:53.282543 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rdzrw"]
Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.260772 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f4287bc-c7a7-4ee2-8212-3611b978e2e8" path="/var/lib/kubelet/pods/0f4287bc-c7a7-4ee2-8212-3611b978e2e8/volumes"
Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.261708 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33210b82-c473-4bf8-b40d-a29b00833ea0" path="/var/lib/kubelet/pods/33210b82-c473-4bf8-b40d-a29b00833ea0/volumes"
Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.307513 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.542480 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-78c8c6b64-d9ddw"]
Jan 31 09:03:54 crc kubenswrapper[4830]: E0131 09:03:54.542796 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33210b82-c473-4bf8-b40d-a29b00833ea0" containerName="controller-manager"
Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.542810 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="33210b82-c473-4bf8-b40d-a29b00833ea0" containerName="controller-manager"
Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.542944 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="33210b82-c473-4bf8-b40d-a29b00833ea0" containerName="controller-manager"
Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.543420 4830 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.545304 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.546026 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.546281 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.546714 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.547030 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.547514 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.559601 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78c8c6b64-d9ddw"] Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.561760 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.625107 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-proxy-ca-bundles\") pod \"controller-manager-78c8c6b64-d9ddw\" (UID: \"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec\") " pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.625187 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-client-ca\") pod \"controller-manager-78c8c6b64-d9ddw\" (UID: \"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec\") " pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.625231 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-config\") pod \"controller-manager-78c8c6b64-d9ddw\" (UID: \"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec\") " pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.625252 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-serving-cert\") pod \"controller-manager-78c8c6b64-d9ddw\" (UID: \"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec\") " pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.625395 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57882\" (UniqueName: 
\"kubernetes.io/projected/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-kube-api-access-57882\") pod \"controller-manager-78c8c6b64-d9ddw\" (UID: \"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec\") " pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.727046 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-config\") pod \"controller-manager-78c8c6b64-d9ddw\" (UID: \"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec\") " pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.727123 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-serving-cert\") pod \"controller-manager-78c8c6b64-d9ddw\" (UID: \"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec\") " pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.727152 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57882\" (UniqueName: \"kubernetes.io/projected/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-kube-api-access-57882\") pod \"controller-manager-78c8c6b64-d9ddw\" (UID: \"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec\") " pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.727207 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-proxy-ca-bundles\") pod \"controller-manager-78c8c6b64-d9ddw\" (UID: \"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec\") " pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.727230 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-client-ca\") pod \"controller-manager-78c8c6b64-d9ddw\" (UID: \"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec\") " pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.728365 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-client-ca\") pod \"controller-manager-78c8c6b64-d9ddw\" (UID: \"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec\") " pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.728778 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-proxy-ca-bundles\") pod \"controller-manager-78c8c6b64-d9ddw\" (UID: \"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec\") " pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.728973 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-config\") pod \"controller-manager-78c8c6b64-d9ddw\" (UID: \"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec\") " pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" Jan 
31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.732561 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-serving-cert\") pod \"controller-manager-78c8c6b64-d9ddw\" (UID: \"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec\") " pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.745680 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57882\" (UniqueName: \"kubernetes.io/projected/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-kube-api-access-57882\") pod \"controller-manager-78c8c6b64-d9ddw\" (UID: \"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec\") " pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" Jan 31 09:03:54 crc kubenswrapper[4830]: I0131 09:03:54.874644 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" Jan 31 09:03:56 crc kubenswrapper[4830]: I0131 09:03:56.787682 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:03:57 crc kubenswrapper[4830]: I0131 09:03:57.334614 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:03:57 crc kubenswrapper[4830]: I0131 09:03:57.339268 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:03:58 crc kubenswrapper[4830]: I0131 09:03:58.423684 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 09:03:58 crc kubenswrapper[4830]: I0131 09:03:58.488135 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e57f6250-bb98-487f-9d49-b9ed02c3db41-kubelet-dir\") pod \"e57f6250-bb98-487f-9d49-b9ed02c3db41\" (UID: \"e57f6250-bb98-487f-9d49-b9ed02c3db41\") " Jan 31 09:03:58 crc kubenswrapper[4830]: I0131 09:03:58.488314 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e57f6250-bb98-487f-9d49-b9ed02c3db41-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e57f6250-bb98-487f-9d49-b9ed02c3db41" (UID: "e57f6250-bb98-487f-9d49-b9ed02c3db41"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:03:58 crc kubenswrapper[4830]: I0131 09:03:58.488366 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e57f6250-bb98-487f-9d49-b9ed02c3db41-kube-api-access\") pod \"e57f6250-bb98-487f-9d49-b9ed02c3db41\" (UID: \"e57f6250-bb98-487f-9d49-b9ed02c3db41\") " Jan 31 09:03:58 crc kubenswrapper[4830]: I0131 09:03:58.488634 4830 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e57f6250-bb98-487f-9d49-b9ed02c3db41-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 09:03:58 crc kubenswrapper[4830]: I0131 09:03:58.494859 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e57f6250-bb98-487f-9d49-b9ed02c3db41-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e57f6250-bb98-487f-9d49-b9ed02c3db41" (UID: "e57f6250-bb98-487f-9d49-b9ed02c3db41"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:03:58 crc kubenswrapper[4830]: I0131 09:03:58.590326 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e57f6250-bb98-487f-9d49-b9ed02c3db41-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 09:03:59 crc kubenswrapper[4830]: I0131 09:03:59.249704 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e57f6250-bb98-487f-9d49-b9ed02c3db41","Type":"ContainerDied","Data":"6ca85121d47a1de200075173baf591cb389e4484d8209f5a8c1306738a400b58"} Jan 31 09:03:59 crc kubenswrapper[4830]: I0131 09:03:59.250235 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ca85121d47a1de200075173baf591cb389e4484d8209f5a8c1306738a400b58" Jan 31 09:03:59 crc kubenswrapper[4830]: I0131 09:03:59.249807 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 09:04:08 crc kubenswrapper[4830]: I0131 09:04:08.497686 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" Jan 31 09:04:09 crc kubenswrapper[4830]: E0131 09:04:09.772972 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 31 09:04:09 crc kubenswrapper[4830]: E0131 09:04:09.773668 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8lqw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-qgnfw_openshift-marketplace(3477b1ed-ccf0-4f60-9505-ff0e417750af): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 09:04:09 
crc kubenswrapper[4830]: E0131 09:04:09.774969 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qgnfw" podUID="3477b1ed-ccf0-4f60-9505-ff0e417750af" Jan 31 09:04:10 crc kubenswrapper[4830]: E0131 09:04:10.523532 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 31 09:04:10 crc kubenswrapper[4830]: E0131 09:04:10.523819 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rhc4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-sxn8r_openshift-marketplace(3868f465-887b-4580-8c17-293665785251): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 09:04:10 crc kubenswrapper[4830]: E0131 09:04:10.525063 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-sxn8r" podUID="3868f465-887b-4580-8c17-293665785251" Jan 31 09:04:10 crc kubenswrapper[4830]: I0131 09:04:10.888783 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78c8c6b64-d9ddw"] Jan 31 09:04:10 crc kubenswrapper[4830]: I0131 09:04:10.993400 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d"] Jan 31 09:04:11 crc kubenswrapper[4830]: E0131 09:04:11.770892 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sxn8r" podUID="3868f465-887b-4580-8c17-293665785251" Jan 31 09:04:11 crc kubenswrapper[4830]: E0131 09:04:11.772092 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qgnfw" podUID="3477b1ed-ccf0-4f60-9505-ff0e417750af" Jan 31 09:04:11 crc kubenswrapper[4830]: E0131 09:04:11.833375 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 09:04:11 crc kubenswrapper[4830]: E0131 09:04:11.834196 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7tpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-bqmr6_openshift-marketplace(ea666a92-d7aa-4e9b-8c54-88ad8ae517aa): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 09:04:11 crc kubenswrapper[4830]: E0131 09:04:11.835410 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-bqmr6" podUID="ea666a92-d7aa-4e9b-8c54-88ad8ae517aa" Jan 31 09:04:11 crc kubenswrapper[4830]: E0131 09:04:11.940133 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 09:04:11 crc kubenswrapper[4830]: E0131 09:04:11.940322 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhrcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-q8t9t_openshift-marketplace(db7a137a-b7f9-4446-85f6-ea0d2f0caedd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 09:04:11 crc kubenswrapper[4830]: E0131 09:04:11.941899 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-q8t9t" podUID="db7a137a-b7f9-4446-85f6-ea0d2f0caedd" Jan 31 09:04:14 crc kubenswrapper[4830]: I0131 09:04:14.353086 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:04:14 crc kubenswrapper[4830]: I0131 09:04:14.353184 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:04:17 crc kubenswrapper[4830]: I0131 09:04:17.804667 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 09:04:17 crc kubenswrapper[4830]: E0131 09:04:17.805626 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e57f6250-bb98-487f-9d49-b9ed02c3db41" containerName="pruner" Jan 31 09:04:17 crc kubenswrapper[4830]: I0131 
09:04:17.805655 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e57f6250-bb98-487f-9d49-b9ed02c3db41" containerName="pruner" Jan 31 09:04:17 crc kubenswrapper[4830]: I0131 09:04:17.805878 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="e57f6250-bb98-487f-9d49-b9ed02c3db41" containerName="pruner" Jan 31 09:04:17 crc kubenswrapper[4830]: I0131 09:04:17.806515 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 09:04:17 crc kubenswrapper[4830]: I0131 09:04:17.810046 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 09:04:17 crc kubenswrapper[4830]: I0131 09:04:17.810379 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 09:04:17 crc kubenswrapper[4830]: I0131 09:04:17.816648 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 09:04:17 crc kubenswrapper[4830]: I0131 09:04:17.932075 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 09:04:17 crc kubenswrapper[4830]: I0131 09:04:17.932189 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 09:04:18 crc kubenswrapper[4830]: I0131 09:04:18.034136 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 09:04:18 crc kubenswrapper[4830]: I0131 09:04:18.034215 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 09:04:18 crc kubenswrapper[4830]: I0131 09:04:18.034329 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 09:04:18 crc kubenswrapper[4830]: I0131 09:04:18.318319 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 09:04:18 crc kubenswrapper[4830]: I0131 09:04:18.430112 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 09:04:18 crc kubenswrapper[4830]: E0131 09:04:18.481590 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q8t9t" podUID="db7a137a-b7f9-4446-85f6-ea0d2f0caedd" Jan 31 09:04:18 crc kubenswrapper[4830]: E0131 09:04:18.482569 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bqmr6" podUID="ea666a92-d7aa-4e9b-8c54-88ad8ae517aa" Jan 31 09:04:18 crc kubenswrapper[4830]: I0131 09:04:18.499656 4830 scope.go:117] "RemoveContainer" containerID="022ea8a18a302916854f6b760b83a358dccdbbcd5c291d9804b6a782c98e9a71" Jan 31 09:04:18 crc kubenswrapper[4830]: I0131 09:04:18.713154 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d"] Jan 31 09:04:18 crc kubenswrapper[4830]: W0131 09:04:18.726380 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b68ddd6_bb0f_45ad_86e5_0c30bd513905.slice/crio-03c4a4b39e7f02973cd9067b40782d9b9f193f70b578e28a76a759936372e045 WatchSource:0}: Error finding container 03c4a4b39e7f02973cd9067b40782d9b9f193f70b578e28a76a759936372e045: Status 404 returned error can't find the container with id 03c4a4b39e7f02973cd9067b40782d9b9f193f70b578e28a76a759936372e045 Jan 31 09:04:18 crc kubenswrapper[4830]: I0131 09:04:18.753850 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 09:04:18 crc kubenswrapper[4830]: W0131 09:04:18.760535 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9edbde4e_2ffd_42bb_91f1_b016e8f9f1cd.slice/crio-71f13a4eb058709db49e67a4d5f6b52f4213ba8bd351e1caa1b0a2505a0300b3 WatchSource:0}: Error finding container 71f13a4eb058709db49e67a4d5f6b52f4213ba8bd351e1caa1b0a2505a0300b3: Status 404 returned error can't find the container with id 71f13a4eb058709db49e67a4d5f6b52f4213ba8bd351e1caa1b0a2505a0300b3 Jan 31 09:04:18 crc kubenswrapper[4830]: I0131 09:04:18.797734 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78c8c6b64-d9ddw"] Jan 31 09:04:18 crc kubenswrapper[4830]: W0131 09:04:18.805297 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd72dbe7e_a948_47c0_ac76_9f9b2d5d24ec.slice/crio-9a36db10b07c9e69468098ad24d5c85f07e9bc0d5b0538b61594e4e1014ab650 WatchSource:0}: Error finding container 9a36db10b07c9e69468098ad24d5c85f07e9bc0d5b0538b61594e4e1014ab650: Status 404 returned error can't find the container with id 9a36db10b07c9e69468098ad24d5c85f07e9bc0d5b0538b61594e4e1014ab650 Jan 31 09:04:19 crc kubenswrapper[4830]: I0131 09:04:19.397166 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" event={"ID":"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec","Type":"ContainerStarted","Data":"9a36db10b07c9e69468098ad24d5c85f07e9bc0d5b0538b61594e4e1014ab650"} Jan 31 09:04:19 crc 
kubenswrapper[4830]: I0131 09:04:19.398276 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd","Type":"ContainerStarted","Data":"71f13a4eb058709db49e67a4d5f6b52f4213ba8bd351e1caa1b0a2505a0300b3"} Jan 31 09:04:19 crc kubenswrapper[4830]: I0131 09:04:19.399574 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d" event={"ID":"9b68ddd6-bb0f-45ad-86e5-0c30bd513905","Type":"ContainerStarted","Data":"03c4a4b39e7f02973cd9067b40782d9b9f193f70b578e28a76a759936372e045"} Jan 31 09:04:19 crc kubenswrapper[4830]: E0131 09:04:19.778079 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 31 09:04:19 crc kubenswrapper[4830]: E0131 09:04:19.778602 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lz2g4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gs9bg_openshift-marketplace(ca8a4bb5-67d6-4e50-905f-95e0a15e376a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 09:04:19 crc kubenswrapper[4830]: E0131 09:04:19.780271 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-gs9bg" podUID="ca8a4bb5-67d6-4e50-905f-95e0a15e376a" Jan 31 09:04:19 crc kubenswrapper[4830]: E0131 09:04:19.852825 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 09:04:19 crc kubenswrapper[4830]: E0131 09:04:19.853052 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zts58,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-2ssr8_openshift-marketplace(3e020928-b063-4d3c-8992-e712fe3d1b1d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 09:04:19 crc kubenswrapper[4830]: E0131 09:04:19.854405 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-2ssr8" podUID="3e020928-b063-4d3c-8992-e712fe3d1b1d" Jan 31 09:04:20 crc kubenswrapper[4830]: I0131 09:04:20.409135 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" event={"ID":"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec","Type":"ContainerStarted","Data":"b98d16c213b0ce071aaf97966bdfc9703381aa64a7c703cb71514bac2afdcb9f"} Jan 31 09:04:20 crc kubenswrapper[4830]: I0131 09:04:20.409312 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" podUID="d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec" containerName="controller-manager" containerID="cri-o://b98d16c213b0ce071aaf97966bdfc9703381aa64a7c703cb71514bac2afdcb9f" gracePeriod=30 Jan 31 09:04:20 crc kubenswrapper[4830]: I0131 09:04:20.409581 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" Jan 31 09:04:20 crc kubenswrapper[4830]: I0131 09:04:20.411924 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd","Type":"ContainerStarted","Data":"4f32b234538f930f46a95987b5ea4f639349ce20eee48b42f1768e95f0c31ce7"} Jan 31 09:04:20 crc kubenswrapper[4830]: I0131 09:04:20.414442 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d" event={"ID":"9b68ddd6-bb0f-45ad-86e5-0c30bd513905","Type":"ContainerStarted","Data":"8503c981f2071e87dfef830f80e3699bf847237d9e2533b4507ad11c3f7d8425"} Jan 31 09:04:20 crc kubenswrapper[4830]: I0131 09:04:20.414478 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d" podUID="9b68ddd6-bb0f-45ad-86e5-0c30bd513905" containerName="route-controller-manager" containerID="cri-o://8503c981f2071e87dfef830f80e3699bf847237d9e2533b4507ad11c3f7d8425" gracePeriod=30 Jan 31 09:04:20 crc kubenswrapper[4830]: I0131 09:04:20.414522 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d" Jan 31 09:04:20 crc kubenswrapper[4830]: E0131 09:04:20.417111 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gs9bg" podUID="ca8a4bb5-67d6-4e50-905f-95e0a15e376a" Jan 31 09:04:20 crc kubenswrapper[4830]: E0131 09:04:20.417830 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2ssr8" podUID="3e020928-b063-4d3c-8992-e712fe3d1b1d" Jan 31 09:04:20 crc kubenswrapper[4830]: I0131 09:04:20.420323 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" Jan 31 09:04:20 crc kubenswrapper[4830]: I0131 09:04:20.422681 4830 patch_prober.go:28] interesting pod/route-controller-manager-7bd8785496-4t72d container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:58246->10.217.0.54:8443: read: connection reset by peer" start-of-body= Jan 31 09:04:20 crc kubenswrapper[4830]: I0131 09:04:20.422744 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d" podUID="9b68ddd6-bb0f-45ad-86e5-0c30bd513905" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:58246->10.217.0.54:8443: read: connection reset by peer" Jan 31 09:04:20 crc kubenswrapper[4830]: I0131 09:04:20.435877 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" podStartSLOduration=30.435855202 podStartE2EDuration="30.435855202s" podCreationTimestamp="2026-01-31 09:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:04:20.434239144 +0000 UTC m=+204.927601586" watchObservedRunningTime="2026-01-31 09:04:20.435855202 +0000 UTC m=+204.929217644" Jan 
31 09:04:20 crc kubenswrapper[4830]: I0131 09:04:20.497081 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d" podStartSLOduration=29.497051753 podStartE2EDuration="29.497051753s" podCreationTimestamp="2026-01-31 09:03:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:04:20.493694883 +0000 UTC m=+204.987057325" watchObservedRunningTime="2026-01-31 09:04:20.497051753 +0000 UTC m=+204.990414205" Jan 31 09:04:20 crc kubenswrapper[4830]: I0131 09:04:20.533681 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=3.533650632 podStartE2EDuration="3.533650632s" podCreationTimestamp="2026-01-31 09:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:04:20.531346754 +0000 UTC m=+205.024709216" watchObservedRunningTime="2026-01-31 09:04:20.533650632 +0000 UTC m=+205.027013074" Jan 31 09:04:20 crc kubenswrapper[4830]: E0131 09:04:20.934417 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 09:04:20 crc kubenswrapper[4830]: E0131 09:04:20.934641 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j8s8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-dcmsg_openshift-marketplace(a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 09:04:20 crc kubenswrapper[4830]: E0131 09:04:20.936179 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-dcmsg" podUID="a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33" Jan 31 09:04:21 crc kubenswrapper[4830]: I0131 09:04:21.424433 4830 generic.go:334] "Generic (PLEG): container finished" podID="d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec" containerID="b98d16c213b0ce071aaf97966bdfc9703381aa64a7c703cb71514bac2afdcb9f" exitCode=0 Jan 31 09:04:21 crc kubenswrapper[4830]: I0131 09:04:21.424584 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" event={"ID":"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec","Type":"ContainerDied","Data":"b98d16c213b0ce071aaf97966bdfc9703381aa64a7c703cb71514bac2afdcb9f"} Jan 31 09:04:21 crc kubenswrapper[4830]: I0131 09:04:21.426709 4830 generic.go:334] "Generic (PLEG): container finished" podID="9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd" containerID="4f32b234538f930f46a95987b5ea4f639349ce20eee48b42f1768e95f0c31ce7" exitCode=0 Jan 31 09:04:21 crc kubenswrapper[4830]: I0131 09:04:21.426902 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd","Type":"ContainerDied","Data":"4f32b234538f930f46a95987b5ea4f639349ce20eee48b42f1768e95f0c31ce7"} Jan 31 09:04:21 crc kubenswrapper[4830]: I0131 09:04:21.428983 4830 generic.go:334] "Generic (PLEG): container finished" podID="9b68ddd6-bb0f-45ad-86e5-0c30bd513905" containerID="8503c981f2071e87dfef830f80e3699bf847237d9e2533b4507ad11c3f7d8425" exitCode=0 Jan 31 09:04:21 crc kubenswrapper[4830]: I0131 09:04:21.429370 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d" event={"ID":"9b68ddd6-bb0f-45ad-86e5-0c30bd513905","Type":"ContainerDied","Data":"8503c981f2071e87dfef830f80e3699bf847237d9e2533b4507ad11c3f7d8425"} Jan 31 09:04:21 crc kubenswrapper[4830]: E0131 09:04:21.431487 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-dcmsg" podUID="a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33" Jan 31 09:04:21 crc kubenswrapper[4830]: I0131 09:04:21.934009 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" Jan 31 09:04:21 crc kubenswrapper[4830]: I0131 09:04:21.964673 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-76d6f96965-8fjhp"] Jan 31 09:04:21 crc kubenswrapper[4830]: E0131 09:04:21.964961 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec" containerName="controller-manager" Jan 31 09:04:21 crc kubenswrapper[4830]: I0131 09:04:21.964976 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec" containerName="controller-manager" Jan 31 09:04:21 crc kubenswrapper[4830]: I0131 09:04:21.965113 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec" containerName="controller-manager" Jan 31 09:04:21 crc kubenswrapper[4830]: I0131 09:04:21.965563 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" Jan 31 09:04:21 crc kubenswrapper[4830]: I0131 09:04:21.979043 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76d6f96965-8fjhp"] Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.013991 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.095928 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-client-ca\") pod \"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec\" (UID: \"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec\") " Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.096064 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-serving-cert\") pod \"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec\" (UID: \"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec\") " Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.096134 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57882\" (UniqueName: \"kubernetes.io/projected/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-kube-api-access-57882\") pod \"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec\" (UID: \"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec\") " Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.096174 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-proxy-ca-bundles\") pod \"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec\" (UID: \"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec\") " Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.096226 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-config\") pod \"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec\" (UID: \"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec\") " Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.096430 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38a174d9-22cd-4e22-b157-499b9df4e292-config\") pod \"controller-manager-76d6f96965-8fjhp\" (UID: \"38a174d9-22cd-4e22-b157-499b9df4e292\") " pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.096468 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38a174d9-22cd-4e22-b157-499b9df4e292-serving-cert\") pod \"controller-manager-76d6f96965-8fjhp\" (UID: \"38a174d9-22cd-4e22-b157-499b9df4e292\") " pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.096483 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38a174d9-22cd-4e22-b157-499b9df4e292-client-ca\") pod \"controller-manager-76d6f96965-8fjhp\" (UID: \"38a174d9-22cd-4e22-b157-499b9df4e292\") " 
pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.096518 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhfcd\" (UniqueName: \"kubernetes.io/projected/38a174d9-22cd-4e22-b157-499b9df4e292-kube-api-access-hhfcd\") pod \"controller-manager-76d6f96965-8fjhp\" (UID: \"38a174d9-22cd-4e22-b157-499b9df4e292\") " pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.096551 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38a174d9-22cd-4e22-b157-499b9df4e292-proxy-ca-bundles\") pod \"controller-manager-76d6f96965-8fjhp\" (UID: \"38a174d9-22cd-4e22-b157-499b9df4e292\") " pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.096884 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-client-ca" (OuterVolumeSpecName: "client-ca") pod "d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec" (UID: "d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.097311 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec" (UID: "d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.097445 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-config" (OuterVolumeSpecName: "config") pod "d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec" (UID: "d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.102065 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-kube-api-access-57882" (OuterVolumeSpecName: "kube-api-access-57882") pod "d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec" (UID: "d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec"). InnerVolumeSpecName "kube-api-access-57882". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.109672 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec" (UID: "d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.198118 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b68ddd6-bb0f-45ad-86e5-0c30bd513905-serving-cert\") pod \"9b68ddd6-bb0f-45ad-86e5-0c30bd513905\" (UID: \"9b68ddd6-bb0f-45ad-86e5-0c30bd513905\") " Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.198232 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qznw8\" (UniqueName: \"kubernetes.io/projected/9b68ddd6-bb0f-45ad-86e5-0c30bd513905-kube-api-access-qznw8\") pod \"9b68ddd6-bb0f-45ad-86e5-0c30bd513905\" (UID: \"9b68ddd6-bb0f-45ad-86e5-0c30bd513905\") " Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.198295 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b68ddd6-bb0f-45ad-86e5-0c30bd513905-config\") pod \"9b68ddd6-bb0f-45ad-86e5-0c30bd513905\" (UID: \"9b68ddd6-bb0f-45ad-86e5-0c30bd513905\") " Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.198339 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b68ddd6-bb0f-45ad-86e5-0c30bd513905-client-ca\") pod \"9b68ddd6-bb0f-45ad-86e5-0c30bd513905\" (UID: \"9b68ddd6-bb0f-45ad-86e5-0c30bd513905\") " Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.198570 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38a174d9-22cd-4e22-b157-499b9df4e292-serving-cert\") pod \"controller-manager-76d6f96965-8fjhp\" (UID: \"38a174d9-22cd-4e22-b157-499b9df4e292\") " pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.198597 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38a174d9-22cd-4e22-b157-499b9df4e292-client-ca\") pod \"controller-manager-76d6f96965-8fjhp\" (UID: \"38a174d9-22cd-4e22-b157-499b9df4e292\") " pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.198641 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhfcd\" (UniqueName: \"kubernetes.io/projected/38a174d9-22cd-4e22-b157-499b9df4e292-kube-api-access-hhfcd\") pod \"controller-manager-76d6f96965-8fjhp\" (UID: \"38a174d9-22cd-4e22-b157-499b9df4e292\") " pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.198662 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38a174d9-22cd-4e22-b157-499b9df4e292-proxy-ca-bundles\") pod \"controller-manager-76d6f96965-8fjhp\" (UID: \"38a174d9-22cd-4e22-b157-499b9df4e292\") " pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.198713 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38a174d9-22cd-4e22-b157-499b9df4e292-config\") pod \"controller-manager-76d6f96965-8fjhp\" (UID: \"38a174d9-22cd-4e22-b157-499b9df4e292\") " pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" Jan 31 
09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.198778 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.198791 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57882\" (UniqueName: \"kubernetes.io/projected/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-kube-api-access-57882\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.198803 4830 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.198813 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.198821 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.199888 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38a174d9-22cd-4e22-b157-499b9df4e292-client-ca\") pod \"controller-manager-76d6f96965-8fjhp\" (UID: \"38a174d9-22cd-4e22-b157-499b9df4e292\") " pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.199972 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b68ddd6-bb0f-45ad-86e5-0c30bd513905-client-ca" (OuterVolumeSpecName: "client-ca") pod "9b68ddd6-bb0f-45ad-86e5-0c30bd513905" (UID: "9b68ddd6-bb0f-45ad-86e5-0c30bd513905"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.200105 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b68ddd6-bb0f-45ad-86e5-0c30bd513905-config" (OuterVolumeSpecName: "config") pod "9b68ddd6-bb0f-45ad-86e5-0c30bd513905" (UID: "9b68ddd6-bb0f-45ad-86e5-0c30bd513905"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.200374 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38a174d9-22cd-4e22-b157-499b9df4e292-proxy-ca-bundles\") pod \"controller-manager-76d6f96965-8fjhp\" (UID: \"38a174d9-22cd-4e22-b157-499b9df4e292\") " pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.200614 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38a174d9-22cd-4e22-b157-499b9df4e292-config\") pod \"controller-manager-76d6f96965-8fjhp\" (UID: \"38a174d9-22cd-4e22-b157-499b9df4e292\") " pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.202329 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b68ddd6-bb0f-45ad-86e5-0c30bd513905-kube-api-access-qznw8" (OuterVolumeSpecName: "kube-api-access-qznw8") pod "9b68ddd6-bb0f-45ad-86e5-0c30bd513905" (UID: "9b68ddd6-bb0f-45ad-86e5-0c30bd513905"). InnerVolumeSpecName "kube-api-access-qznw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.205098 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b68ddd6-bb0f-45ad-86e5-0c30bd513905-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9b68ddd6-bb0f-45ad-86e5-0c30bd513905" (UID: "9b68ddd6-bb0f-45ad-86e5-0c30bd513905"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.211833 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38a174d9-22cd-4e22-b157-499b9df4e292-serving-cert\") pod \"controller-manager-76d6f96965-8fjhp\" (UID: \"38a174d9-22cd-4e22-b157-499b9df4e292\") " pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.223514 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhfcd\" (UniqueName: \"kubernetes.io/projected/38a174d9-22cd-4e22-b157-499b9df4e292-kube-api-access-hhfcd\") pod \"controller-manager-76d6f96965-8fjhp\" (UID: \"38a174d9-22cd-4e22-b157-499b9df4e292\") " pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.299817 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b68ddd6-bb0f-45ad-86e5-0c30bd513905-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.299867 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b68ddd6-bb0f-45ad-86e5-0c30bd513905-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.299884 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b68ddd6-bb0f-45ad-86e5-0c30bd513905-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.299903 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qznw8\" (UniqueName: 
\"kubernetes.io/projected/9b68ddd6-bb0f-45ad-86e5-0c30bd513905-kube-api-access-qznw8\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.311794 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.446011 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" event={"ID":"d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec","Type":"ContainerDied","Data":"9a36db10b07c9e69468098ad24d5c85f07e9bc0d5b0538b61594e4e1014ab650"} Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.446076 4830 scope.go:117] "RemoveContainer" containerID="b98d16c213b0ce071aaf97966bdfc9703381aa64a7c703cb71514bac2afdcb9f" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.446241 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78c8c6b64-d9ddw" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.451481 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.451914 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d" event={"ID":"9b68ddd6-bb0f-45ad-86e5-0c30bd513905","Type":"ContainerDied","Data":"03c4a4b39e7f02973cd9067b40782d9b9f193f70b578e28a76a759936372e045"} Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.477700 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78c8c6b64-d9ddw"] Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.482289 4830 scope.go:117] "RemoveContainer" containerID="8503c981f2071e87dfef830f80e3699bf847237d9e2533b4507ad11c3f7d8425" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.484012 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-78c8c6b64-d9ddw"] Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.496014 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d"] Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.499468 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bd8785496-4t72d"] Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.566352 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76d6f96965-8fjhp"] Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.609366 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 09:04:22 crc kubenswrapper[4830]: E0131 09:04:22.609656 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b68ddd6-bb0f-45ad-86e5-0c30bd513905" containerName="route-controller-manager" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.609670 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b68ddd6-bb0f-45ad-86e5-0c30bd513905" containerName="route-controller-manager" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.609804 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b68ddd6-bb0f-45ad-86e5-0c30bd513905" 
containerName="route-controller-manager" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.610263 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.613894 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.686109 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.708233 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd-kube-api-access\") pod \"9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd\" (UID: \"9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd\") " Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.708279 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd-kubelet-dir\") pod \"9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd\" (UID: \"9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd\") " Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.708413 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0cfd30f0-26ec-4ac4-b315-4d99ec492231-var-lock\") pod \"installer-9-crc\" (UID: \"0cfd30f0-26ec-4ac4-b315-4d99ec492231\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.708463 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cfd30f0-26ec-4ac4-b315-4d99ec492231-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0cfd30f0-26ec-4ac4-b315-4d99ec492231\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.708505 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cfd30f0-26ec-4ac4-b315-4d99ec492231-kube-api-access\") pod \"installer-9-crc\" (UID: \"0cfd30f0-26ec-4ac4-b315-4d99ec492231\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.708670 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd" (UID: "9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.729498 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd" (UID: "9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.809382 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0cfd30f0-26ec-4ac4-b315-4d99ec492231-var-lock\") pod \"installer-9-crc\" (UID: \"0cfd30f0-26ec-4ac4-b315-4d99ec492231\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.809460 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cfd30f0-26ec-4ac4-b315-4d99ec492231-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0cfd30f0-26ec-4ac4-b315-4d99ec492231\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.809515 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cfd30f0-26ec-4ac4-b315-4d99ec492231-kube-api-access\") pod \"installer-9-crc\" (UID: \"0cfd30f0-26ec-4ac4-b315-4d99ec492231\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.809568 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.809584 4830 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.809589 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cfd30f0-26ec-4ac4-b315-4d99ec492231-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0cfd30f0-26ec-4ac4-b315-4d99ec492231\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.809605 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0cfd30f0-26ec-4ac4-b315-4d99ec492231-var-lock\") pod \"installer-9-crc\" (UID: \"0cfd30f0-26ec-4ac4-b315-4d99ec492231\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.825759 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cfd30f0-26ec-4ac4-b315-4d99ec492231-kube-api-access\") pod \"installer-9-crc\" (UID: \"0cfd30f0-26ec-4ac4-b315-4d99ec492231\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 09:04:22 crc kubenswrapper[4830]: I0131 09:04:22.935925 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 09:04:23 crc kubenswrapper[4830]: I0131 09:04:23.154830 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 09:04:23 crc kubenswrapper[4830]: W0131 09:04:23.160498 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0cfd30f0_26ec_4ac4_b315_4d99ec492231.slice/crio-b1808037e35e61011f2412bc618c1f4796b10bace9b781f8d89764a82acd957f WatchSource:0}: Error finding container b1808037e35e61011f2412bc618c1f4796b10bace9b781f8d89764a82acd957f: Status 404 returned error can't find the container with id b1808037e35e61011f2412bc618c1f4796b10bace9b781f8d89764a82acd957f Jan 31 09:04:23 crc kubenswrapper[4830]: I0131 09:04:23.458894 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" event={"ID":"38a174d9-22cd-4e22-b157-499b9df4e292","Type":"ContainerStarted","Data":"78b5d86a6ebc55c0d5cda5d77a1f680a78970876d1a15d1c4c1377bbada48ebe"} Jan 31 09:04:23 crc kubenswrapper[4830]: I0131 09:04:23.461298 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd","Type":"ContainerDied","Data":"71f13a4eb058709db49e67a4d5f6b52f4213ba8bd351e1caa1b0a2505a0300b3"} Jan 31 09:04:23 crc kubenswrapper[4830]: I0131 09:04:23.461338 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71f13a4eb058709db49e67a4d5f6b52f4213ba8bd351e1caa1b0a2505a0300b3" Jan 31 09:04:23 crc kubenswrapper[4830]: I0131 09:04:23.461385 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 09:04:23 crc kubenswrapper[4830]: I0131 09:04:23.463399 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0cfd30f0-26ec-4ac4-b315-4d99ec492231","Type":"ContainerStarted","Data":"b1808037e35e61011f2412bc618c1f4796b10bace9b781f8d89764a82acd957f"} Jan 31 09:04:24 crc kubenswrapper[4830]: E0131 09:04:24.248931 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 31 09:04:24 crc kubenswrapper[4830]: E0131 09:04:24.249267 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xn6ll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-h79tz_openshift-marketplace(716e78b2-1856-45e6-a3fa-73538be51a97): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 09:04:24 crc kubenswrapper[4830]: E0131 09:04:24.251158 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-h79tz" podUID="716e78b2-1856-45e6-a3fa-73538be51a97" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.266892 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b68ddd6-bb0f-45ad-86e5-0c30bd513905" path="/var/lib/kubelet/pods/9b68ddd6-bb0f-45ad-86e5-0c30bd513905/volumes" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.267769 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec" path="/var/lib/kubelet/pods/d72dbe7e-a948-47c0-ac76-9f9b2d5d24ec/volumes" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.470326 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0cfd30f0-26ec-4ac4-b315-4d99ec492231","Type":"ContainerStarted","Data":"5335aa5937ff53a6c2ed174596ead653063c3b31846269fb0ed8abf78cd62068"} Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.471864 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" event={"ID":"38a174d9-22cd-4e22-b157-499b9df4e292","Type":"ContainerStarted","Data":"b86cf5e18f70fb8b1beeb189caff6edda4380b839fa823e063dd7c42a9230881"} Jan 31 09:04:24 crc kubenswrapper[4830]: E0131 09:04:24.473678 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-h79tz" podUID="716e78b2-1856-45e6-a3fa-73538be51a97" Jan 31 09:04:24 crc kubenswrapper[4830]: 
I0131 09:04:24.491320 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.491289997 podStartE2EDuration="2.491289997s" podCreationTimestamp="2026-01-31 09:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:04:24.488803243 +0000 UTC m=+208.982165695" watchObservedRunningTime="2026-01-31 09:04:24.491289997 +0000 UTC m=+208.984652439" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.516839 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" podStartSLOduration=14.516814167 podStartE2EDuration="14.516814167s" podCreationTimestamp="2026-01-31 09:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:04:24.514786066 +0000 UTC m=+209.008148508" watchObservedRunningTime="2026-01-31 09:04:24.516814167 +0000 UTC m=+209.010176609" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.672492 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m"] Jan 31 09:04:24 crc kubenswrapper[4830]: E0131 09:04:24.672820 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd" containerName="pruner" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.672838 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd" containerName="pruner" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.672942 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="9edbde4e-2ffd-42bb-91f1-b016e8f9f1cd" containerName="pruner" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.673402 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.676689 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.676795 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.677073 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.679869 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.680106 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.680507 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.694259 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m"] Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.742515 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e0a1032-01f8-476b-b725-b43704ec47df-serving-cert\") pod \"route-controller-manager-794778c64d-6v65m\" (UID: \"6e0a1032-01f8-476b-b725-b43704ec47df\") " pod="openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.742597 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e0a1032-01f8-476b-b725-b43704ec47df-client-ca\") pod \"route-controller-manager-794778c64d-6v65m\" (UID: \"6e0a1032-01f8-476b-b725-b43704ec47df\") " pod="openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.742654 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws5gw\" (UniqueName: \"kubernetes.io/projected/6e0a1032-01f8-476b-b725-b43704ec47df-kube-api-access-ws5gw\") pod \"route-controller-manager-794778c64d-6v65m\" (UID: \"6e0a1032-01f8-476b-b725-b43704ec47df\") " pod="openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.742689 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e0a1032-01f8-476b-b725-b43704ec47df-config\") pod \"route-controller-manager-794778c64d-6v65m\" (UID: \"6e0a1032-01f8-476b-b725-b43704ec47df\") " pod="openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.844156 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e0a1032-01f8-476b-b725-b43704ec47df-serving-cert\") pod 
\"route-controller-manager-794778c64d-6v65m\" (UID: \"6e0a1032-01f8-476b-b725-b43704ec47df\") " pod="openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.844229 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e0a1032-01f8-476b-b725-b43704ec47df-client-ca\") pod \"route-controller-manager-794778c64d-6v65m\" (UID: \"6e0a1032-01f8-476b-b725-b43704ec47df\") " pod="openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.844265 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws5gw\" (UniqueName: \"kubernetes.io/projected/6e0a1032-01f8-476b-b725-b43704ec47df-kube-api-access-ws5gw\") pod \"route-controller-manager-794778c64d-6v65m\" (UID: \"6e0a1032-01f8-476b-b725-b43704ec47df\") " pod="openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.844290 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e0a1032-01f8-476b-b725-b43704ec47df-config\") pod \"route-controller-manager-794778c64d-6v65m\" (UID: \"6e0a1032-01f8-476b-b725-b43704ec47df\") " pod="openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.845389 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e0a1032-01f8-476b-b725-b43704ec47df-client-ca\") pod \"route-controller-manager-794778c64d-6v65m\" (UID: \"6e0a1032-01f8-476b-b725-b43704ec47df\") " pod="openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.845764 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e0a1032-01f8-476b-b725-b43704ec47df-config\") pod \"route-controller-manager-794778c64d-6v65m\" (UID: \"6e0a1032-01f8-476b-b725-b43704ec47df\") " pod="openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.854180 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e0a1032-01f8-476b-b725-b43704ec47df-serving-cert\") pod \"route-controller-manager-794778c64d-6v65m\" (UID: \"6e0a1032-01f8-476b-b725-b43704ec47df\") " pod="openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.863207 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws5gw\" (UniqueName: \"kubernetes.io/projected/6e0a1032-01f8-476b-b725-b43704ec47df-kube-api-access-ws5gw\") pod \"route-controller-manager-794778c64d-6v65m\" (UID: \"6e0a1032-01f8-476b-b725-b43704ec47df\") " pod="openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m" Jan 31 09:04:24 crc kubenswrapper[4830]: I0131 09:04:24.993537 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m" Jan 31 09:04:25 crc kubenswrapper[4830]: I0131 09:04:25.227360 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m"] Jan 31 09:04:25 crc kubenswrapper[4830]: I0131 09:04:25.476780 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" Jan 31 09:04:25 crc kubenswrapper[4830]: I0131 09:04:25.483143 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" Jan 31 09:04:25 crc kubenswrapper[4830]: W0131 09:04:25.949691 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e0a1032_01f8_476b_b725_b43704ec47df.slice/crio-4699ff81713f10e7a07d7662957ae156b7bf4c620bea69c41c20dcb0051c4649 WatchSource:0}: Error finding container 4699ff81713f10e7a07d7662957ae156b7bf4c620bea69c41c20dcb0051c4649: Status 404 returned error can't find the container with id 4699ff81713f10e7a07d7662957ae156b7bf4c620bea69c41c20dcb0051c4649 Jan 31 09:04:26 crc kubenswrapper[4830]: I0131 09:04:26.484917 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m" event={"ID":"6e0a1032-01f8-476b-b725-b43704ec47df","Type":"ContainerStarted","Data":"5506e8040cc3bfa8ac6090d25dc10ca91b43d9d322036b1d7b95dcfdcc51ddd7"} Jan 31 09:04:26 crc kubenswrapper[4830]: I0131 09:04:26.486826 4830 generic.go:334] "Generic (PLEG): container finished" podID="3477b1ed-ccf0-4f60-9505-ff0e417750af" containerID="e99b334a166cdca121c4721c09952221d95a22a8eb4841d936a0e1ec1aecb5fb" exitCode=0 Jan 31 09:04:26 crc kubenswrapper[4830]: I0131 09:04:26.488403 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m" event={"ID":"6e0a1032-01f8-476b-b725-b43704ec47df","Type":"ContainerStarted","Data":"4699ff81713f10e7a07d7662957ae156b7bf4c620bea69c41c20dcb0051c4649"} Jan 31 09:04:26 crc kubenswrapper[4830]: I0131 09:04:26.488484 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m" Jan 31 09:04:26 crc kubenswrapper[4830]: I0131 09:04:26.488501 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgnfw" event={"ID":"3477b1ed-ccf0-4f60-9505-ff0e417750af","Type":"ContainerDied","Data":"e99b334a166cdca121c4721c09952221d95a22a8eb4841d936a0e1ec1aecb5fb"} Jan 31 09:04:26 crc kubenswrapper[4830]: I0131 09:04:26.509448 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m" podStartSLOduration=15.509423334 podStartE2EDuration="15.509423334s" podCreationTimestamp="2026-01-31 09:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:04:26.507400524 +0000 UTC m=+211.000762966" watchObservedRunningTime="2026-01-31 09:04:26.509423334 +0000 UTC m=+211.002785776" Jan 31 09:04:26 crc kubenswrapper[4830]: I0131 09:04:26.637389 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m" Jan 31 09:04:27 crc kubenswrapper[4830]: I0131 09:04:27.498609 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgnfw" event={"ID":"3477b1ed-ccf0-4f60-9505-ff0e417750af","Type":"ContainerStarted","Data":"0f46baf323f9dce50b0b183977967ceda8e845ec58055d9b4e9cbe29b23f2eea"} Jan 31 09:04:27 crc kubenswrapper[4830]: I0131 09:04:27.502832 4830 generic.go:334] "Generic (PLEG): container finished" podID="3868f465-887b-4580-8c17-293665785251" containerID="5e9914065ddf6845fb0f9e3be4b114e59184a828a92bd6b2c9b6a3946d6f3692" exitCode=0 Jan 31 09:04:27 crc kubenswrapper[4830]: I0131 09:04:27.503569 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxn8r" event={"ID":"3868f465-887b-4580-8c17-293665785251","Type":"ContainerDied","Data":"5e9914065ddf6845fb0f9e3be4b114e59184a828a92bd6b2c9b6a3946d6f3692"} Jan 31 09:04:27 crc kubenswrapper[4830]: I0131 09:04:27.521306 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qgnfw" podStartSLOduration=2.429720394 podStartE2EDuration="51.521279866s" podCreationTimestamp="2026-01-31 09:03:36 +0000 UTC" firstStartedPulling="2026-01-31 09:03:37.878959694 +0000 UTC m=+162.372322126" lastFinishedPulling="2026-01-31 09:04:26.970519156 +0000 UTC m=+211.463881598" observedRunningTime="2026-01-31 09:04:27.519681809 +0000 UTC m=+212.013044251" watchObservedRunningTime="2026-01-31 09:04:27.521279866 +0000 UTC m=+212.014642308" Jan 31 09:04:28 crc kubenswrapper[4830]: I0131 09:04:28.511319 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxn8r" event={"ID":"3868f465-887b-4580-8c17-293665785251","Type":"ContainerStarted","Data":"d6e16cadc700d0fcb28f1b5f96aa36d73c6023b86ddef29f9188be612c19f246"} Jan 31 09:04:28 crc kubenswrapper[4830]: I0131 09:04:28.537285 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sxn8r" podStartSLOduration=2.477797756 podStartE2EDuration="52.537256731s" podCreationTimestamp="2026-01-31 09:03:36 +0000 UTC" firstStartedPulling="2026-01-31 09:03:37.87846616 +0000 UTC m=+162.371828602" lastFinishedPulling="2026-01-31 09:04:27.937925135 +0000 UTC m=+212.431287577" observedRunningTime="2026-01-31 09:04:28.534618993 +0000 UTC m=+213.027981445" watchObservedRunningTime="2026-01-31 09:04:28.537256731 +0000 UTC m=+213.030619173" Jan 31 09:04:33 crc kubenswrapper[4830]: I0131 09:04:33.553872 4830 generic.go:334] "Generic (PLEG): container finished" podID="db7a137a-b7f9-4446-85f6-ea0d2f0caedd" containerID="e3739938c8209aea2c94a2139d565624e6a0e9b2b2a62ff1d79c01c29bb78ba1" exitCode=0 Jan 31 09:04:33 crc kubenswrapper[4830]: I0131 09:04:33.554009 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8t9t" event={"ID":"db7a137a-b7f9-4446-85f6-ea0d2f0caedd","Type":"ContainerDied","Data":"e3739938c8209aea2c94a2139d565624e6a0e9b2b2a62ff1d79c01c29bb78ba1"} Jan 31 09:04:33 crc kubenswrapper[4830]: I0131 09:04:33.560077 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gs9bg" event={"ID":"ca8a4bb5-67d6-4e50-905f-95e0a15e376a","Type":"ContainerStarted","Data":"69c9e1248ed8682621278217a7f13f6c22489ce8b103c0312a4e64be9018ae62"} Jan 31 09:04:33 crc kubenswrapper[4830]: I0131 09:04:33.564270 4830 generic.go:334] 
"Generic (PLEG): container finished" podID="ea666a92-d7aa-4e9b-8c54-88ad8ae517aa" containerID="b1b905da8423eb8a3a3e472636dd6a55d709b18193b3b518df5fc2d1ac6e474b" exitCode=0 Jan 31 09:04:33 crc kubenswrapper[4830]: I0131 09:04:33.564322 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bqmr6" event={"ID":"ea666a92-d7aa-4e9b-8c54-88ad8ae517aa","Type":"ContainerDied","Data":"b1b905da8423eb8a3a3e472636dd6a55d709b18193b3b518df5fc2d1ac6e474b"} Jan 31 09:04:34 crc kubenswrapper[4830]: I0131 09:04:34.573837 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8t9t" event={"ID":"db7a137a-b7f9-4446-85f6-ea0d2f0caedd","Type":"ContainerStarted","Data":"c350e9a183f1092001e8d6788224e69c58a2be2488073ec39b0a19d0bf81b52c"} Jan 31 09:04:34 crc kubenswrapper[4830]: I0131 09:04:34.579158 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bqmr6" event={"ID":"ea666a92-d7aa-4e9b-8c54-88ad8ae517aa","Type":"ContainerStarted","Data":"b08418445c5d59d02fe608ce449d876ebb7b3e993ea892e122d3a167048a5ecb"} Jan 31 09:04:34 crc kubenswrapper[4830]: I0131 09:04:34.584119 4830 generic.go:334] "Generic (PLEG): container finished" podID="ca8a4bb5-67d6-4e50-905f-95e0a15e376a" containerID="69c9e1248ed8682621278217a7f13f6c22489ce8b103c0312a4e64be9018ae62" exitCode=0 Jan 31 09:04:34 crc kubenswrapper[4830]: I0131 09:04:34.584172 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gs9bg" event={"ID":"ca8a4bb5-67d6-4e50-905f-95e0a15e376a","Type":"ContainerDied","Data":"69c9e1248ed8682621278217a7f13f6c22489ce8b103c0312a4e64be9018ae62"} Jan 31 09:04:34 crc kubenswrapper[4830]: I0131 09:04:34.601560 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q8t9t" podStartSLOduration=3.029696727 podStartE2EDuration="1m0.601538437s" podCreationTimestamp="2026-01-31 09:03:34 +0000 UTC" firstStartedPulling="2026-01-31 09:03:36.71356993 +0000 UTC m=+161.206932372" lastFinishedPulling="2026-01-31 09:04:34.28541164 +0000 UTC m=+218.778774082" observedRunningTime="2026-01-31 09:04:34.598817706 +0000 UTC m=+219.092180168" watchObservedRunningTime="2026-01-31 09:04:34.601538437 +0000 UTC m=+219.094900869" Jan 31 09:04:35 crc kubenswrapper[4830]: I0131 09:04:35.005079 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q8t9t" Jan 31 09:04:35 crc kubenswrapper[4830]: I0131 09:04:35.005143 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q8t9t" Jan 31 09:04:35 crc kubenswrapper[4830]: I0131 09:04:35.271448 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bqmr6" podStartSLOduration=3.816546302 podStartE2EDuration="1m1.271419662s" podCreationTimestamp="2026-01-31 09:03:34 +0000 UTC" firstStartedPulling="2026-01-31 09:03:36.71427731 +0000 UTC m=+161.207639752" lastFinishedPulling="2026-01-31 09:04:34.16915068 +0000 UTC m=+218.662513112" observedRunningTime="2026-01-31 09:04:34.643091034 +0000 UTC m=+219.136453486" watchObservedRunningTime="2026-01-31 09:04:35.271419662 +0000 UTC m=+219.764782104" Jan 31 09:04:35 crc kubenswrapper[4830]: I0131 09:04:35.314572 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bqmr6" Jan 31 
09:04:35 crc kubenswrapper[4830]: I0131 09:04:35.314643 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bqmr6" Jan 31 09:04:35 crc kubenswrapper[4830]: I0131 09:04:35.592655 4830 generic.go:334] "Generic (PLEG): container finished" podID="3e020928-b063-4d3c-8992-e712fe3d1b1d" containerID="4907ca9744784c8af949d7901fcde921f68f7c0fe7a1e5c5d1028c7bbdf74675" exitCode=0 Jan 31 09:04:35 crc kubenswrapper[4830]: I0131 09:04:35.592753 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ssr8" event={"ID":"3e020928-b063-4d3c-8992-e712fe3d1b1d","Type":"ContainerDied","Data":"4907ca9744784c8af949d7901fcde921f68f7c0fe7a1e5c5d1028c7bbdf74675"} Jan 31 09:04:35 crc kubenswrapper[4830]: I0131 09:04:35.599700 4830 generic.go:334] "Generic (PLEG): container finished" podID="a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33" containerID="ad7fec7e51067a3bd2f8105b0d74e8cd9c7dc2fbd0e1f0340bac1b457406f883" exitCode=0 Jan 31 09:04:35 crc kubenswrapper[4830]: I0131 09:04:35.599801 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dcmsg" event={"ID":"a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33","Type":"ContainerDied","Data":"ad7fec7e51067a3bd2f8105b0d74e8cd9c7dc2fbd0e1f0340bac1b457406f883"} Jan 31 09:04:35 crc kubenswrapper[4830]: I0131 09:04:35.621442 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gs9bg" event={"ID":"ca8a4bb5-67d6-4e50-905f-95e0a15e376a","Type":"ContainerStarted","Data":"bac4f522a593309f95ac81e947b40e3374b6a486b33627c2867e7c855e45faad"} Jan 31 09:04:35 crc kubenswrapper[4830]: I0131 09:04:35.669254 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gs9bg" podStartSLOduration=2.633045115 podStartE2EDuration="58.669236721s" podCreationTimestamp="2026-01-31 09:03:37 +0000 UTC" firstStartedPulling="2026-01-31 09:03:38.948117312 +0000 UTC m=+163.441479754" lastFinishedPulling="2026-01-31 09:04:34.984308928 +0000 UTC m=+219.477671360" observedRunningTime="2026-01-31 09:04:35.66685028 +0000 UTC m=+220.160212722" watchObservedRunningTime="2026-01-31 09:04:35.669236721 +0000 UTC m=+220.162599163" Jan 31 09:04:36 crc kubenswrapper[4830]: I0131 09:04:36.147272 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-q8t9t" podUID="db7a137a-b7f9-4446-85f6-ea0d2f0caedd" containerName="registry-server" probeResult="failure" output=< Jan 31 09:04:36 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 09:04:36 crc kubenswrapper[4830]: > Jan 31 09:04:36 crc kubenswrapper[4830]: I0131 09:04:36.355840 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-bqmr6" podUID="ea666a92-d7aa-4e9b-8c54-88ad8ae517aa" containerName="registry-server" probeResult="failure" output=< Jan 31 09:04:36 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 09:04:36 crc kubenswrapper[4830]: > Jan 31 09:04:36 crc kubenswrapper[4830]: I0131 09:04:36.628490 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ssr8" event={"ID":"3e020928-b063-4d3c-8992-e712fe3d1b1d","Type":"ContainerStarted","Data":"91511445b3449579f3d14ae49e60cf52eb3d6565be2d36f6218bbcc6dfdff270"} Jan 31 09:04:36 crc kubenswrapper[4830]: I0131 09:04:36.630693 4830 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-dcmsg" event={"ID":"a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33","Type":"ContainerStarted","Data":"14658fad85e5b39dd41313efcb96ad42124c6e681a37d5d971a00256c014967f"} Jan 31 09:04:36 crc kubenswrapper[4830]: I0131 09:04:36.634065 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h79tz" event={"ID":"716e78b2-1856-45e6-a3fa-73538be51a97","Type":"ContainerStarted","Data":"ca55ffa53a3e29498cb5fa92d8d28ac1309cedbc256b7869890817b597121576"} Jan 31 09:04:36 crc kubenswrapper[4830]: I0131 09:04:36.650498 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2ssr8" podStartSLOduration=3.014296024 podStartE2EDuration="1m2.650481401s" podCreationTimestamp="2026-01-31 09:03:34 +0000 UTC" firstStartedPulling="2026-01-31 09:03:36.721392938 +0000 UTC m=+161.214755380" lastFinishedPulling="2026-01-31 09:04:36.357578315 +0000 UTC m=+220.850940757" observedRunningTime="2026-01-31 09:04:36.648049009 +0000 UTC m=+221.141411451" watchObservedRunningTime="2026-01-31 09:04:36.650481401 +0000 UTC m=+221.143843843" Jan 31 09:04:36 crc kubenswrapper[4830]: I0131 09:04:36.679886 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dcmsg" podStartSLOduration=3.154938688 podStartE2EDuration="1m2.679868196s" podCreationTimestamp="2026-01-31 09:03:34 +0000 UTC" firstStartedPulling="2026-01-31 09:03:36.741608027 +0000 UTC m=+161.234970469" lastFinishedPulling="2026-01-31 09:04:36.266537535 +0000 UTC m=+220.759899977" observedRunningTime="2026-01-31 09:04:36.677490835 +0000 UTC m=+221.170853277" watchObservedRunningTime="2026-01-31 09:04:36.679868196 +0000 UTC m=+221.173230638" Jan 31 09:04:36 crc kubenswrapper[4830]: I0131 09:04:36.877353 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sxn8r" Jan 31 09:04:36 crc kubenswrapper[4830]: I0131 09:04:36.877837 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sxn8r" Jan 31 09:04:36 crc kubenswrapper[4830]: I0131 09:04:36.933411 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sxn8r" Jan 31 09:04:37 crc kubenswrapper[4830]: I0131 09:04:37.172889 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qgnfw" Jan 31 09:04:37 crc kubenswrapper[4830]: I0131 09:04:37.172974 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qgnfw" Jan 31 09:04:37 crc kubenswrapper[4830]: I0131 09:04:37.219738 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qgnfw" Jan 31 09:04:37 crc kubenswrapper[4830]: I0131 09:04:37.642327 4830 generic.go:334] "Generic (PLEG): container finished" podID="716e78b2-1856-45e6-a3fa-73538be51a97" containerID="ca55ffa53a3e29498cb5fa92d8d28ac1309cedbc256b7869890817b597121576" exitCode=0 Jan 31 09:04:37 crc kubenswrapper[4830]: I0131 09:04:37.642419 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h79tz" event={"ID":"716e78b2-1856-45e6-a3fa-73538be51a97","Type":"ContainerDied","Data":"ca55ffa53a3e29498cb5fa92d8d28ac1309cedbc256b7869890817b597121576"} Jan 31 09:04:37 crc 
kubenswrapper[4830]: I0131 09:04:37.691524 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qgnfw" Jan 31 09:04:37 crc kubenswrapper[4830]: I0131 09:04:37.703281 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sxn8r" Jan 31 09:04:38 crc kubenswrapper[4830]: I0131 09:04:38.098622 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gs9bg" Jan 31 09:04:38 crc kubenswrapper[4830]: I0131 09:04:38.098688 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gs9bg" Jan 31 09:04:39 crc kubenswrapper[4830]: I0131 09:04:39.141010 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gs9bg" podUID="ca8a4bb5-67d6-4e50-905f-95e0a15e376a" containerName="registry-server" probeResult="failure" output=< Jan 31 09:04:39 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 09:04:39 crc kubenswrapper[4830]: > Jan 31 09:04:39 crc kubenswrapper[4830]: I0131 09:04:39.890589 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qgnfw"] Jan 31 09:04:39 crc kubenswrapper[4830]: I0131 09:04:39.891056 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qgnfw" podUID="3477b1ed-ccf0-4f60-9505-ff0e417750af" containerName="registry-server" containerID="cri-o://0f46baf323f9dce50b0b183977967ceda8e845ec58055d9b4e9cbe29b23f2eea" gracePeriod=2 Jan 31 09:04:40 crc kubenswrapper[4830]: I0131 09:04:40.664600 4830 generic.go:334] "Generic (PLEG): container finished" podID="3477b1ed-ccf0-4f60-9505-ff0e417750af" containerID="0f46baf323f9dce50b0b183977967ceda8e845ec58055d9b4e9cbe29b23f2eea" exitCode=0 Jan 31 09:04:40 crc kubenswrapper[4830]: I0131 09:04:40.664680 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgnfw" event={"ID":"3477b1ed-ccf0-4f60-9505-ff0e417750af","Type":"ContainerDied","Data":"0f46baf323f9dce50b0b183977967ceda8e845ec58055d9b4e9cbe29b23f2eea"} Jan 31 09:04:41 crc kubenswrapper[4830]: I0131 09:04:41.192852 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qgnfw" Jan 31 09:04:41 crc kubenswrapper[4830]: I0131 09:04:41.388157 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3477b1ed-ccf0-4f60-9505-ff0e417750af-catalog-content\") pod \"3477b1ed-ccf0-4f60-9505-ff0e417750af\" (UID: \"3477b1ed-ccf0-4f60-9505-ff0e417750af\") " Jan 31 09:04:41 crc kubenswrapper[4830]: I0131 09:04:41.388792 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3477b1ed-ccf0-4f60-9505-ff0e417750af-utilities\") pod \"3477b1ed-ccf0-4f60-9505-ff0e417750af\" (UID: \"3477b1ed-ccf0-4f60-9505-ff0e417750af\") " Jan 31 09:04:41 crc kubenswrapper[4830]: I0131 09:04:41.388979 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lqw5\" (UniqueName: \"kubernetes.io/projected/3477b1ed-ccf0-4f60-9505-ff0e417750af-kube-api-access-8lqw5\") pod \"3477b1ed-ccf0-4f60-9505-ff0e417750af\" (UID: \"3477b1ed-ccf0-4f60-9505-ff0e417750af\") " Jan 31 09:04:41 crc kubenswrapper[4830]: I0131 09:04:41.391180 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3477b1ed-ccf0-4f60-9505-ff0e417750af-utilities" (OuterVolumeSpecName: "utilities") pod "3477b1ed-ccf0-4f60-9505-ff0e417750af" (UID: "3477b1ed-ccf0-4f60-9505-ff0e417750af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:04:41 crc kubenswrapper[4830]: I0131 09:04:41.396480 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3477b1ed-ccf0-4f60-9505-ff0e417750af-kube-api-access-8lqw5" (OuterVolumeSpecName: "kube-api-access-8lqw5") pod "3477b1ed-ccf0-4f60-9505-ff0e417750af" (UID: "3477b1ed-ccf0-4f60-9505-ff0e417750af"). InnerVolumeSpecName "kube-api-access-8lqw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:04:41 crc kubenswrapper[4830]: I0131 09:04:41.420882 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3477b1ed-ccf0-4f60-9505-ff0e417750af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3477b1ed-ccf0-4f60-9505-ff0e417750af" (UID: "3477b1ed-ccf0-4f60-9505-ff0e417750af"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:04:41 crc kubenswrapper[4830]: I0131 09:04:41.490649 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3477b1ed-ccf0-4f60-9505-ff0e417750af-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:41 crc kubenswrapper[4830]: I0131 09:04:41.490713 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lqw5\" (UniqueName: \"kubernetes.io/projected/3477b1ed-ccf0-4f60-9505-ff0e417750af-kube-api-access-8lqw5\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:41 crc kubenswrapper[4830]: I0131 09:04:41.490751 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3477b1ed-ccf0-4f60-9505-ff0e417750af-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:41 crc kubenswrapper[4830]: I0131 09:04:41.677152 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qgnfw" Jan 31 09:04:41 crc kubenswrapper[4830]: I0131 09:04:41.677158 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgnfw" event={"ID":"3477b1ed-ccf0-4f60-9505-ff0e417750af","Type":"ContainerDied","Data":"ec0891a1c75b8364d05386edd7e0cfd7348e9f5e32abd9db811037f9fc973bf0"} Jan 31 09:04:41 crc kubenswrapper[4830]: I0131 09:04:41.678027 4830 scope.go:117] "RemoveContainer" containerID="0f46baf323f9dce50b0b183977967ceda8e845ec58055d9b4e9cbe29b23f2eea" Jan 31 09:04:41 crc kubenswrapper[4830]: I0131 09:04:41.702051 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h79tz" event={"ID":"716e78b2-1856-45e6-a3fa-73538be51a97","Type":"ContainerStarted","Data":"70171760d19bc95fe1119723b430ea2e6d64c2567c470e0b4fc7a32a96cd75ca"} Jan 31 09:04:41 crc kubenswrapper[4830]: I0131 09:04:41.719458 4830 scope.go:117] "RemoveContainer" containerID="e99b334a166cdca121c4721c09952221d95a22a8eb4841d936a0e1ec1aecb5fb" Jan 31 09:04:41 crc kubenswrapper[4830]: I0131 09:04:41.722921 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qgnfw"] Jan 31 09:04:41 crc kubenswrapper[4830]: I0131 09:04:41.729166 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qgnfw"] Jan 31 09:04:41 crc kubenswrapper[4830]: I0131 09:04:41.738436 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h79tz" podStartSLOduration=4.196557809 podStartE2EDuration="1m3.738418253s" podCreationTimestamp="2026-01-31 09:03:38 +0000 UTC" firstStartedPulling="2026-01-31 09:03:41.080391993 +0000 UTC m=+165.573754435" lastFinishedPulling="2026-01-31 09:04:40.622252437 +0000 UTC m=+225.115614879" observedRunningTime="2026-01-31 09:04:41.737929989 +0000 UTC m=+226.231292441" watchObservedRunningTime="2026-01-31 09:04:41.738418253 +0000 UTC m=+226.231780705" Jan 31 09:04:41 crc kubenswrapper[4830]: I0131 09:04:41.752785 4830 scope.go:117] "RemoveContainer" containerID="87b619b5a59514b1c47359f60c98d55167cbb85978c7e83f62465fd000b9daed" Jan 31 09:04:42 crc kubenswrapper[4830]: I0131 09:04:42.263976 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3477b1ed-ccf0-4f60-9505-ff0e417750af" path="/var/lib/kubelet/pods/3477b1ed-ccf0-4f60-9505-ff0e417750af/volumes" Jan 31 09:04:44 crc kubenswrapper[4830]: I0131 09:04:44.352871 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:04:44 crc kubenswrapper[4830]: I0131 09:04:44.352965 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:04:44 crc kubenswrapper[4830]: I0131 09:04:44.353043 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 09:04:44 crc kubenswrapper[4830]: I0131 09:04:44.353886 4830 kuberuntime_manager.go:1027] "Message for Container 
of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c"} pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 09:04:44 crc kubenswrapper[4830]: I0131 09:04:44.354015 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" containerID="cri-o://7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c" gracePeriod=600 Jan 31 09:04:44 crc kubenswrapper[4830]: I0131 09:04:44.726378 4830 generic.go:334] "Generic (PLEG): container finished" podID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerID="7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c" exitCode=0 Jan 31 09:04:44 crc kubenswrapper[4830]: I0131 09:04:44.726455 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerDied","Data":"7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c"} Jan 31 09:04:44 crc kubenswrapper[4830]: I0131 09:04:44.726823 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerStarted","Data":"daea99fc983195352b8e4718b50bf7bbcdbf16fe4b6ceb22c6175dbbdd6d0099"} Jan 31 09:04:44 crc kubenswrapper[4830]: I0131 09:04:44.885023 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2ssr8" Jan 31 09:04:44 crc kubenswrapper[4830]: I0131 09:04:44.885265 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2ssr8" Jan 31 09:04:44 crc kubenswrapper[4830]: I0131 09:04:44.938306 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2ssr8" Jan 31 09:04:45 crc kubenswrapper[4830]: I0131 09:04:45.050396 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q8t9t" Jan 31 09:04:45 crc kubenswrapper[4830]: I0131 09:04:45.098384 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q8t9t" Jan 31 09:04:45 crc kubenswrapper[4830]: I0131 09:04:45.109381 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dcmsg" Jan 31 09:04:45 crc kubenswrapper[4830]: I0131 09:04:45.109444 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dcmsg" Jan 31 09:04:45 crc kubenswrapper[4830]: I0131 09:04:45.163858 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dcmsg" Jan 31 09:04:45 crc kubenswrapper[4830]: I0131 09:04:45.361533 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bqmr6" Jan 31 09:04:45 crc kubenswrapper[4830]: I0131 09:04:45.409602 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bqmr6" Jan 31 09:04:45 
crc kubenswrapper[4830]: I0131 09:04:45.780820 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dcmsg" Jan 31 09:04:45 crc kubenswrapper[4830]: I0131 09:04:45.789950 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2ssr8" Jan 31 09:04:46 crc kubenswrapper[4830]: I0131 09:04:46.282016 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bqmr6"] Jan 31 09:04:46 crc kubenswrapper[4830]: I0131 09:04:46.742271 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bqmr6" podUID="ea666a92-d7aa-4e9b-8c54-88ad8ae517aa" containerName="registry-server" containerID="cri-o://b08418445c5d59d02fe608ce449d876ebb7b3e993ea892e122d3a167048a5ecb" gracePeriod=2 Jan 31 09:04:47 crc kubenswrapper[4830]: I0131 09:04:47.284394 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dcmsg"] Jan 31 09:04:47 crc kubenswrapper[4830]: I0131 09:04:47.593891 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bqmr6" Jan 31 09:04:47 crc kubenswrapper[4830]: I0131 09:04:47.702192 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea666a92-d7aa-4e9b-8c54-88ad8ae517aa-catalog-content\") pod \"ea666a92-d7aa-4e9b-8c54-88ad8ae517aa\" (UID: \"ea666a92-d7aa-4e9b-8c54-88ad8ae517aa\") " Jan 31 09:04:47 crc kubenswrapper[4830]: I0131 09:04:47.702256 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea666a92-d7aa-4e9b-8c54-88ad8ae517aa-utilities\") pod \"ea666a92-d7aa-4e9b-8c54-88ad8ae517aa\" (UID: \"ea666a92-d7aa-4e9b-8c54-88ad8ae517aa\") " Jan 31 09:04:47 crc kubenswrapper[4830]: I0131 09:04:47.702293 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7tpr\" (UniqueName: \"kubernetes.io/projected/ea666a92-d7aa-4e9b-8c54-88ad8ae517aa-kube-api-access-m7tpr\") pod \"ea666a92-d7aa-4e9b-8c54-88ad8ae517aa\" (UID: \"ea666a92-d7aa-4e9b-8c54-88ad8ae517aa\") " Jan 31 09:04:47 crc kubenswrapper[4830]: I0131 09:04:47.703343 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea666a92-d7aa-4e9b-8c54-88ad8ae517aa-utilities" (OuterVolumeSpecName: "utilities") pod "ea666a92-d7aa-4e9b-8c54-88ad8ae517aa" (UID: "ea666a92-d7aa-4e9b-8c54-88ad8ae517aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:04:47 crc kubenswrapper[4830]: I0131 09:04:47.711048 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea666a92-d7aa-4e9b-8c54-88ad8ae517aa-kube-api-access-m7tpr" (OuterVolumeSpecName: "kube-api-access-m7tpr") pod "ea666a92-d7aa-4e9b-8c54-88ad8ae517aa" (UID: "ea666a92-d7aa-4e9b-8c54-88ad8ae517aa"). InnerVolumeSpecName "kube-api-access-m7tpr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:04:47 crc kubenswrapper[4830]: I0131 09:04:47.754649 4830 generic.go:334] "Generic (PLEG): container finished" podID="ea666a92-d7aa-4e9b-8c54-88ad8ae517aa" containerID="b08418445c5d59d02fe608ce449d876ebb7b3e993ea892e122d3a167048a5ecb" exitCode=0 Jan 31 09:04:47 crc kubenswrapper[4830]: I0131 09:04:47.754808 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bqmr6" Jan 31 09:04:47 crc kubenswrapper[4830]: I0131 09:04:47.754824 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bqmr6" event={"ID":"ea666a92-d7aa-4e9b-8c54-88ad8ae517aa","Type":"ContainerDied","Data":"b08418445c5d59d02fe608ce449d876ebb7b3e993ea892e122d3a167048a5ecb"} Jan 31 09:04:47 crc kubenswrapper[4830]: I0131 09:04:47.754923 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bqmr6" event={"ID":"ea666a92-d7aa-4e9b-8c54-88ad8ae517aa","Type":"ContainerDied","Data":"7c980da832516b94ea92d1a2e598952b5112b8f3ebaa11375103b5cbe79592ba"} Jan 31 09:04:47 crc kubenswrapper[4830]: I0131 09:04:47.754953 4830 scope.go:117] "RemoveContainer" containerID="b08418445c5d59d02fe608ce449d876ebb7b3e993ea892e122d3a167048a5ecb" Jan 31 09:04:47 crc kubenswrapper[4830]: I0131 09:04:47.754988 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dcmsg" podUID="a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33" containerName="registry-server" containerID="cri-o://14658fad85e5b39dd41313efcb96ad42124c6e681a37d5d971a00256c014967f" gracePeriod=2 Jan 31 09:04:47 crc kubenswrapper[4830]: I0131 09:04:47.771719 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea666a92-d7aa-4e9b-8c54-88ad8ae517aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea666a92-d7aa-4e9b-8c54-88ad8ae517aa" (UID: "ea666a92-d7aa-4e9b-8c54-88ad8ae517aa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:04:47 crc kubenswrapper[4830]: I0131 09:04:47.789481 4830 scope.go:117] "RemoveContainer" containerID="b1b905da8423eb8a3a3e472636dd6a55d709b18193b3b518df5fc2d1ac6e474b" Jan 31 09:04:47 crc kubenswrapper[4830]: I0131 09:04:47.803564 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea666a92-d7aa-4e9b-8c54-88ad8ae517aa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:47 crc kubenswrapper[4830]: I0131 09:04:47.803651 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea666a92-d7aa-4e9b-8c54-88ad8ae517aa-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:47 crc kubenswrapper[4830]: I0131 09:04:47.803671 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7tpr\" (UniqueName: \"kubernetes.io/projected/ea666a92-d7aa-4e9b-8c54-88ad8ae517aa-kube-api-access-m7tpr\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:47 crc kubenswrapper[4830]: I0131 09:04:47.810991 4830 scope.go:117] "RemoveContainer" containerID="e1c344810985a78a6f2463f53077155a5f741fa2978808c0cb856ec7bb4cd54c" Jan 31 09:04:47 crc kubenswrapper[4830]: I0131 09:04:47.891485 4830 scope.go:117] "RemoveContainer" containerID="b08418445c5d59d02fe608ce449d876ebb7b3e993ea892e122d3a167048a5ecb" Jan 31 09:04:47 crc kubenswrapper[4830]: E0131 09:04:47.891998 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b08418445c5d59d02fe608ce449d876ebb7b3e993ea892e122d3a167048a5ecb\": container with ID starting with b08418445c5d59d02fe608ce449d876ebb7b3e993ea892e122d3a167048a5ecb not found: ID does not exist" containerID="b08418445c5d59d02fe608ce449d876ebb7b3e993ea892e122d3a167048a5ecb" Jan 31 09:04:47 crc kubenswrapper[4830]: I0131 09:04:47.892054 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b08418445c5d59d02fe608ce449d876ebb7b3e993ea892e122d3a167048a5ecb"} err="failed to get container status \"b08418445c5d59d02fe608ce449d876ebb7b3e993ea892e122d3a167048a5ecb\": rpc error: code = NotFound desc = could not find container \"b08418445c5d59d02fe608ce449d876ebb7b3e993ea892e122d3a167048a5ecb\": container with ID starting with b08418445c5d59d02fe608ce449d876ebb7b3e993ea892e122d3a167048a5ecb not found: ID does not exist" Jan 31 09:04:47 crc kubenswrapper[4830]: I0131 09:04:47.892093 4830 scope.go:117] "RemoveContainer" containerID="b1b905da8423eb8a3a3e472636dd6a55d709b18193b3b518df5fc2d1ac6e474b" Jan 31 09:04:47 crc kubenswrapper[4830]: E0131 09:04:47.892594 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1b905da8423eb8a3a3e472636dd6a55d709b18193b3b518df5fc2d1ac6e474b\": container with ID starting with b1b905da8423eb8a3a3e472636dd6a55d709b18193b3b518df5fc2d1ac6e474b not found: ID does not exist" containerID="b1b905da8423eb8a3a3e472636dd6a55d709b18193b3b518df5fc2d1ac6e474b" Jan 31 09:04:47 crc kubenswrapper[4830]: I0131 09:04:47.892629 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1b905da8423eb8a3a3e472636dd6a55d709b18193b3b518df5fc2d1ac6e474b"} err="failed to get container status \"b1b905da8423eb8a3a3e472636dd6a55d709b18193b3b518df5fc2d1ac6e474b\": rpc error: code = NotFound desc = could not find container 
\"b1b905da8423eb8a3a3e472636dd6a55d709b18193b3b518df5fc2d1ac6e474b\": container with ID starting with b1b905da8423eb8a3a3e472636dd6a55d709b18193b3b518df5fc2d1ac6e474b not found: ID does not exist" Jan 31 09:04:47 crc kubenswrapper[4830]: I0131 09:04:47.892651 4830 scope.go:117] "RemoveContainer" containerID="e1c344810985a78a6f2463f53077155a5f741fa2978808c0cb856ec7bb4cd54c" Jan 31 09:04:47 crc kubenswrapper[4830]: E0131 09:04:47.893049 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1c344810985a78a6f2463f53077155a5f741fa2978808c0cb856ec7bb4cd54c\": container with ID starting with e1c344810985a78a6f2463f53077155a5f741fa2978808c0cb856ec7bb4cd54c not found: ID does not exist" containerID="e1c344810985a78a6f2463f53077155a5f741fa2978808c0cb856ec7bb4cd54c" Jan 31 09:04:47 crc kubenswrapper[4830]: I0131 09:04:47.893076 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1c344810985a78a6f2463f53077155a5f741fa2978808c0cb856ec7bb4cd54c"} err="failed to get container status \"e1c344810985a78a6f2463f53077155a5f741fa2978808c0cb856ec7bb4cd54c\": rpc error: code = NotFound desc = could not find container \"e1c344810985a78a6f2463f53077155a5f741fa2978808c0cb856ec7bb4cd54c\": container with ID starting with e1c344810985a78a6f2463f53077155a5f741fa2978808c0cb856ec7bb4cd54c not found: ID does not exist" Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.092537 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bqmr6"] Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.095242 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bqmr6"] Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.148197 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gs9bg" Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.201100 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gs9bg" Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.254592 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dcmsg" Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.259359 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea666a92-d7aa-4e9b-8c54-88ad8ae517aa" path="/var/lib/kubelet/pods/ea666a92-d7aa-4e9b-8c54-88ad8ae517aa/volumes" Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.311423 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33-catalog-content\") pod \"a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33\" (UID: \"a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33\") " Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.311540 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8s8x\" (UniqueName: \"kubernetes.io/projected/a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33-kube-api-access-j8s8x\") pod \"a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33\" (UID: \"a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33\") " Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.311627 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33-utilities\") pod \"a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33\" (UID: \"a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33\") " Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.315745 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33-utilities" (OuterVolumeSpecName: "utilities") pod "a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33" (UID: "a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.320887 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33-kube-api-access-j8s8x" (OuterVolumeSpecName: "kube-api-access-j8s8x") pod "a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33" (UID: "a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33"). InnerVolumeSpecName "kube-api-access-j8s8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.384314 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33" (UID: "a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.413348 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.413420 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8s8x\" (UniqueName: \"kubernetes.io/projected/a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33-kube-api-access-j8s8x\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.413432 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.537262 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h79tz" Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.537323 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h79tz" Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.588499 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h79tz" Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.766104 4830 generic.go:334] "Generic (PLEG): container finished" podID="a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33" containerID="14658fad85e5b39dd41313efcb96ad42124c6e681a37d5d971a00256c014967f" exitCode=0 Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.766197 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dcmsg" Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.766180 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dcmsg" event={"ID":"a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33","Type":"ContainerDied","Data":"14658fad85e5b39dd41313efcb96ad42124c6e681a37d5d971a00256c014967f"} Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.766260 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dcmsg" event={"ID":"a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33","Type":"ContainerDied","Data":"03b507624453911af4ea42a682692fe3a6f9d62abbbfc9481bb1c38237367306"} Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.766286 4830 scope.go:117] "RemoveContainer" containerID="14658fad85e5b39dd41313efcb96ad42124c6e681a37d5d971a00256c014967f" Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.803658 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dcmsg"] Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.804751 4830 scope.go:117] "RemoveContainer" containerID="ad7fec7e51067a3bd2f8105b0d74e8cd9c7dc2fbd0e1f0340bac1b457406f883" Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.810052 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dcmsg"] Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.822030 4830 scope.go:117] "RemoveContainer" containerID="0c2a8b7249d284d0603796fd9cb83b9256f84058c98bacc657c84b6ea6f3eb8d" Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.826484 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h79tz" Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.846146 4830 scope.go:117] "RemoveContainer" containerID="14658fad85e5b39dd41313efcb96ad42124c6e681a37d5d971a00256c014967f" Jan 31 09:04:48 crc kubenswrapper[4830]: E0131 09:04:48.846884 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14658fad85e5b39dd41313efcb96ad42124c6e681a37d5d971a00256c014967f\": container with ID starting with 14658fad85e5b39dd41313efcb96ad42124c6e681a37d5d971a00256c014967f not found: ID does not exist" containerID="14658fad85e5b39dd41313efcb96ad42124c6e681a37d5d971a00256c014967f" Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.846952 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14658fad85e5b39dd41313efcb96ad42124c6e681a37d5d971a00256c014967f"} err="failed to get container status \"14658fad85e5b39dd41313efcb96ad42124c6e681a37d5d971a00256c014967f\": rpc error: code = NotFound desc = could not find container \"14658fad85e5b39dd41313efcb96ad42124c6e681a37d5d971a00256c014967f\": container with ID starting with 14658fad85e5b39dd41313efcb96ad42124c6e681a37d5d971a00256c014967f not found: ID does not exist" Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.846997 4830 scope.go:117] "RemoveContainer" containerID="ad7fec7e51067a3bd2f8105b0d74e8cd9c7dc2fbd0e1f0340bac1b457406f883" Jan 31 09:04:48 crc kubenswrapper[4830]: E0131 09:04:48.847394 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad7fec7e51067a3bd2f8105b0d74e8cd9c7dc2fbd0e1f0340bac1b457406f883\": container with ID starting with 
ad7fec7e51067a3bd2f8105b0d74e8cd9c7dc2fbd0e1f0340bac1b457406f883 not found: ID does not exist" containerID="ad7fec7e51067a3bd2f8105b0d74e8cd9c7dc2fbd0e1f0340bac1b457406f883" Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.847434 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad7fec7e51067a3bd2f8105b0d74e8cd9c7dc2fbd0e1f0340bac1b457406f883"} err="failed to get container status \"ad7fec7e51067a3bd2f8105b0d74e8cd9c7dc2fbd0e1f0340bac1b457406f883\": rpc error: code = NotFound desc = could not find container \"ad7fec7e51067a3bd2f8105b0d74e8cd9c7dc2fbd0e1f0340bac1b457406f883\": container with ID starting with ad7fec7e51067a3bd2f8105b0d74e8cd9c7dc2fbd0e1f0340bac1b457406f883 not found: ID does not exist" Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.847466 4830 scope.go:117] "RemoveContainer" containerID="0c2a8b7249d284d0603796fd9cb83b9256f84058c98bacc657c84b6ea6f3eb8d" Jan 31 09:04:48 crc kubenswrapper[4830]: E0131 09:04:48.847865 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c2a8b7249d284d0603796fd9cb83b9256f84058c98bacc657c84b6ea6f3eb8d\": container with ID starting with 0c2a8b7249d284d0603796fd9cb83b9256f84058c98bacc657c84b6ea6f3eb8d not found: ID does not exist" containerID="0c2a8b7249d284d0603796fd9cb83b9256f84058c98bacc657c84b6ea6f3eb8d" Jan 31 09:04:48 crc kubenswrapper[4830]: I0131 09:04:48.847933 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c2a8b7249d284d0603796fd9cb83b9256f84058c98bacc657c84b6ea6f3eb8d"} err="failed to get container status \"0c2a8b7249d284d0603796fd9cb83b9256f84058c98bacc657c84b6ea6f3eb8d\": rpc error: code = NotFound desc = could not find container \"0c2a8b7249d284d0603796fd9cb83b9256f84058c98bacc657c84b6ea6f3eb8d\": container with ID starting with 0c2a8b7249d284d0603796fd9cb83b9256f84058c98bacc657c84b6ea6f3eb8d not found: ID does not exist" Jan 31 09:04:50 crc kubenswrapper[4830]: I0131 09:04:50.258430 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33" path="/var/lib/kubelet/pods/a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33/volumes" Jan 31 09:04:50 crc kubenswrapper[4830]: I0131 09:04:50.682004 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h79tz"] Jan 31 09:04:50 crc kubenswrapper[4830]: I0131 09:04:50.781340 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h79tz" podUID="716e78b2-1856-45e6-a3fa-73538be51a97" containerName="registry-server" containerID="cri-o://70171760d19bc95fe1119723b430ea2e6d64c2567c470e0b4fc7a32a96cd75ca" gracePeriod=2 Jan 31 09:04:50 crc kubenswrapper[4830]: I0131 09:04:50.849842 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76d6f96965-8fjhp"] Jan 31 09:04:50 crc kubenswrapper[4830]: I0131 09:04:50.850114 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" podUID="38a174d9-22cd-4e22-b157-499b9df4e292" containerName="controller-manager" containerID="cri-o://b86cf5e18f70fb8b1beeb189caff6edda4380b839fa823e063dd7c42a9230881" gracePeriod=30 Jan 31 09:04:50 crc kubenswrapper[4830]: I0131 09:04:50.953648 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m"] Jan 31 09:04:50 crc kubenswrapper[4830]: I0131 09:04:50.953960 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m" podUID="6e0a1032-01f8-476b-b725-b43704ec47df" containerName="route-controller-manager" containerID="cri-o://5506e8040cc3bfa8ac6090d25dc10ca91b43d9d322036b1d7b95dcfdcc51ddd7" gracePeriod=30 Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.402957 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h79tz" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.458761 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn6ll\" (UniqueName: \"kubernetes.io/projected/716e78b2-1856-45e6-a3fa-73538be51a97-kube-api-access-xn6ll\") pod \"716e78b2-1856-45e6-a3fa-73538be51a97\" (UID: \"716e78b2-1856-45e6-a3fa-73538be51a97\") " Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.459021 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/716e78b2-1856-45e6-a3fa-73538be51a97-utilities\") pod \"716e78b2-1856-45e6-a3fa-73538be51a97\" (UID: \"716e78b2-1856-45e6-a3fa-73538be51a97\") " Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.459072 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/716e78b2-1856-45e6-a3fa-73538be51a97-catalog-content\") pod \"716e78b2-1856-45e6-a3fa-73538be51a97\" (UID: \"716e78b2-1856-45e6-a3fa-73538be51a97\") " Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.460527 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/716e78b2-1856-45e6-a3fa-73538be51a97-utilities" (OuterVolumeSpecName: "utilities") pod "716e78b2-1856-45e6-a3fa-73538be51a97" (UID: "716e78b2-1856-45e6-a3fa-73538be51a97"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.467855 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/716e78b2-1856-45e6-a3fa-73538be51a97-kube-api-access-xn6ll" (OuterVolumeSpecName: "kube-api-access-xn6ll") pod "716e78b2-1856-45e6-a3fa-73538be51a97" (UID: "716e78b2-1856-45e6-a3fa-73538be51a97"). InnerVolumeSpecName "kube-api-access-xn6ll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.505139 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.560422 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e0a1032-01f8-476b-b725-b43704ec47df-serving-cert\") pod \"6e0a1032-01f8-476b-b725-b43704ec47df\" (UID: \"6e0a1032-01f8-476b-b725-b43704ec47df\") " Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.560620 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e0a1032-01f8-476b-b725-b43704ec47df-client-ca\") pod \"6e0a1032-01f8-476b-b725-b43704ec47df\" (UID: \"6e0a1032-01f8-476b-b725-b43704ec47df\") " Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.560656 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws5gw\" (UniqueName: \"kubernetes.io/projected/6e0a1032-01f8-476b-b725-b43704ec47df-kube-api-access-ws5gw\") pod \"6e0a1032-01f8-476b-b725-b43704ec47df\" (UID: \"6e0a1032-01f8-476b-b725-b43704ec47df\") " Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.560992 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e0a1032-01f8-476b-b725-b43704ec47df-config\") pod \"6e0a1032-01f8-476b-b725-b43704ec47df\" (UID: \"6e0a1032-01f8-476b-b725-b43704ec47df\") " Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.561300 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/716e78b2-1856-45e6-a3fa-73538be51a97-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.561361 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn6ll\" (UniqueName: \"kubernetes.io/projected/716e78b2-1856-45e6-a3fa-73538be51a97-kube-api-access-xn6ll\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.562190 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e0a1032-01f8-476b-b725-b43704ec47df-config" (OuterVolumeSpecName: "config") pod "6e0a1032-01f8-476b-b725-b43704ec47df" (UID: "6e0a1032-01f8-476b-b725-b43704ec47df"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.562181 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e0a1032-01f8-476b-b725-b43704ec47df-client-ca" (OuterVolumeSpecName: "client-ca") pod "6e0a1032-01f8-476b-b725-b43704ec47df" (UID: "6e0a1032-01f8-476b-b725-b43704ec47df"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.564238 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e0a1032-01f8-476b-b725-b43704ec47df-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6e0a1032-01f8-476b-b725-b43704ec47df" (UID: "6e0a1032-01f8-476b-b725-b43704ec47df"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.564408 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e0a1032-01f8-476b-b725-b43704ec47df-kube-api-access-ws5gw" (OuterVolumeSpecName: "kube-api-access-ws5gw") pod "6e0a1032-01f8-476b-b725-b43704ec47df" (UID: "6e0a1032-01f8-476b-b725-b43704ec47df"). InnerVolumeSpecName "kube-api-access-ws5gw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.599373 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.608817 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/716e78b2-1856-45e6-a3fa-73538be51a97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "716e78b2-1856-45e6-a3fa-73538be51a97" (UID: "716e78b2-1856-45e6-a3fa-73538be51a97"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.662673 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhfcd\" (UniqueName: \"kubernetes.io/projected/38a174d9-22cd-4e22-b157-499b9df4e292-kube-api-access-hhfcd\") pod \"38a174d9-22cd-4e22-b157-499b9df4e292\" (UID: \"38a174d9-22cd-4e22-b157-499b9df4e292\") " Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.662746 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38a174d9-22cd-4e22-b157-499b9df4e292-proxy-ca-bundles\") pod \"38a174d9-22cd-4e22-b157-499b9df4e292\" (UID: \"38a174d9-22cd-4e22-b157-499b9df4e292\") " Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.662775 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38a174d9-22cd-4e22-b157-499b9df4e292-config\") pod \"38a174d9-22cd-4e22-b157-499b9df4e292\" (UID: \"38a174d9-22cd-4e22-b157-499b9df4e292\") " Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.662847 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38a174d9-22cd-4e22-b157-499b9df4e292-serving-cert\") pod \"38a174d9-22cd-4e22-b157-499b9df4e292\" (UID: \"38a174d9-22cd-4e22-b157-499b9df4e292\") " Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.662870 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38a174d9-22cd-4e22-b157-499b9df4e292-client-ca\") pod \"38a174d9-22cd-4e22-b157-499b9df4e292\" (UID: \"38a174d9-22cd-4e22-b157-499b9df4e292\") " Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.663135 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e0a1032-01f8-476b-b725-b43704ec47df-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.663151 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws5gw\" (UniqueName: \"kubernetes.io/projected/6e0a1032-01f8-476b-b725-b43704ec47df-kube-api-access-ws5gw\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.663163 4830 
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/716e78b2-1856-45e6-a3fa-73538be51a97-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.663172 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e0a1032-01f8-476b-b725-b43704ec47df-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.663181 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e0a1032-01f8-476b-b725-b43704ec47df-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.663821 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38a174d9-22cd-4e22-b157-499b9df4e292-client-ca" (OuterVolumeSpecName: "client-ca") pod "38a174d9-22cd-4e22-b157-499b9df4e292" (UID: "38a174d9-22cd-4e22-b157-499b9df4e292"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.664156 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38a174d9-22cd-4e22-b157-499b9df4e292-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "38a174d9-22cd-4e22-b157-499b9df4e292" (UID: "38a174d9-22cd-4e22-b157-499b9df4e292"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.664228 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38a174d9-22cd-4e22-b157-499b9df4e292-config" (OuterVolumeSpecName: "config") pod "38a174d9-22cd-4e22-b157-499b9df4e292" (UID: "38a174d9-22cd-4e22-b157-499b9df4e292"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.666120 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a174d9-22cd-4e22-b157-499b9df4e292-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "38a174d9-22cd-4e22-b157-499b9df4e292" (UID: "38a174d9-22cd-4e22-b157-499b9df4e292"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.666240 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38a174d9-22cd-4e22-b157-499b9df4e292-kube-api-access-hhfcd" (OuterVolumeSpecName: "kube-api-access-hhfcd") pod "38a174d9-22cd-4e22-b157-499b9df4e292" (UID: "38a174d9-22cd-4e22-b157-499b9df4e292"). InnerVolumeSpecName "kube-api-access-hhfcd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.765011 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhfcd\" (UniqueName: \"kubernetes.io/projected/38a174d9-22cd-4e22-b157-499b9df4e292-kube-api-access-hhfcd\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.765087 4830 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38a174d9-22cd-4e22-b157-499b9df4e292-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.765108 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38a174d9-22cd-4e22-b157-499b9df4e292-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.765121 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38a174d9-22cd-4e22-b157-499b9df4e292-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.765135 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38a174d9-22cd-4e22-b157-499b9df4e292-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.790934 4830 generic.go:334] "Generic (PLEG): container finished" podID="716e78b2-1856-45e6-a3fa-73538be51a97" containerID="70171760d19bc95fe1119723b430ea2e6d64c2567c470e0b4fc7a32a96cd75ca" exitCode=0 Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.791029 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h79tz" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.791057 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h79tz" event={"ID":"716e78b2-1856-45e6-a3fa-73538be51a97","Type":"ContainerDied","Data":"70171760d19bc95fe1119723b430ea2e6d64c2567c470e0b4fc7a32a96cd75ca"} Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.791107 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h79tz" event={"ID":"716e78b2-1856-45e6-a3fa-73538be51a97","Type":"ContainerDied","Data":"fecb01cc01339eb2dd352a556c8d4f7aa672fd20154a957eb74a6077a3ec4bdd"} Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.791131 4830 scope.go:117] "RemoveContainer" containerID="70171760d19bc95fe1119723b430ea2e6d64c2567c470e0b4fc7a32a96cd75ca" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.792691 4830 generic.go:334] "Generic (PLEG): container finished" podID="38a174d9-22cd-4e22-b157-499b9df4e292" containerID="b86cf5e18f70fb8b1beeb189caff6edda4380b839fa823e063dd7c42a9230881" exitCode=0 Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.792855 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.793131 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" event={"ID":"38a174d9-22cd-4e22-b157-499b9df4e292","Type":"ContainerDied","Data":"b86cf5e18f70fb8b1beeb189caff6edda4380b839fa823e063dd7c42a9230881"} Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.793234 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76d6f96965-8fjhp" event={"ID":"38a174d9-22cd-4e22-b157-499b9df4e292","Type":"ContainerDied","Data":"78b5d86a6ebc55c0d5cda5d77a1f680a78970876d1a15d1c4c1377bbada48ebe"} Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.796169 4830 generic.go:334] "Generic (PLEG): container finished" podID="6e0a1032-01f8-476b-b725-b43704ec47df" containerID="5506e8040cc3bfa8ac6090d25dc10ca91b43d9d322036b1d7b95dcfdcc51ddd7" exitCode=0 Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.796212 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m" event={"ID":"6e0a1032-01f8-476b-b725-b43704ec47df","Type":"ContainerDied","Data":"5506e8040cc3bfa8ac6090d25dc10ca91b43d9d322036b1d7b95dcfdcc51ddd7"} Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.796242 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m" event={"ID":"6e0a1032-01f8-476b-b725-b43704ec47df","Type":"ContainerDied","Data":"4699ff81713f10e7a07d7662957ae156b7bf4c620bea69c41c20dcb0051c4649"} Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.796326 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.829645 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h79tz"] Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.833377 4830 scope.go:117] "RemoveContainer" containerID="ca55ffa53a3e29498cb5fa92d8d28ac1309cedbc256b7869890817b597121576" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.833957 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h79tz"] Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.846772 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76d6f96965-8fjhp"] Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.860563 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-76d6f96965-8fjhp"] Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.862224 4830 scope.go:117] "RemoveContainer" containerID="90dad789e9ad829c314d34749f00d201a341bf01fe0232fbfa2e2ec5b40f6917" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.866384 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m"] Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.871714 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-794778c64d-6v65m"] Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.879149 4830 scope.go:117] "RemoveContainer" containerID="70171760d19bc95fe1119723b430ea2e6d64c2567c470e0b4fc7a32a96cd75ca" Jan 31 09:04:51 crc kubenswrapper[4830]: E0131 09:04:51.879677 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70171760d19bc95fe1119723b430ea2e6d64c2567c470e0b4fc7a32a96cd75ca\": container with ID starting with 70171760d19bc95fe1119723b430ea2e6d64c2567c470e0b4fc7a32a96cd75ca not found: ID does not exist" containerID="70171760d19bc95fe1119723b430ea2e6d64c2567c470e0b4fc7a32a96cd75ca" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.879735 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70171760d19bc95fe1119723b430ea2e6d64c2567c470e0b4fc7a32a96cd75ca"} err="failed to get container status \"70171760d19bc95fe1119723b430ea2e6d64c2567c470e0b4fc7a32a96cd75ca\": rpc error: code = NotFound desc = could not find container \"70171760d19bc95fe1119723b430ea2e6d64c2567c470e0b4fc7a32a96cd75ca\": container with ID starting with 70171760d19bc95fe1119723b430ea2e6d64c2567c470e0b4fc7a32a96cd75ca not found: ID does not exist" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.879776 4830 scope.go:117] "RemoveContainer" containerID="ca55ffa53a3e29498cb5fa92d8d28ac1309cedbc256b7869890817b597121576" Jan 31 09:04:51 crc kubenswrapper[4830]: E0131 09:04:51.880184 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca55ffa53a3e29498cb5fa92d8d28ac1309cedbc256b7869890817b597121576\": container with ID starting with ca55ffa53a3e29498cb5fa92d8d28ac1309cedbc256b7869890817b597121576 not found: ID does not exist" containerID="ca55ffa53a3e29498cb5fa92d8d28ac1309cedbc256b7869890817b597121576" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.880243 4830 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca55ffa53a3e29498cb5fa92d8d28ac1309cedbc256b7869890817b597121576"} err="failed to get container status \"ca55ffa53a3e29498cb5fa92d8d28ac1309cedbc256b7869890817b597121576\": rpc error: code = NotFound desc = could not find container \"ca55ffa53a3e29498cb5fa92d8d28ac1309cedbc256b7869890817b597121576\": container with ID starting with ca55ffa53a3e29498cb5fa92d8d28ac1309cedbc256b7869890817b597121576 not found: ID does not exist" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.880285 4830 scope.go:117] "RemoveContainer" containerID="90dad789e9ad829c314d34749f00d201a341bf01fe0232fbfa2e2ec5b40f6917" Jan 31 09:04:51 crc kubenswrapper[4830]: E0131 09:04:51.880653 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90dad789e9ad829c314d34749f00d201a341bf01fe0232fbfa2e2ec5b40f6917\": container with ID starting with 90dad789e9ad829c314d34749f00d201a341bf01fe0232fbfa2e2ec5b40f6917 not found: ID does not exist" containerID="90dad789e9ad829c314d34749f00d201a341bf01fe0232fbfa2e2ec5b40f6917" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.880805 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90dad789e9ad829c314d34749f00d201a341bf01fe0232fbfa2e2ec5b40f6917"} err="failed to get container status \"90dad789e9ad829c314d34749f00d201a341bf01fe0232fbfa2e2ec5b40f6917\": rpc error: code = NotFound desc = could not find container \"90dad789e9ad829c314d34749f00d201a341bf01fe0232fbfa2e2ec5b40f6917\": container with ID starting with 90dad789e9ad829c314d34749f00d201a341bf01fe0232fbfa2e2ec5b40f6917 not found: ID does not exist" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.880905 4830 scope.go:117] "RemoveContainer" containerID="b86cf5e18f70fb8b1beeb189caff6edda4380b839fa823e063dd7c42a9230881" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.901113 4830 scope.go:117] "RemoveContainer" containerID="b86cf5e18f70fb8b1beeb189caff6edda4380b839fa823e063dd7c42a9230881" Jan 31 09:04:51 crc kubenswrapper[4830]: E0131 09:04:51.901708 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b86cf5e18f70fb8b1beeb189caff6edda4380b839fa823e063dd7c42a9230881\": container with ID starting with b86cf5e18f70fb8b1beeb189caff6edda4380b839fa823e063dd7c42a9230881 not found: ID does not exist" containerID="b86cf5e18f70fb8b1beeb189caff6edda4380b839fa823e063dd7c42a9230881" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.901764 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b86cf5e18f70fb8b1beeb189caff6edda4380b839fa823e063dd7c42a9230881"} err="failed to get container status \"b86cf5e18f70fb8b1beeb189caff6edda4380b839fa823e063dd7c42a9230881\": rpc error: code = NotFound desc = could not find container \"b86cf5e18f70fb8b1beeb189caff6edda4380b839fa823e063dd7c42a9230881\": container with ID starting with b86cf5e18f70fb8b1beeb189caff6edda4380b839fa823e063dd7c42a9230881 not found: ID does not exist" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.901805 4830 scope.go:117] "RemoveContainer" containerID="5506e8040cc3bfa8ac6090d25dc10ca91b43d9d322036b1d7b95dcfdcc51ddd7" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.933926 4830 scope.go:117] "RemoveContainer" containerID="5506e8040cc3bfa8ac6090d25dc10ca91b43d9d322036b1d7b95dcfdcc51ddd7" Jan 31 09:04:51 
crc kubenswrapper[4830]: E0131 09:04:51.946106 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5506e8040cc3bfa8ac6090d25dc10ca91b43d9d322036b1d7b95dcfdcc51ddd7\": container with ID starting with 5506e8040cc3bfa8ac6090d25dc10ca91b43d9d322036b1d7b95dcfdcc51ddd7 not found: ID does not exist" containerID="5506e8040cc3bfa8ac6090d25dc10ca91b43d9d322036b1d7b95dcfdcc51ddd7" Jan 31 09:04:51 crc kubenswrapper[4830]: I0131 09:04:51.946274 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5506e8040cc3bfa8ac6090d25dc10ca91b43d9d322036b1d7b95dcfdcc51ddd7"} err="failed to get container status \"5506e8040cc3bfa8ac6090d25dc10ca91b43d9d322036b1d7b95dcfdcc51ddd7\": rpc error: code = NotFound desc = could not find container \"5506e8040cc3bfa8ac6090d25dc10ca91b43d9d322036b1d7b95dcfdcc51ddd7\": container with ID starting with 5506e8040cc3bfa8ac6090d25dc10ca91b43d9d322036b1d7b95dcfdcc51ddd7 not found: ID does not exist" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.257649 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38a174d9-22cd-4e22-b157-499b9df4e292" path="/var/lib/kubelet/pods/38a174d9-22cd-4e22-b157-499b9df4e292/volumes" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.258313 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e0a1032-01f8-476b-b725-b43704ec47df" path="/var/lib/kubelet/pods/6e0a1032-01f8-476b-b725-b43704ec47df/volumes" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.258862 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="716e78b2-1856-45e6-a3fa-73538be51a97" path="/var/lib/kubelet/pods/716e78b2-1856-45e6-a3fa-73538be51a97/volumes" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.700634 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz"] Jan 31 09:04:52 crc kubenswrapper[4830]: E0131 09:04:52.701051 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33" containerName="registry-server" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.701071 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33" containerName="registry-server" Jan 31 09:04:52 crc kubenswrapper[4830]: E0131 09:04:52.701085 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3477b1ed-ccf0-4f60-9505-ff0e417750af" containerName="extract-utilities" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.701092 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3477b1ed-ccf0-4f60-9505-ff0e417750af" containerName="extract-utilities" Jan 31 09:04:52 crc kubenswrapper[4830]: E0131 09:04:52.701102 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="716e78b2-1856-45e6-a3fa-73538be51a97" containerName="extract-content" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.701109 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="716e78b2-1856-45e6-a3fa-73538be51a97" containerName="extract-content" Jan 31 09:04:52 crc kubenswrapper[4830]: E0131 09:04:52.701120 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3477b1ed-ccf0-4f60-9505-ff0e417750af" containerName="registry-server" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.701126 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3477b1ed-ccf0-4f60-9505-ff0e417750af" 
containerName="registry-server" Jan 31 09:04:52 crc kubenswrapper[4830]: E0131 09:04:52.701138 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33" containerName="extract-content" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.701144 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33" containerName="extract-content" Jan 31 09:04:52 crc kubenswrapper[4830]: E0131 09:04:52.701152 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea666a92-d7aa-4e9b-8c54-88ad8ae517aa" containerName="extract-utilities" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.701160 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea666a92-d7aa-4e9b-8c54-88ad8ae517aa" containerName="extract-utilities" Jan 31 09:04:52 crc kubenswrapper[4830]: E0131 09:04:52.701169 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="716e78b2-1856-45e6-a3fa-73538be51a97" containerName="extract-utilities" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.701176 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="716e78b2-1856-45e6-a3fa-73538be51a97" containerName="extract-utilities" Jan 31 09:04:52 crc kubenswrapper[4830]: E0131 09:04:52.701186 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e0a1032-01f8-476b-b725-b43704ec47df" containerName="route-controller-manager" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.701191 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e0a1032-01f8-476b-b725-b43704ec47df" containerName="route-controller-manager" Jan 31 09:04:52 crc kubenswrapper[4830]: E0131 09:04:52.701200 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea666a92-d7aa-4e9b-8c54-88ad8ae517aa" containerName="registry-server" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.701206 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea666a92-d7aa-4e9b-8c54-88ad8ae517aa" containerName="registry-server" Jan 31 09:04:52 crc kubenswrapper[4830]: E0131 09:04:52.701217 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38a174d9-22cd-4e22-b157-499b9df4e292" containerName="controller-manager" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.701224 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a174d9-22cd-4e22-b157-499b9df4e292" containerName="controller-manager" Jan 31 09:04:52 crc kubenswrapper[4830]: E0131 09:04:52.701230 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3477b1ed-ccf0-4f60-9505-ff0e417750af" containerName="extract-content" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.701236 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3477b1ed-ccf0-4f60-9505-ff0e417750af" containerName="extract-content" Jan 31 09:04:52 crc kubenswrapper[4830]: E0131 09:04:52.701242 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea666a92-d7aa-4e9b-8c54-88ad8ae517aa" containerName="extract-content" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.701248 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea666a92-d7aa-4e9b-8c54-88ad8ae517aa" containerName="extract-content" Jan 31 09:04:52 crc kubenswrapper[4830]: E0131 09:04:52.701256 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="716e78b2-1856-45e6-a3fa-73538be51a97" containerName="registry-server" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.701262 4830 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="716e78b2-1856-45e6-a3fa-73538be51a97" containerName="registry-server" Jan 31 09:04:52 crc kubenswrapper[4830]: E0131 09:04:52.701270 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33" containerName="extract-utilities" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.701276 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33" containerName="extract-utilities" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.701381 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="38a174d9-22cd-4e22-b157-499b9df4e292" containerName="controller-manager" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.701391 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea666a92-d7aa-4e9b-8c54-88ad8ae517aa" containerName="registry-server" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.701400 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="3477b1ed-ccf0-4f60-9505-ff0e417750af" containerName="registry-server" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.701409 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e0a1032-01f8-476b-b725-b43704ec47df" containerName="route-controller-manager" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.701417 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="716e78b2-1856-45e6-a3fa-73538be51a97" containerName="registry-server" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.701425 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1aef8b4-46c8-4ca1-87d9-0bdc6a53ca33" containerName="registry-server" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.701964 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.703772 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6488cf5546-fd5sf"] Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.706258 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.707099 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.707265 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.707278 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.707377 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.708386 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.708528 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.712532 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.712787 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.713243 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.713492 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.713663 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.714415 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.717366 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6488cf5546-fd5sf"] Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.718231 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.726852 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz"] Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.780280 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d39fccc-7441-408a-b27a-6cd6c53ad159-client-ca\") pod \"route-controller-manager-9dc64dc66-ldnmz\" (UID: \"9d39fccc-7441-408a-b27a-6cd6c53ad159\") " pod="openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.780352 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/88e7057e-29a9-4bba-a588-11ae4def7947-client-ca\") pod \"controller-manager-6488cf5546-fd5sf\" (UID: \"88e7057e-29a9-4bba-a588-11ae4def7947\") " pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.780456 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d39fccc-7441-408a-b27a-6cd6c53ad159-config\") pod \"route-controller-manager-9dc64dc66-ldnmz\" (UID: \"9d39fccc-7441-408a-b27a-6cd6c53ad159\") " pod="openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.780488 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88e7057e-29a9-4bba-a588-11ae4def7947-serving-cert\") pod \"controller-manager-6488cf5546-fd5sf\" (UID: \"88e7057e-29a9-4bba-a588-11ae4def7947\") " pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.780524 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88e7057e-29a9-4bba-a588-11ae4def7947-proxy-ca-bundles\") pod \"controller-manager-6488cf5546-fd5sf\" (UID: \"88e7057e-29a9-4bba-a588-11ae4def7947\") " pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.780577 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88e7057e-29a9-4bba-a588-11ae4def7947-config\") pod \"controller-manager-6488cf5546-fd5sf\" (UID: \"88e7057e-29a9-4bba-a588-11ae4def7947\") " pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.780616 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9lz9\" (UniqueName: \"kubernetes.io/projected/88e7057e-29a9-4bba-a588-11ae4def7947-kube-api-access-k9lz9\") pod \"controller-manager-6488cf5546-fd5sf\" (UID: \"88e7057e-29a9-4bba-a588-11ae4def7947\") " pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.780663 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d39fccc-7441-408a-b27a-6cd6c53ad159-serving-cert\") pod \"route-controller-manager-9dc64dc66-ldnmz\" (UID: \"9d39fccc-7441-408a-b27a-6cd6c53ad159\") " pod="openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.780693 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72zm8\" (UniqueName: \"kubernetes.io/projected/9d39fccc-7441-408a-b27a-6cd6c53ad159-kube-api-access-72zm8\") pod \"route-controller-manager-9dc64dc66-ldnmz\" (UID: \"9d39fccc-7441-408a-b27a-6cd6c53ad159\") " pod="openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.882074 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/88e7057e-29a9-4bba-a588-11ae4def7947-config\") pod \"controller-manager-6488cf5546-fd5sf\" (UID: \"88e7057e-29a9-4bba-a588-11ae4def7947\") " pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.882162 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9lz9\" (UniqueName: \"kubernetes.io/projected/88e7057e-29a9-4bba-a588-11ae4def7947-kube-api-access-k9lz9\") pod \"controller-manager-6488cf5546-fd5sf\" (UID: \"88e7057e-29a9-4bba-a588-11ae4def7947\") " pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.882201 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d39fccc-7441-408a-b27a-6cd6c53ad159-serving-cert\") pod \"route-controller-manager-9dc64dc66-ldnmz\" (UID: \"9d39fccc-7441-408a-b27a-6cd6c53ad159\") " pod="openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.882229 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72zm8\" (UniqueName: \"kubernetes.io/projected/9d39fccc-7441-408a-b27a-6cd6c53ad159-kube-api-access-72zm8\") pod \"route-controller-manager-9dc64dc66-ldnmz\" (UID: \"9d39fccc-7441-408a-b27a-6cd6c53ad159\") " pod="openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.882251 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d39fccc-7441-408a-b27a-6cd6c53ad159-client-ca\") pod \"route-controller-manager-9dc64dc66-ldnmz\" (UID: \"9d39fccc-7441-408a-b27a-6cd6c53ad159\") " pod="openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.882272 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88e7057e-29a9-4bba-a588-11ae4def7947-client-ca\") pod \"controller-manager-6488cf5546-fd5sf\" (UID: \"88e7057e-29a9-4bba-a588-11ae4def7947\") " pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.882326 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d39fccc-7441-408a-b27a-6cd6c53ad159-config\") pod \"route-controller-manager-9dc64dc66-ldnmz\" (UID: \"9d39fccc-7441-408a-b27a-6cd6c53ad159\") " pod="openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.882347 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88e7057e-29a9-4bba-a588-11ae4def7947-serving-cert\") pod \"controller-manager-6488cf5546-fd5sf\" (UID: \"88e7057e-29a9-4bba-a588-11ae4def7947\") " pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.882366 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88e7057e-29a9-4bba-a588-11ae4def7947-proxy-ca-bundles\") pod \"controller-manager-6488cf5546-fd5sf\" (UID: 
\"88e7057e-29a9-4bba-a588-11ae4def7947\") " pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.883639 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88e7057e-29a9-4bba-a588-11ae4def7947-client-ca\") pod \"controller-manager-6488cf5546-fd5sf\" (UID: \"88e7057e-29a9-4bba-a588-11ae4def7947\") " pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.883772 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d39fccc-7441-408a-b27a-6cd6c53ad159-config\") pod \"route-controller-manager-9dc64dc66-ldnmz\" (UID: \"9d39fccc-7441-408a-b27a-6cd6c53ad159\") " pod="openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.884145 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88e7057e-29a9-4bba-a588-11ae4def7947-proxy-ca-bundles\") pod \"controller-manager-6488cf5546-fd5sf\" (UID: \"88e7057e-29a9-4bba-a588-11ae4def7947\") " pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.884352 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d39fccc-7441-408a-b27a-6cd6c53ad159-client-ca\") pod \"route-controller-manager-9dc64dc66-ldnmz\" (UID: \"9d39fccc-7441-408a-b27a-6cd6c53ad159\") " pod="openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.884381 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88e7057e-29a9-4bba-a588-11ae4def7947-config\") pod \"controller-manager-6488cf5546-fd5sf\" (UID: \"88e7057e-29a9-4bba-a588-11ae4def7947\") " pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.887288 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88e7057e-29a9-4bba-a588-11ae4def7947-serving-cert\") pod \"controller-manager-6488cf5546-fd5sf\" (UID: \"88e7057e-29a9-4bba-a588-11ae4def7947\") " pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.887296 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d39fccc-7441-408a-b27a-6cd6c53ad159-serving-cert\") pod \"route-controller-manager-9dc64dc66-ldnmz\" (UID: \"9d39fccc-7441-408a-b27a-6cd6c53ad159\") " pod="openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.906508 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9lz9\" (UniqueName: \"kubernetes.io/projected/88e7057e-29a9-4bba-a588-11ae4def7947-kube-api-access-k9lz9\") pod \"controller-manager-6488cf5546-fd5sf\" (UID: \"88e7057e-29a9-4bba-a588-11ae4def7947\") " pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf" Jan 31 09:04:52 crc kubenswrapper[4830]: I0131 09:04:52.908525 4830 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-72zm8\" (UniqueName: \"kubernetes.io/projected/9d39fccc-7441-408a-b27a-6cd6c53ad159-kube-api-access-72zm8\") pod \"route-controller-manager-9dc64dc66-ldnmz\" (UID: \"9d39fccc-7441-408a-b27a-6cd6c53ad159\") " pod="openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz" Jan 31 09:04:53 crc kubenswrapper[4830]: I0131 09:04:53.026317 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz" Jan 31 09:04:53 crc kubenswrapper[4830]: I0131 09:04:53.038503 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf" Jan 31 09:04:53 crc kubenswrapper[4830]: I0131 09:04:53.270161 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6488cf5546-fd5sf"] Jan 31 09:04:53 crc kubenswrapper[4830]: W0131 09:04:53.278920 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88e7057e_29a9_4bba_a588_11ae4def7947.slice/crio-6a8352a2ba7c41f247990ea48c57849aaa08c676c7dc773db9da574142446392 WatchSource:0}: Error finding container 6a8352a2ba7c41f247990ea48c57849aaa08c676c7dc773db9da574142446392: Status 404 returned error can't find the container with id 6a8352a2ba7c41f247990ea48c57849aaa08c676c7dc773db9da574142446392 Jan 31 09:04:53 crc kubenswrapper[4830]: I0131 09:04:53.320932 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz"] Jan 31 09:04:53 crc kubenswrapper[4830]: W0131 09:04:53.325756 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d39fccc_7441_408a_b27a_6cd6c53ad159.slice/crio-4ed6982978591e8e3dff355398a1faecf73c8d569bf51e5a15f9185cdfdccd82 WatchSource:0}: Error finding container 4ed6982978591e8e3dff355398a1faecf73c8d569bf51e5a15f9185cdfdccd82: Status 404 returned error can't find the container with id 4ed6982978591e8e3dff355398a1faecf73c8d569bf51e5a15f9185cdfdccd82 Jan 31 09:04:53 crc kubenswrapper[4830]: I0131 09:04:53.814328 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf" event={"ID":"88e7057e-29a9-4bba-a588-11ae4def7947","Type":"ContainerStarted","Data":"612de1acd164c5d864167c9e586526fa1bcddbe39ed6bfa04ccb806b246b55ff"} Jan 31 09:04:53 crc kubenswrapper[4830]: I0131 09:04:53.814395 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf" event={"ID":"88e7057e-29a9-4bba-a588-11ae4def7947","Type":"ContainerStarted","Data":"6a8352a2ba7c41f247990ea48c57849aaa08c676c7dc773db9da574142446392"} Jan 31 09:04:53 crc kubenswrapper[4830]: I0131 09:04:53.815421 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf" Jan 31 09:04:53 crc kubenswrapper[4830]: I0131 09:04:53.818227 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz" event={"ID":"9d39fccc-7441-408a-b27a-6cd6c53ad159","Type":"ContainerStarted","Data":"2733fdc2c5f66907564519a2d957ad6f1ea9852a3a02051f977934ddb9db0d5c"} Jan 31 09:04:53 crc kubenswrapper[4830]: I0131 09:04:53.818306 4830 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz" event={"ID":"9d39fccc-7441-408a-b27a-6cd6c53ad159","Type":"ContainerStarted","Data":"4ed6982978591e8e3dff355398a1faecf73c8d569bf51e5a15f9185cdfdccd82"} Jan 31 09:04:53 crc kubenswrapper[4830]: I0131 09:04:53.818528 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz" Jan 31 09:04:53 crc kubenswrapper[4830]: I0131 09:04:53.834259 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf" Jan 31 09:04:53 crc kubenswrapper[4830]: I0131 09:04:53.856129 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf" podStartSLOduration=3.856100193 podStartE2EDuration="3.856100193s" podCreationTimestamp="2026-01-31 09:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:04:53.855900037 +0000 UTC m=+238.349262489" watchObservedRunningTime="2026-01-31 09:04:53.856100193 +0000 UTC m=+238.349462635" Jan 31 09:04:54 crc kubenswrapper[4830]: I0131 09:04:54.132636 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz" Jan 31 09:04:54 crc kubenswrapper[4830]: I0131 09:04:54.155762 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz" podStartSLOduration=4.155717119 podStartE2EDuration="4.155717119s" podCreationTimestamp="2026-01-31 09:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:04:53.923648843 +0000 UTC m=+238.417011285" watchObservedRunningTime="2026-01-31 09:04:54.155717119 +0000 UTC m=+238.649079561" Jan 31 09:04:57 crc kubenswrapper[4830]: I0131 09:04:57.045554 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hzk7b"] Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.062041 4830 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.063276 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b" gracePeriod=15 Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.063393 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79" gracePeriod=15 Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.063496 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" 
containerID="cri-o://b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607" gracePeriod=15 Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.063446 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75" gracePeriod=15 Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.063851 4830 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 09:05:02 crc kubenswrapper[4830]: E0131 09:05:02.064165 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.064181 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 09:05:02 crc kubenswrapper[4830]: E0131 09:05:02.064195 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.064036 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d" gracePeriod=15 Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.064204 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 09:05:02 crc kubenswrapper[4830]: E0131 09:05:02.064327 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.064337 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 09:05:02 crc kubenswrapper[4830]: E0131 09:05:02.064348 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.064357 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 09:05:02 crc kubenswrapper[4830]: E0131 09:05:02.064381 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.064391 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 09:05:02 crc kubenswrapper[4830]: E0131 09:05:02.064407 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.064416 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 09:05:02 crc kubenswrapper[4830]: E0131 09:05:02.064434 4830 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.064445 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.064619 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.064635 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.064645 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.064660 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.064674 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.064684 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.064696 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 09:05:02 crc kubenswrapper[4830]: E0131 09:05:02.064834 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.064850 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.066520 4830 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.067308 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.073286 4830 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.113954 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.121478 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.121546 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.121586 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.121705 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.121803 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.121873 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.121929 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.122090 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.223577 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.223634 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.223664 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.223696 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.223714 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.223718 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.223760 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.223790 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.223795 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.223814 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.223825 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.223822 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.223855 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.223934 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.223993 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.224034 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.404192 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4830]: E0131 09:05:02.432970 4830 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.53:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fc5791614b858 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 09:05:02.432065624 +0000 UTC m=+246.925428066,LastTimestamp:2026-01-31 09:05:02.432065624 +0000 UTC m=+246.925428066,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 09:05:02 crc kubenswrapper[4830]: E0131 09:05:02.818366 4830 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.53:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fc5791614b858 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 09:05:02.432065624 +0000 UTC m=+246.925428066,LastTimestamp:2026-01-31 09:05:02.432065624 +0000 UTC m=+246.925428066,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.873551 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.874895 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.875675 4830 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79" exitCode=0 Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.875715 4830 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d" exitCode=0 Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 
09:05:02.875767 4830 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607" exitCode=0 Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.875775 4830 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75" exitCode=2 Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.875895 4830 scope.go:117] "RemoveContainer" containerID="8cac33719e081864153ce20c60069a21036f3c29f7b4395021c3a2fe4f09dbc9" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.878310 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a6e8e3c4937bd4c2a71b73d9762af599201d7928895e4d6a5ea2d397b1462ab1"} Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.878373 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"9bcdbcfbab6a8d7bb6d96e23709a4146b4deaaf61308d31c88aa7ebe247bdd0b"} Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.879400 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.880254 4830 generic.go:334] "Generic (PLEG): container finished" podID="0cfd30f0-26ec-4ac4-b315-4d99ec492231" containerID="5335aa5937ff53a6c2ed174596ead653063c3b31846269fb0ed8abf78cd62068" exitCode=0 Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.880381 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0cfd30f0-26ec-4ac4-b315-4d99ec492231","Type":"ContainerDied","Data":"5335aa5937ff53a6c2ed174596ead653063c3b31846269fb0ed8abf78cd62068"} Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.881316 4830 status_manager.go:851] "Failed to get status for pod" podUID="0cfd30f0-26ec-4ac4-b315-4d99ec492231" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Jan 31 09:05:02 crc kubenswrapper[4830]: I0131 09:05:02.881758 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Jan 31 09:05:03 crc kubenswrapper[4830]: I0131 09:05:03.888510 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 09:05:04 crc kubenswrapper[4830]: I0131 09:05:04.293879 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 09:05:04 crc kubenswrapper[4830]: I0131 09:05:04.298949 4830 status_manager.go:851] "Failed to get status for pod" podUID="0cfd30f0-26ec-4ac4-b315-4d99ec492231" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4830]: I0131 09:05:04.299452 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4830]: I0131 09:05:04.358692 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0cfd30f0-26ec-4ac4-b315-4d99ec492231-var-lock\") pod \"0cfd30f0-26ec-4ac4-b315-4d99ec492231\" (UID: \"0cfd30f0-26ec-4ac4-b315-4d99ec492231\") " Jan 31 09:05:04 crc kubenswrapper[4830]: I0131 09:05:04.359216 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cfd30f0-26ec-4ac4-b315-4d99ec492231-kubelet-dir\") pod \"0cfd30f0-26ec-4ac4-b315-4d99ec492231\" (UID: \"0cfd30f0-26ec-4ac4-b315-4d99ec492231\") " Jan 31 09:05:04 crc kubenswrapper[4830]: I0131 09:05:04.359257 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cfd30f0-26ec-4ac4-b315-4d99ec492231-kube-api-access\") pod \"0cfd30f0-26ec-4ac4-b315-4d99ec492231\" (UID: \"0cfd30f0-26ec-4ac4-b315-4d99ec492231\") " Jan 31 09:05:04 crc kubenswrapper[4830]: I0131 09:05:04.358806 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cfd30f0-26ec-4ac4-b315-4d99ec492231-var-lock" (OuterVolumeSpecName: "var-lock") pod "0cfd30f0-26ec-4ac4-b315-4d99ec492231" (UID: "0cfd30f0-26ec-4ac4-b315-4d99ec492231"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:05:04 crc kubenswrapper[4830]: I0131 09:05:04.359604 4830 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0cfd30f0-26ec-4ac4-b315-4d99ec492231-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:04 crc kubenswrapper[4830]: I0131 09:05:04.359306 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cfd30f0-26ec-4ac4-b315-4d99ec492231-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0cfd30f0-26ec-4ac4-b315-4d99ec492231" (UID: "0cfd30f0-26ec-4ac4-b315-4d99ec492231"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:05:04 crc kubenswrapper[4830]: I0131 09:05:04.365187 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cfd30f0-26ec-4ac4-b315-4d99ec492231-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0cfd30f0-26ec-4ac4-b315-4d99ec492231" (UID: "0cfd30f0-26ec-4ac4-b315-4d99ec492231"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:05:04 crc kubenswrapper[4830]: I0131 09:05:04.461485 4830 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cfd30f0-26ec-4ac4-b315-4d99ec492231-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:04 crc kubenswrapper[4830]: I0131 09:05:04.461545 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cfd30f0-26ec-4ac4-b315-4d99ec492231-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:04 crc kubenswrapper[4830]: I0131 09:05:04.897375 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 09:05:04 crc kubenswrapper[4830]: I0131 09:05:04.899128 4830 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b" exitCode=0 Jan 31 09:05:04 crc kubenswrapper[4830]: I0131 09:05:04.900974 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0cfd30f0-26ec-4ac4-b315-4d99ec492231","Type":"ContainerDied","Data":"b1808037e35e61011f2412bc618c1f4796b10bace9b781f8d89764a82acd957f"} Jan 31 09:05:04 crc kubenswrapper[4830]: I0131 09:05:04.901010 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1808037e35e61011f2412bc618c1f4796b10bace9b781f8d89764a82acd957f" Jan 31 09:05:04 crc kubenswrapper[4830]: I0131 09:05:04.901034 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 09:05:04 crc kubenswrapper[4830]: I0131 09:05:04.934693 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4830]: I0131 09:05:04.935297 4830 status_manager.go:851] "Failed to get status for pod" podUID="0cfd30f0-26ec-4ac4-b315-4d99ec492231" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4830]: I0131 09:05:04.944167 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 09:05:04 crc kubenswrapper[4830]: I0131 09:05:04.950157 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:04 crc kubenswrapper[4830]: I0131 09:05:04.951028 4830 status_manager.go:851] "Failed to get status for pod" podUID="0cfd30f0-26ec-4ac4-b315-4d99ec492231" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4830]: I0131 09:05:04.951620 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4830]: I0131 09:05:04.952227 4830 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Jan 31 09:05:05 crc kubenswrapper[4830]: I0131 09:05:05.068433 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 09:05:05 crc kubenswrapper[4830]: I0131 09:05:05.068493 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 09:05:05 crc kubenswrapper[4830]: I0131 09:05:05.068542 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:05:05 crc kubenswrapper[4830]: I0131 09:05:05.068630 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 09:05:05 crc kubenswrapper[4830]: I0131 09:05:05.068705 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:05:05 crc kubenswrapper[4830]: I0131 09:05:05.068868 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:05:05 crc kubenswrapper[4830]: I0131 09:05:05.069049 4830 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:05 crc kubenswrapper[4830]: I0131 09:05:05.069066 4830 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:05 crc kubenswrapper[4830]: I0131 09:05:05.069076 4830 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:05 crc kubenswrapper[4830]: I0131 09:05:05.910893 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 09:05:05 crc kubenswrapper[4830]: I0131 09:05:05.912617 4830 scope.go:117] "RemoveContainer" containerID="d9b64732f8259953717c8ad355889afd462ce339c881ba9c105f6d3f39245e79" Jan 31 09:05:05 crc kubenswrapper[4830]: I0131 09:05:05.912704 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:05 crc kubenswrapper[4830]: E0131 09:05:05.924034 4830 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" Jan 31 09:05:05 crc kubenswrapper[4830]: E0131 09:05:05.931180 4830 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" Jan 31 09:05:05 crc kubenswrapper[4830]: E0131 09:05:05.933112 4830 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" Jan 31 09:05:05 crc kubenswrapper[4830]: I0131 09:05:05.933230 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Jan 31 09:05:05 crc kubenswrapper[4830]: I0131 09:05:05.933562 4830 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Jan 31 09:05:05 crc kubenswrapper[4830]: E0131 09:05:05.933766 4830 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" Jan 31 09:05:05 crc kubenswrapper[4830]: I0131 09:05:05.933889 4830 status_manager.go:851] "Failed to get status for pod" 
podUID="0cfd30f0-26ec-4ac4-b315-4d99ec492231" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Jan 31 09:05:05 crc kubenswrapper[4830]: E0131 09:05:05.934330 4830 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" Jan 31 09:05:05 crc kubenswrapper[4830]: I0131 09:05:05.934372 4830 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 31 09:05:05 crc kubenswrapper[4830]: E0131 09:05:05.934640 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" interval="200ms" Jan 31 09:05:05 crc kubenswrapper[4830]: I0131 09:05:05.940740 4830 scope.go:117] "RemoveContainer" containerID="4d4d8bd6451d7416a2a312147ec282db5de8410910834f555c8d09062902130d" Jan 31 09:05:05 crc kubenswrapper[4830]: I0131 09:05:05.959786 4830 scope.go:117] "RemoveContainer" containerID="b68f539fdc8bbf394adaf06d3e8682cdf498d0994c53f3754caf282cf9cf3607" Jan 31 09:05:05 crc kubenswrapper[4830]: I0131 09:05:05.977476 4830 scope.go:117] "RemoveContainer" containerID="539885da1b8f083e7ae878fec45416be66a5b06df063f046a05a4981bc8a8f75" Jan 31 09:05:05 crc kubenswrapper[4830]: I0131 09:05:05.994098 4830 scope.go:117] "RemoveContainer" containerID="0fc4f1b4979b8d902676eced34650daea491e68b4c4377492b928a9f0f78d12b" Jan 31 09:05:06 crc kubenswrapper[4830]: I0131 09:05:06.009412 4830 scope.go:117] "RemoveContainer" containerID="41db253848cac205af5f91621cac4e8223bda7d76984afd734a093f8e8a2d125" Jan 31 09:05:06 crc kubenswrapper[4830]: E0131 09:05:06.135821 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" interval="400ms" Jan 31 09:05:06 crc kubenswrapper[4830]: I0131 09:05:06.255903 4830 status_manager.go:851] "Failed to get status for pod" podUID="0cfd30f0-26ec-4ac4-b315-4d99ec492231" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Jan 31 09:05:06 crc kubenswrapper[4830]: I0131 09:05:06.256857 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Jan 31 09:05:06 crc kubenswrapper[4830]: I0131 09:05:06.257425 4830 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Jan 31 09:05:06 crc kubenswrapper[4830]: I0131 09:05:06.259563 
4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Jan 31 09:05:06 crc kubenswrapper[4830]: E0131 09:05:06.537150 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" interval="800ms"
Jan 31 09:05:07 crc kubenswrapper[4830]: E0131 09:05:07.072665 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:05:07Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:05:07Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:05:07Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:05:07Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused"
Jan 31 09:05:07 crc kubenswrapper[4830]: E0131 09:05:07.073077 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused"
Jan 31 09:05:07 crc kubenswrapper[4830]: E0131 09:05:07.073360 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused"
Jan 31 09:05:07 crc kubenswrapper[4830]: E0131 09:05:07.073626 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused"
Jan 31 09:05:07 crc kubenswrapper[4830]: E0131 09:05:07.073867 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused"
Jan 31 09:05:07 crc kubenswrapper[4830]: E0131 09:05:07.073887 4830 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 31 09:05:07 crc kubenswrapper[4830]: E0131 09:05:07.338136 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" interval="1.6s"
Jan 31 09:05:08 crc kubenswrapper[4830]: E0131 09:05:08.939420 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" interval="3.2s"
Jan 31 09:05:12 crc kubenswrapper[4830]: E0131 09:05:12.140450 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" interval="6.4s"
Jan 31 09:05:12 crc kubenswrapper[4830]: E0131 09:05:12.820438 4830 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.53:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fc5791614b858 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 09:05:02.432065624 +0000 UTC m=+246.925428066,LastTimestamp:2026-01-31 09:05:02.432065624 +0000 UTC m=+246.925428066,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 31 09:05:14 crc kubenswrapper[4830]: E0131 09:05:14.287646 4830 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.53:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" volumeName="registry-storage"
Jan 31 09:05:16 crc kubenswrapper[4830]: I0131 09:05:16.254628 4830 status_manager.go:851] "Failed to get status for pod" podUID="0cfd30f0-26ec-4ac4-b315-4d99ec492231" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.53:6443: connect: connection refused"
Jan 31 09:05:16 crc kubenswrapper[4830]: I0131 09:05:16.255372 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.53:6443: connect: connection refused"
Jan 31 09:05:16 crc kubenswrapper[4830]: I0131 09:05:16.978908 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 31 09:05:16 crc kubenswrapper[4830]: I0131 09:05:16.979262 4830 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426" exitCode=1
Jan 31 09:05:16 crc kubenswrapper[4830]: I0131 09:05:16.979303 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426"}
Jan 31 09:05:16 crc kubenswrapper[4830]: I0131 09:05:16.979973 4830 scope.go:117] "RemoveContainer" containerID="c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426"
Jan 31 09:05:16 crc kubenswrapper[4830]: I0131 09:05:16.980386 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.53:6443: connect: connection refused"
Jan 31 09:05:16 crc kubenswrapper[4830]: I0131 09:05:16.981132 4830 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.53:6443: connect: connection refused"
Jan 31 09:05:16 crc kubenswrapper[4830]: I0131 09:05:16.981865 4830 status_manager.go:851] "Failed to get status for pod" podUID="0cfd30f0-26ec-4ac4-b315-4d99ec492231" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.53:6443: connect: connection refused"
Jan 31 09:05:17 crc kubenswrapper[4830]: I0131 09:05:17.250408 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:05:17 crc kubenswrapper[4830]: I0131 09:05:17.251396 4830 status_manager.go:851] "Failed to get status for pod" podUID="0cfd30f0-26ec-4ac4-b315-4d99ec492231" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.53:6443: connect: connection refused"
Jan 31 09:05:17 crc kubenswrapper[4830]: I0131 09:05:17.252007 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.53:6443: connect: connection refused"
Jan 31 09:05:17 crc kubenswrapper[4830]: I0131 09:05:17.252565 4830 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.53:6443: connect: connection refused"
Jan 31 09:05:17 crc kubenswrapper[4830]: I0131 09:05:17.267661 4830 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c75e1c36-b769-464e-96eb-6d9b3c5aa384"
Jan 31 09:05:17 crc kubenswrapper[4830]: I0131 09:05:17.267777 4830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c75e1c36-b769-464e-96eb-6d9b3c5aa384"
Jan 31 09:05:17 crc kubenswrapper[4830]: E0131 09:05:17.268387 4830 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:05:17 crc kubenswrapper[4830]: I0131 09:05:17.269268 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:05:17 crc kubenswrapper[4830]: W0131 09:05:17.288259 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-c64dcc1f7b042619353aded40d2c89045ba2d04351bfcfb393f6a985ed2270fe WatchSource:0}: Error finding container c64dcc1f7b042619353aded40d2c89045ba2d04351bfcfb393f6a985ed2270fe: Status 404 returned error can't find the container with id c64dcc1f7b042619353aded40d2c89045ba2d04351bfcfb393f6a985ed2270fe
Jan 31 09:05:17 crc kubenswrapper[4830]: I0131 09:05:17.988432 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 31 09:05:17 crc kubenswrapper[4830]: I0131 09:05:17.988935 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a49046577e1bb5d63fea892db0b89c5f6ece8f18d3a0ad0eaf6cecdb7f6d5340"}
Jan 31 09:05:17 crc kubenswrapper[4830]: I0131 09:05:17.990019 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.53:6443: connect: connection refused"
Jan 31 09:05:17 crc kubenswrapper[4830]: I0131 09:05:17.990338 4830 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.53:6443: connect: connection refused"
Jan 31 09:05:17 crc kubenswrapper[4830]: I0131 09:05:17.990768 4830 status_manager.go:851] "Failed to get status for pod" podUID="0cfd30f0-26ec-4ac4-b315-4d99ec492231" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.53:6443: connect: connection refused"
Jan 31 09:05:17 crc kubenswrapper[4830]: I0131 09:05:17.991086 4830 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="8d6a39d439187fa38da088675785a2db07e68f0c282b8bcccf519351defd91de" exitCode=0
Jan 31 09:05:17 crc kubenswrapper[4830]: I0131 09:05:17.991117 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"8d6a39d439187fa38da088675785a2db07e68f0c282b8bcccf519351defd91de"}
Jan 31 09:05:17 crc kubenswrapper[4830]: I0131 09:05:17.991134 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c64dcc1f7b042619353aded40d2c89045ba2d04351bfcfb393f6a985ed2270fe"}
Jan 31 09:05:17 crc kubenswrapper[4830]: I0131 09:05:17.991421 4830 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c75e1c36-b769-464e-96eb-6d9b3c5aa384"
Jan 31 09:05:17 crc kubenswrapper[4830]: I0131 09:05:17.991452 4830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c75e1c36-b769-464e-96eb-6d9b3c5aa384"
Jan 31 09:05:17 crc kubenswrapper[4830]: I0131 09:05:17.991907 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.53:6443: connect: connection refused"
Jan 31 09:05:17 crc kubenswrapper[4830]: E0131 09:05:17.991966 4830 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:05:17 crc kubenswrapper[4830]: I0131 09:05:17.992247 4830 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.53:6443: connect: connection refused"
Jan 31 09:05:17 crc kubenswrapper[4830]: I0131 09:05:17.992564 4830 status_manager.go:851] "Failed to get status for pod" podUID="0cfd30f0-26ec-4ac4-b315-4d99ec492231" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.53:6443: connect: connection refused"
Jan 31 09:05:18 crc kubenswrapper[4830]: E0131 09:05:18.541620 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" interval="7s"
Jan 31 09:05:19 crc kubenswrapper[4830]: I0131 09:05:19.011263 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f6b7009e19de743993dfc0d00bf5dffc4d99bfb68abbab02d296f6548fe88ddd"}
Jan 31 09:05:19 crc kubenswrapper[4830]: I0131 09:05:19.562901 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 09:05:19 crc kubenswrapper[4830]: I0131 09:05:19.576257 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 09:05:20 crc kubenswrapper[4830]: I0131 09:05:20.019247 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0d861c2c71d14c370cb883fc5540a26eaad0eb341c2fa6e8a882c5f6b927110d"}
Jan 31 09:05:20 crc kubenswrapper[4830]: I0131 09:05:20.019310 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e08a069239524876ff976b61056ec0de81baeb4de7cc6e2e357078d8d6a596e8"}
Jan 31 09:05:20 crc kubenswrapper[4830]: I0131 09:05:20.019323 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"56a4c80c41a3dacf3221eb836577ceaad59ebf5374e7e2e0058b89604d8c2d23"}
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"56a4c80c41a3dacf3221eb836577ceaad59ebf5374e7e2e0058b89604d8c2d23"} Jan 31 09:05:20 crc kubenswrapper[4830]: I0131 09:05:20.019451 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 09:05:21 crc kubenswrapper[4830]: I0131 09:05:21.028793 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"91035dd12d1792b59f72347d581a97a985f1121d5b761508e92389325a376a10"} Jan 31 09:05:21 crc kubenswrapper[4830]: I0131 09:05:21.029165 4830 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c75e1c36-b769-464e-96eb-6d9b3c5aa384" Jan 31 09:05:21 crc kubenswrapper[4830]: I0131 09:05:21.030154 4830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c75e1c36-b769-464e-96eb-6d9b3c5aa384" Jan 31 09:05:22 crc kubenswrapper[4830]: I0131 09:05:22.079397 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" podUID="5fe5bd86-a665-4a73-8892-fd12a784463d" containerName="oauth-openshift" containerID="cri-o://bb377573acca1cadcbbd0e2208ca9329c7f68ae0060779b2e74b9b113b146b89" gracePeriod=15 Jan 31 09:05:22 crc kubenswrapper[4830]: I0131 09:05:22.269649 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:22 crc kubenswrapper[4830]: I0131 09:05:22.269707 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:22 crc kubenswrapper[4830]: I0131 09:05:22.277181 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.040990 4830 generic.go:334] "Generic (PLEG): container finished" podID="5fe5bd86-a665-4a73-8892-fd12a784463d" containerID="bb377573acca1cadcbbd0e2208ca9329c7f68ae0060779b2e74b9b113b146b89" exitCode=0 Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.041034 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" event={"ID":"5fe5bd86-a665-4a73-8892-fd12a784463d","Type":"ContainerDied","Data":"bb377573acca1cadcbbd0e2208ca9329c7f68ae0060779b2e74b9b113b146b89"} Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.041081 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" event={"ID":"5fe5bd86-a665-4a73-8892-fd12a784463d","Type":"ContainerDied","Data":"8adab15d8c8c06af57609909ec54cc53623ecd57ee4d7656578ddfd785fa5321"} Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.041096 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8adab15d8c8c06af57609909ec54cc53623ecd57ee4d7656578ddfd785fa5321" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.072418 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.134034 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-session\") pod \"5fe5bd86-a665-4a73-8892-fd12a784463d\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.134098 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5fe5bd86-a665-4a73-8892-fd12a784463d-audit-dir\") pod \"5fe5bd86-a665-4a73-8892-fd12a784463d\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.134131 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-user-template-error\") pod \"5fe5bd86-a665-4a73-8892-fd12a784463d\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.134165 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fe5bd86-a665-4a73-8892-fd12a784463d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5fe5bd86-a665-4a73-8892-fd12a784463d" (UID: "5fe5bd86-a665-4a73-8892-fd12a784463d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.134221 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5fe5bd86-a665-4a73-8892-fd12a784463d-audit-policies\") pod \"5fe5bd86-a665-4a73-8892-fd12a784463d\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.134261 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-service-ca\") pod \"5fe5bd86-a665-4a73-8892-fd12a784463d\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.134289 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-cliconfig\") pod \"5fe5bd86-a665-4a73-8892-fd12a784463d\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.134338 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-serving-cert\") pod \"5fe5bd86-a665-4a73-8892-fd12a784463d\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.134384 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-trusted-ca-bundle\") pod \"5fe5bd86-a665-4a73-8892-fd12a784463d\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " Jan 31 09:05:23 crc 
kubenswrapper[4830]: I0131 09:05:23.134415 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs9w7\" (UniqueName: \"kubernetes.io/projected/5fe5bd86-a665-4a73-8892-fd12a784463d-kube-api-access-cs9w7\") pod \"5fe5bd86-a665-4a73-8892-fd12a784463d\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.134445 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-user-idp-0-file-data\") pod \"5fe5bd86-a665-4a73-8892-fd12a784463d\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.134481 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-user-template-login\") pod \"5fe5bd86-a665-4a73-8892-fd12a784463d\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.134511 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-user-template-provider-selection\") pod \"5fe5bd86-a665-4a73-8892-fd12a784463d\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.134585 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-router-certs\") pod \"5fe5bd86-a665-4a73-8892-fd12a784463d\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.134620 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-ocp-branding-template\") pod \"5fe5bd86-a665-4a73-8892-fd12a784463d\" (UID: \"5fe5bd86-a665-4a73-8892-fd12a784463d\") " Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.134984 4830 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5fe5bd86-a665-4a73-8892-fd12a784463d-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.135416 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "5fe5bd86-a665-4a73-8892-fd12a784463d" (UID: "5fe5bd86-a665-4a73-8892-fd12a784463d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.135561 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "5fe5bd86-a665-4a73-8892-fd12a784463d" (UID: "5fe5bd86-a665-4a73-8892-fd12a784463d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.136530 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "5fe5bd86-a665-4a73-8892-fd12a784463d" (UID: "5fe5bd86-a665-4a73-8892-fd12a784463d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.137032 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe5bd86-a665-4a73-8892-fd12a784463d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "5fe5bd86-a665-4a73-8892-fd12a784463d" (UID: "5fe5bd86-a665-4a73-8892-fd12a784463d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.142434 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "5fe5bd86-a665-4a73-8892-fd12a784463d" (UID: "5fe5bd86-a665-4a73-8892-fd12a784463d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.142508 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe5bd86-a665-4a73-8892-fd12a784463d-kube-api-access-cs9w7" (OuterVolumeSpecName: "kube-api-access-cs9w7") pod "5fe5bd86-a665-4a73-8892-fd12a784463d" (UID: "5fe5bd86-a665-4a73-8892-fd12a784463d"). InnerVolumeSpecName "kube-api-access-cs9w7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.143241 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "5fe5bd86-a665-4a73-8892-fd12a784463d" (UID: "5fe5bd86-a665-4a73-8892-fd12a784463d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.143398 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "5fe5bd86-a665-4a73-8892-fd12a784463d" (UID: "5fe5bd86-a665-4a73-8892-fd12a784463d"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.143578 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "5fe5bd86-a665-4a73-8892-fd12a784463d" (UID: "5fe5bd86-a665-4a73-8892-fd12a784463d"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.143684 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "5fe5bd86-a665-4a73-8892-fd12a784463d" (UID: "5fe5bd86-a665-4a73-8892-fd12a784463d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.144036 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "5fe5bd86-a665-4a73-8892-fd12a784463d" (UID: "5fe5bd86-a665-4a73-8892-fd12a784463d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.149922 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "5fe5bd86-a665-4a73-8892-fd12a784463d" (UID: "5fe5bd86-a665-4a73-8892-fd12a784463d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.150548 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "5fe5bd86-a665-4a73-8892-fd12a784463d" (UID: "5fe5bd86-a665-4a73-8892-fd12a784463d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.236198 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs9w7\" (UniqueName: \"kubernetes.io/projected/5fe5bd86-a665-4a73-8892-fd12a784463d-kube-api-access-cs9w7\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.236243 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.236258 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.236270 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.236283 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.236293 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.236304 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.236315 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.236327 4830 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5fe5bd86-a665-4a73-8892-fd12a784463d-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.236337 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.236346 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.236356 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:23 crc kubenswrapper[4830]: I0131 09:05:23.236365 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fe5bd86-a665-4a73-8892-fd12a784463d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:24 crc kubenswrapper[4830]: I0131 09:05:24.046517 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hzk7b" Jan 31 09:05:25 crc kubenswrapper[4830]: E0131 09:05:25.577027 4830 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"audit\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError" Jan 31 09:05:25 crc kubenswrapper[4830]: E0131 09:05:25.847276 4830 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Jan 31 09:05:25 crc kubenswrapper[4830]: E0131 09:05:25.903011 4830 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Jan 31 09:05:26 crc kubenswrapper[4830]: I0131 09:05:26.039216 4830 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:26 crc kubenswrapper[4830]: I0131 09:05:26.058037 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:26 crc kubenswrapper[4830]: I0131 09:05:26.058183 4830 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c75e1c36-b769-464e-96eb-6d9b3c5aa384" Jan 31 09:05:26 crc kubenswrapper[4830]: I0131 09:05:26.058215 4830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c75e1c36-b769-464e-96eb-6d9b3c5aa384" Jan 31 09:05:26 crc kubenswrapper[4830]: I0131 09:05:26.063789 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:26 crc kubenswrapper[4830]: I0131 09:05:26.276454 4830 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="7cb5ff33-6437-4b10-ae63-8414b7e7e21f" Jan 31 09:05:27 crc kubenswrapper[4830]: I0131 09:05:27.063404 4830 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c75e1c36-b769-464e-96eb-6d9b3c5aa384" Jan 31 09:05:27 crc kubenswrapper[4830]: I0131 09:05:27.064764 4830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c75e1c36-b769-464e-96eb-6d9b3c5aa384" Jan 31 09:05:27 crc kubenswrapper[4830]: I0131 09:05:27.066802 4830 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="7cb5ff33-6437-4b10-ae63-8414b7e7e21f" Jan 31 09:05:28 crc kubenswrapper[4830]: I0131 09:05:28.068368 4830 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="c75e1c36-b769-464e-96eb-6d9b3c5aa384" Jan 31 09:05:28 crc kubenswrapper[4830]: I0131 09:05:28.068400 4830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c75e1c36-b769-464e-96eb-6d9b3c5aa384" Jan 31 09:05:28 crc kubenswrapper[4830]: I0131 09:05:28.072513 4830 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="7cb5ff33-6437-4b10-ae63-8414b7e7e21f" Jan 31 09:05:35 crc kubenswrapper[4830]: I0131 09:05:35.106403 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 09:05:35 crc kubenswrapper[4830]: I0131 09:05:35.131501 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 09:05:35 crc kubenswrapper[4830]: I0131 09:05:35.237361 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 31 09:05:35 crc kubenswrapper[4830]: I0131 09:05:35.285172 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 09:05:35 crc kubenswrapper[4830]: I0131 09:05:35.757390 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 09:05:35 crc kubenswrapper[4830]: I0131 09:05:35.928871 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 09:05:36 crc kubenswrapper[4830]: I0131 09:05:36.151075 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 31 09:05:36 crc kubenswrapper[4830]: I0131 09:05:36.266063 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 09:05:36 crc kubenswrapper[4830]: I0131 09:05:36.489419 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 09:05:36 crc kubenswrapper[4830]: I0131 09:05:36.691405 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 09:05:36 crc kubenswrapper[4830]: I0131 09:05:36.901963 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 09:05:37 crc kubenswrapper[4830]: I0131 09:05:37.197244 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 09:05:37 crc kubenswrapper[4830]: I0131 09:05:37.390474 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 31 09:05:37 crc kubenswrapper[4830]: I0131 09:05:37.536706 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 09:05:38 crc kubenswrapper[4830]: I0131 09:05:38.034255 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 09:05:38 crc kubenswrapper[4830]: I0131 09:05:38.073413 4830 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"config" Jan 31 09:05:38 crc kubenswrapper[4830]: I0131 09:05:38.239914 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 09:05:38 crc kubenswrapper[4830]: I0131 09:05:38.258074 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 09:05:38 crc kubenswrapper[4830]: I0131 09:05:38.272641 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 31 09:05:38 crc kubenswrapper[4830]: I0131 09:05:38.362034 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 31 09:05:38 crc kubenswrapper[4830]: I0131 09:05:38.364993 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 09:05:38 crc kubenswrapper[4830]: I0131 09:05:38.420512 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 31 09:05:38 crc kubenswrapper[4830]: I0131 09:05:38.454972 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 09:05:38 crc kubenswrapper[4830]: I0131 09:05:38.497867 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 31 09:05:38 crc kubenswrapper[4830]: I0131 09:05:38.513131 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 31 09:05:38 crc kubenswrapper[4830]: I0131 09:05:38.612861 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 09:05:38 crc kubenswrapper[4830]: I0131 09:05:38.840376 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 31 09:05:38 crc kubenswrapper[4830]: I0131 09:05:38.843986 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 09:05:38 crc kubenswrapper[4830]: I0131 09:05:38.852518 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 09:05:38 crc kubenswrapper[4830]: I0131 09:05:38.890454 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 09:05:38 crc kubenswrapper[4830]: I0131 09:05:38.944766 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 09:05:39 crc kubenswrapper[4830]: I0131 09:05:39.037800 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 31 09:05:39 crc kubenswrapper[4830]: I0131 09:05:39.071784 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 09:05:39 crc kubenswrapper[4830]: I0131 09:05:39.152785 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 09:05:39 crc kubenswrapper[4830]: I0131 09:05:39.220838 4830 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 31 09:05:39 crc kubenswrapper[4830]: I0131 09:05:39.245265 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 09:05:39 crc kubenswrapper[4830]: I0131 09:05:39.434705 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 31 09:05:39 crc kubenswrapper[4830]: I0131 09:05:39.497395 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 09:05:39 crc kubenswrapper[4830]: I0131 09:05:39.503319 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 09:05:39 crc kubenswrapper[4830]: I0131 09:05:39.542361 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 31 09:05:39 crc kubenswrapper[4830]: I0131 09:05:39.596502 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 31 09:05:39 crc kubenswrapper[4830]: I0131 09:05:39.619489 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 09:05:39 crc kubenswrapper[4830]: I0131 09:05:39.676486 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 31 09:05:39 crc kubenswrapper[4830]: I0131 09:05:39.775016 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 09:05:39 crc kubenswrapper[4830]: I0131 09:05:39.924932 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 31 09:05:39 crc kubenswrapper[4830]: I0131 09:05:39.930774 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 09:05:39 crc kubenswrapper[4830]: I0131 09:05:39.979719 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 31 09:05:40 crc kubenswrapper[4830]: I0131 09:05:40.045909 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 31 09:05:40 crc kubenswrapper[4830]: I0131 09:05:40.048076 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 31 09:05:40 crc kubenswrapper[4830]: I0131 09:05:40.180529 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 09:05:40 crc kubenswrapper[4830]: I0131 09:05:40.256891 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 31 09:05:40 crc kubenswrapper[4830]: I0131 09:05:40.259620 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 09:05:40 crc kubenswrapper[4830]: I0131 09:05:40.260099 4830 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 09:05:40 crc kubenswrapper[4830]: I0131 09:05:40.377711 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 09:05:40 crc kubenswrapper[4830]: I0131 09:05:40.434080 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 09:05:40 crc kubenswrapper[4830]: I0131 09:05:40.475287 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 09:05:40 crc kubenswrapper[4830]: I0131 09:05:40.480662 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 09:05:40 crc kubenswrapper[4830]: I0131 09:05:40.526303 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 31 09:05:40 crc kubenswrapper[4830]: I0131 09:05:40.682700 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 31 09:05:40 crc kubenswrapper[4830]: I0131 09:05:40.710847 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 31 09:05:40 crc kubenswrapper[4830]: I0131 09:05:40.775923 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 09:05:40 crc kubenswrapper[4830]: I0131 09:05:40.795317 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 31 09:05:40 crc kubenswrapper[4830]: I0131 09:05:40.803660 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 09:05:40 crc kubenswrapper[4830]: I0131 09:05:40.910036 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 09:05:40 crc kubenswrapper[4830]: I0131 09:05:40.937875 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 09:05:40 crc kubenswrapper[4830]: I0131 09:05:40.940406 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 09:05:40 crc kubenswrapper[4830]: I0131 09:05:40.991811 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 09:05:41 crc kubenswrapper[4830]: I0131 09:05:41.058131 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 09:05:41 crc kubenswrapper[4830]: I0131 09:05:41.102014 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 09:05:41 crc kubenswrapper[4830]: I0131 09:05:41.109963 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 09:05:41 crc kubenswrapper[4830]: I0131 09:05:41.118694 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 09:05:41 crc kubenswrapper[4830]: I0131 09:05:41.154931 4830 reflector.go:368] Caches 
Jan 31 09:05:41 crc kubenswrapper[4830]: I0131 09:05:41.335040 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 31 09:05:41 crc kubenswrapper[4830]: I0131 09:05:41.342032 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 31 09:05:41 crc kubenswrapper[4830]: I0131 09:05:41.342110 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 31 09:05:41 crc kubenswrapper[4830]: I0131 09:05:41.353082 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 31 09:05:41 crc kubenswrapper[4830]: I0131 09:05:41.399569 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 31 09:05:41 crc kubenswrapper[4830]: I0131 09:05:41.503037 4830 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 31 09:05:41 crc kubenswrapper[4830]: I0131 09:05:41.504469 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 31 09:05:41 crc kubenswrapper[4830]: I0131 09:05:41.536260 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 31 09:05:41 crc kubenswrapper[4830]: I0131 09:05:41.565039 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 31 09:05:41 crc kubenswrapper[4830]: I0131 09:05:41.628562 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 31 09:05:41 crc kubenswrapper[4830]: I0131 09:05:41.662240 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 31 09:05:41 crc kubenswrapper[4830]: I0131 09:05:41.835062 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 31 09:05:42 crc kubenswrapper[4830]: I0131 09:05:42.014646 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 31 09:05:42 crc kubenswrapper[4830]: I0131 09:05:42.047079 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 31 09:05:42 crc kubenswrapper[4830]: I0131 09:05:42.057963 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 31 09:05:42 crc kubenswrapper[4830]: I0131 09:05:42.059866 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 31 09:05:42 crc kubenswrapper[4830]: I0131 09:05:42.115350 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 31 09:05:42 crc kubenswrapper[4830]: I0131 09:05:42.241371 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 31 09:05:42 crc kubenswrapper[4830]: I0131 09:05:42.297192 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 31 09:05:42 crc kubenswrapper[4830]: I0131 09:05:42.312973 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 31 09:05:42 crc kubenswrapper[4830]: I0131 09:05:42.427282 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 31 09:05:42 crc kubenswrapper[4830]: I0131 09:05:42.495333 4830 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 31 09:05:42 crc kubenswrapper[4830]: I0131 09:05:42.498795 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 31 09:05:42 crc kubenswrapper[4830]: I0131 09:05:42.553586 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 31 09:05:42 crc kubenswrapper[4830]: I0131 09:05:42.680614 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 31 09:05:42 crc kubenswrapper[4830]: I0131 09:05:42.695383 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 31 09:05:42 crc kubenswrapper[4830]: I0131 09:05:42.709240 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 31 09:05:42 crc kubenswrapper[4830]: I0131 09:05:42.832754 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 31 09:05:42 crc kubenswrapper[4830]: I0131 09:05:42.971514 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 31 09:05:42 crc kubenswrapper[4830]: I0131 09:05:42.977522 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 31 09:05:42 crc kubenswrapper[4830]: I0131 09:05:42.999111 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 31 09:05:43 crc kubenswrapper[4830]: I0131 09:05:43.093613 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 31 09:05:43 crc kubenswrapper[4830]: I0131 09:05:43.152787 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 31 09:05:43 crc kubenswrapper[4830]: I0131 09:05:43.222986 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 31 09:05:43 crc kubenswrapper[4830]: I0131 09:05:43.291855 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 31 09:05:43 crc kubenswrapper[4830]: I0131 09:05:43.405192 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 31 09:05:43 crc kubenswrapper[4830]: I0131 09:05:43.439339 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 31 09:05:43 crc kubenswrapper[4830]: I0131 09:05:43.444421 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 31 09:05:43 crc kubenswrapper[4830]: I0131 09:05:43.453088 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 31 09:05:43 crc kubenswrapper[4830]: I0131 09:05:43.494150 4830 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 31 09:05:43 crc kubenswrapper[4830]: I0131 09:05:43.578753 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 31 09:05:43 crc kubenswrapper[4830]: I0131 09:05:43.697185 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 31 09:05:43 crc kubenswrapper[4830]: I0131 09:05:43.816704 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 31 09:05:43 crc kubenswrapper[4830]: I0131 09:05:43.829493 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 31 09:05:43 crc kubenswrapper[4830]: I0131 09:05:43.850581 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 31 09:05:44 crc kubenswrapper[4830]: I0131 09:05:44.000049 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 31 09:05:44 crc kubenswrapper[4830]: I0131 09:05:44.071623 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 31 09:05:44 crc kubenswrapper[4830]: I0131 09:05:44.085611 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 31 09:05:44 crc kubenswrapper[4830]: I0131 09:05:44.196914 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 31 09:05:44 crc kubenswrapper[4830]: I0131 09:05:44.230518 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 31 09:05:44 crc kubenswrapper[4830]: I0131 09:05:44.292289 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 31 09:05:44 crc kubenswrapper[4830]: I0131 09:05:44.295530 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 31 09:05:44 crc kubenswrapper[4830]: I0131 09:05:44.344651 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 31 09:05:44 crc kubenswrapper[4830]: I0131 09:05:44.396855 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 31 09:05:44 crc kubenswrapper[4830]: I0131 09:05:44.428256 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 31 09:05:44 crc kubenswrapper[4830]: I0131 09:05:44.505847 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 31 09:05:44 crc kubenswrapper[4830]: I0131 09:05:44.539638 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 09:05:44 crc kubenswrapper[4830]: I0131 09:05:44.575137 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 31 09:05:44 crc kubenswrapper[4830]: I0131 09:05:44.597122 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 09:05:44 crc kubenswrapper[4830]: I0131 09:05:44.640923 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 09:05:44 crc kubenswrapper[4830]: I0131 09:05:44.643796 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 09:05:44 crc kubenswrapper[4830]: I0131 09:05:44.751906 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 09:05:44 crc kubenswrapper[4830]: I0131 09:05:44.798978 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 31 09:05:44 crc kubenswrapper[4830]: I0131 09:05:44.820199 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 09:05:44 crc kubenswrapper[4830]: I0131 09:05:44.869147 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 09:05:44 crc kubenswrapper[4830]: I0131 09:05:44.929234 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 09:05:44 crc kubenswrapper[4830]: I0131 09:05:44.996864 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 09:05:45 crc kubenswrapper[4830]: I0131 09:05:45.058389 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 09:05:45 crc kubenswrapper[4830]: I0131 09:05:45.135377 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 31 09:05:45 crc kubenswrapper[4830]: I0131 09:05:45.153287 4830 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 31 09:05:45 crc kubenswrapper[4830]: I0131 09:05:45.154139 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=43.154121663 podStartE2EDuration="43.154121663s" podCreationTimestamp="2026-01-31 09:05:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:05:25.455868679 +0000 UTC m=+269.949231121" watchObservedRunningTime="2026-01-31 09:05:45.154121663 +0000 UTC m=+289.647484105" Jan 31 09:05:45 crc kubenswrapper[4830]: I0131 09:05:45.157665 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hzk7b","openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 09:05:45 crc kubenswrapper[4830]: I0131 09:05:45.158150 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 09:05:45 crc kubenswrapper[4830]: I0131 09:05:45.163647 4830 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:45 crc kubenswrapper[4830]: I0131 09:05:45.173411 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 09:05:45 crc kubenswrapper[4830]: I0131 09:05:45.204504 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 09:05:45 crc kubenswrapper[4830]: I0131 09:05:45.204843 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.204799354 podStartE2EDuration="19.204799354s" podCreationTimestamp="2026-01-31 09:05:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:05:45.1808546 +0000 UTC m=+289.674217042" watchObservedRunningTime="2026-01-31 09:05:45.204799354 +0000 UTC m=+289.698161806" Jan 31 09:05:45 crc kubenswrapper[4830]: I0131 09:05:45.205672 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 31 09:05:45 crc kubenswrapper[4830]: I0131 09:05:45.215355 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 09:05:45 crc kubenswrapper[4830]: I0131 09:05:45.301233 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 09:05:45 crc kubenswrapper[4830]: I0131 09:05:45.343923 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 09:05:45 crc kubenswrapper[4830]: I0131 09:05:45.361505 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 09:05:45 crc kubenswrapper[4830]: I0131 09:05:45.376037 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 31 09:05:45 crc kubenswrapper[4830]: I0131 09:05:45.476547 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 31 09:05:45 crc kubenswrapper[4830]: I0131 09:05:45.492228 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 31 09:05:45 crc kubenswrapper[4830]: I0131 09:05:45.516843 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 09:05:45 crc kubenswrapper[4830]: I0131 09:05:45.540759 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 09:05:45 crc kubenswrapper[4830]: I0131 09:05:45.588607 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 09:05:45 crc kubenswrapper[4830]: I0131 09:05:45.621582 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 09:05:45 crc kubenswrapper[4830]: I0131 09:05:45.689705 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 09:05:45 crc 
kubenswrapper[4830]: I0131 09:05:45.705170 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 31 09:05:45 crc kubenswrapper[4830]: I0131 09:05:45.975910 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 09:05:46 crc kubenswrapper[4830]: I0131 09:05:46.070717 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 31 09:05:46 crc kubenswrapper[4830]: I0131 09:05:46.101536 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 09:05:46 crc kubenswrapper[4830]: I0131 09:05:46.116073 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 09:05:46 crc kubenswrapper[4830]: I0131 09:05:46.260170 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe5bd86-a665-4a73-8892-fd12a784463d" path="/var/lib/kubelet/pods/5fe5bd86-a665-4a73-8892-fd12a784463d/volumes" Jan 31 09:05:46 crc kubenswrapper[4830]: I0131 09:05:46.265151 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 09:05:46 crc kubenswrapper[4830]: I0131 09:05:46.374155 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 09:05:46 crc kubenswrapper[4830]: I0131 09:05:46.443856 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 31 09:05:46 crc kubenswrapper[4830]: I0131 09:05:46.450152 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 09:05:46 crc kubenswrapper[4830]: I0131 09:05:46.463606 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 09:05:46 crc kubenswrapper[4830]: I0131 09:05:46.570309 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 31 09:05:46 crc kubenswrapper[4830]: I0131 09:05:46.766910 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 09:05:46 crc kubenswrapper[4830]: I0131 09:05:46.771183 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 09:05:46 crc kubenswrapper[4830]: I0131 09:05:46.790288 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 09:05:46 crc kubenswrapper[4830]: I0131 09:05:46.901381 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 09:05:46 crc kubenswrapper[4830]: I0131 09:05:46.957694 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 31 09:05:47 crc kubenswrapper[4830]: I0131 09:05:47.076275 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 09:05:47 crc kubenswrapper[4830]: I0131 09:05:47.096940 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 09:05:47 crc kubenswrapper[4830]: I0131 
09:05:47.134613 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 09:05:47 crc kubenswrapper[4830]: I0131 09:05:47.143746 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 09:05:47 crc kubenswrapper[4830]: I0131 09:05:47.166207 4830 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 09:05:47 crc kubenswrapper[4830]: I0131 09:05:47.185836 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 09:05:47 crc kubenswrapper[4830]: I0131 09:05:47.460620 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 31 09:05:47 crc kubenswrapper[4830]: I0131 09:05:47.482222 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 09:05:47 crc kubenswrapper[4830]: I0131 09:05:47.581303 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 09:05:47 crc kubenswrapper[4830]: I0131 09:05:47.622133 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 31 09:05:47 crc kubenswrapper[4830]: I0131 09:05:47.828571 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 09:05:47 crc kubenswrapper[4830]: I0131 09:05:47.956133 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 31 09:05:47 crc kubenswrapper[4830]: I0131 09:05:47.971455 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.069641 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.137216 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.156602 4830 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.157206 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://a6e8e3c4937bd4c2a71b73d9762af599201d7928895e4d6a5ea2d397b1462ab1" gracePeriod=5 Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.290531 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.303476 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.306139 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 09:05:48 
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.329070 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.424898 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8"]
Jan 31 09:05:48 crc kubenswrapper[4830]: E0131 09:05:48.425499 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe5bd86-a665-4a73-8892-fd12a784463d" containerName="oauth-openshift"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.425562 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe5bd86-a665-4a73-8892-fd12a784463d" containerName="oauth-openshift"
Jan 31 09:05:48 crc kubenswrapper[4830]: E0131 09:05:48.425579 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cfd30f0-26ec-4ac4-b315-4d99ec492231" containerName="installer"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.425588 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cfd30f0-26ec-4ac4-b315-4d99ec492231" containerName="installer"
Jan 31 09:05:48 crc kubenswrapper[4830]: E0131 09:05:48.425610 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.425618 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.425946 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.425978 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cfd30f0-26ec-4ac4-b315-4d99ec492231" containerName="installer"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.425997 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fe5bd86-a665-4a73-8892-fd12a784463d" containerName="oauth-openshift"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.426851 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.431599 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.432080 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.432114 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.432145 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.432218 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.432114 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.432071 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.432300 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.432551 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.433622 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.433856 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.434632 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.438703 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.445923 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8"]
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.460207 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.464067 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.475307 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3549201c-94c2-4a29-9e62-b498b4a97ece-audit-dir\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.475513 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g66k\" (UniqueName: \"kubernetes.io/projected/3549201c-94c2-4a29-9e62-b498b4a97ece-kube-api-access-5g66k\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.475616 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.475821 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.475871 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3549201c-94c2-4a29-9e62-b498b4a97ece-audit-policies\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.475910 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.476164 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8"
Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.476252 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-user-template-error\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8"
(UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.476341 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.476426 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-system-session\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.476505 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-system-router-certs\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.476546 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.476571 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-system-service-ca\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.577774 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.577888 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.577923 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-user-template-error\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.577989 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-user-template-login\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.578019 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.578079 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-system-session\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.578138 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-system-router-certs\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.578171 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-system-service-ca\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.578222 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.578259 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3549201c-94c2-4a29-9e62-b498b4a97ece-audit-dir\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.578318 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g66k\" (UniqueName: 
\"kubernetes.io/projected/3549201c-94c2-4a29-9e62-b498b4a97ece-kube-api-access-5g66k\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.578345 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.578403 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.578430 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3549201c-94c2-4a29-9e62-b498b4a97ece-audit-policies\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.579631 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3549201c-94c2-4a29-9e62-b498b4a97ece-audit-policies\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.580208 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3549201c-94c2-4a29-9e62-b498b4a97ece-audit-dir\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.581185 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.582597 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-system-service-ca\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.583515 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.587566 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.587990 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-system-session\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.588108 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.588385 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.588529 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-user-template-login\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.594227 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.596656 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-system-router-certs\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.596760 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.597594 4830 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3549201c-94c2-4a29-9e62-b498b4a97ece-v4-0-config-user-template-error\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.600361 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g66k\" (UniqueName: \"kubernetes.io/projected/3549201c-94c2-4a29-9e62-b498b4a97ece-kube-api-access-5g66k\") pod \"oauth-openshift-6768bc9c9c-5t4z8\" (UID: \"3549201c-94c2-4a29-9e62-b498b4a97ece\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.608649 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.766595 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:48 crc kubenswrapper[4830]: I0131 09:05:48.963407 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 09:05:49 crc kubenswrapper[4830]: I0131 09:05:49.112128 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 09:05:49 crc kubenswrapper[4830]: I0131 09:05:49.112486 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 09:05:49 crc kubenswrapper[4830]: I0131 09:05:49.151948 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 09:05:49 crc kubenswrapper[4830]: I0131 09:05:49.168271 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8"] Jan 31 09:05:49 crc kubenswrapper[4830]: I0131 09:05:49.191301 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" event={"ID":"3549201c-94c2-4a29-9e62-b498b4a97ece","Type":"ContainerStarted","Data":"f2a9e76684f1643d4115cfdb02b18083871117eb278fe213f614c8b9b798b73c"} Jan 31 09:05:49 crc kubenswrapper[4830]: I0131 09:05:49.196349 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 09:05:49 crc kubenswrapper[4830]: I0131 09:05:49.204469 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 09:05:49 crc kubenswrapper[4830]: I0131 09:05:49.237917 4830 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 31 09:05:49 crc kubenswrapper[4830]: I0131 09:05:49.307505 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 09:05:49 crc kubenswrapper[4830]: I0131 09:05:49.354889 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 31 09:05:49 crc kubenswrapper[4830]: I0131 09:05:49.387854 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 31 09:05:49 crc kubenswrapper[4830]: I0131 09:05:49.434786 4830 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 31 09:05:49 crc kubenswrapper[4830]: I0131 09:05:49.481035 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 31 09:05:49 crc kubenswrapper[4830]: I0131 09:05:49.546580 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 09:05:49 crc kubenswrapper[4830]: I0131 09:05:49.559313 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 09:05:49 crc kubenswrapper[4830]: I0131 09:05:49.580566 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 09:05:49 crc kubenswrapper[4830]: I0131 09:05:49.625058 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 09:05:49 crc kubenswrapper[4830]: I0131 09:05:49.783449 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 31 09:05:49 crc kubenswrapper[4830]: I0131 09:05:49.874871 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 09:05:49 crc kubenswrapper[4830]: I0131 09:05:49.905805 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 31 09:05:49 crc kubenswrapper[4830]: I0131 09:05:49.916145 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 09:05:50 crc kubenswrapper[4830]: I0131 09:05:50.083837 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 09:05:50 crc kubenswrapper[4830]: I0131 09:05:50.096247 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 31 09:05:50 crc kubenswrapper[4830]: I0131 09:05:50.198269 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" event={"ID":"3549201c-94c2-4a29-9e62-b498b4a97ece","Type":"ContainerStarted","Data":"3ea2639af37448a2eefa4b679484a5226ded1742fea84b95ff9c683ad7e4fd1e"} Jan 31 09:05:50 crc kubenswrapper[4830]: I0131 09:05:50.198922 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:50 crc kubenswrapper[4830]: I0131 09:05:50.204277 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 09:05:50 crc kubenswrapper[4830]: I0131 09:05:50.239994 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" podStartSLOduration=53.239978024 podStartE2EDuration="53.239978024s" podCreationTimestamp="2026-01-31 09:04:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:05:50.220961109 +0000 UTC m=+294.714323551" watchObservedRunningTime="2026-01-31 09:05:50.239978024 +0000 UTC m=+294.733340466" Jan 31 09:05:50 crc kubenswrapper[4830]: 
Jan 31 09:05:50 crc kubenswrapper[4830]: I0131 09:05:50.266886 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 31 09:05:50 crc kubenswrapper[4830]: I0131 09:05:50.451874 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 31 09:05:50 crc kubenswrapper[4830]: I0131 09:05:50.532392 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 31 09:05:50 crc kubenswrapper[4830]: I0131 09:05:50.726136 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 31 09:05:50 crc kubenswrapper[4830]: I0131 09:05:50.856854 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6488cf5546-fd5sf"]
Jan 31 09:05:50 crc kubenswrapper[4830]: I0131 09:05:50.857666 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf" podUID="88e7057e-29a9-4bba-a588-11ae4def7947" containerName="controller-manager" containerID="cri-o://612de1acd164c5d864167c9e586526fa1bcddbe39ed6bfa04ccb806b246b55ff" gracePeriod=30
Jan 31 09:05:50 crc kubenswrapper[4830]: I0131 09:05:50.895856 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 31 09:05:50 crc kubenswrapper[4830]: I0131 09:05:50.941232 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 31 09:05:50 crc kubenswrapper[4830]: I0131 09:05:50.952880 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz"]
Jan 31 09:05:50 crc kubenswrapper[4830]: I0131 09:05:50.953229 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz" podUID="9d39fccc-7441-408a-b27a-6cd6c53ad159" containerName="route-controller-manager" containerID="cri-o://2733fdc2c5f66907564519a2d957ad6f1ea9852a3a02051f977934ddb9db0d5c" gracePeriod=30
Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.210330 4830 generic.go:334] "Generic (PLEG): container finished" podID="88e7057e-29a9-4bba-a588-11ae4def7947" containerID="612de1acd164c5d864167c9e586526fa1bcddbe39ed6bfa04ccb806b246b55ff" exitCode=0
Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.210393 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf" event={"ID":"88e7057e-29a9-4bba-a588-11ae4def7947","Type":"ContainerDied","Data":"612de1acd164c5d864167c9e586526fa1bcddbe39ed6bfa04ccb806b246b55ff"}
Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.210421 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf" event={"ID":"88e7057e-29a9-4bba-a588-11ae4def7947","Type":"ContainerDied","Data":"6a8352a2ba7c41f247990ea48c57849aaa08c676c7dc773db9da574142446392"}
Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.210433 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a8352a2ba7c41f247990ea48c57849aaa08c676c7dc773db9da574142446392"
Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.214010 4830 generic.go:334] "Generic (PLEG): container finished" podID="9d39fccc-7441-408a-b27a-6cd6c53ad159" containerID="2733fdc2c5f66907564519a2d957ad6f1ea9852a3a02051f977934ddb9db0d5c" exitCode=0
Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.214356 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz" event={"ID":"9d39fccc-7441-408a-b27a-6cd6c53ad159","Type":"ContainerDied","Data":"2733fdc2c5f66907564519a2d957ad6f1ea9852a3a02051f977934ddb9db0d5c"}
Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.237038 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf"
Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.335308 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz"
Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.423493 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9lz9\" (UniqueName: \"kubernetes.io/projected/88e7057e-29a9-4bba-a588-11ae4def7947-kube-api-access-k9lz9\") pod \"88e7057e-29a9-4bba-a588-11ae4def7947\" (UID: \"88e7057e-29a9-4bba-a588-11ae4def7947\") "
Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.423553 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88e7057e-29a9-4bba-a588-11ae4def7947-proxy-ca-bundles\") pod \"88e7057e-29a9-4bba-a588-11ae4def7947\" (UID: \"88e7057e-29a9-4bba-a588-11ae4def7947\") "
Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.423591 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88e7057e-29a9-4bba-a588-11ae4def7947-client-ca\") pod \"88e7057e-29a9-4bba-a588-11ae4def7947\" (UID: \"88e7057e-29a9-4bba-a588-11ae4def7947\") "
Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.423632 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88e7057e-29a9-4bba-a588-11ae4def7947-serving-cert\") pod \"88e7057e-29a9-4bba-a588-11ae4def7947\" (UID: \"88e7057e-29a9-4bba-a588-11ae4def7947\") "
Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.424640 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88e7057e-29a9-4bba-a588-11ae4def7947-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "88e7057e-29a9-4bba-a588-11ae4def7947" (UID: "88e7057e-29a9-4bba-a588-11ae4def7947"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.424854 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88e7057e-29a9-4bba-a588-11ae4def7947-config" (OuterVolumeSpecName: "config") pod "88e7057e-29a9-4bba-a588-11ae4def7947" (UID: "88e7057e-29a9-4bba-a588-11ae4def7947"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.426925 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88e7057e-29a9-4bba-a588-11ae4def7947-client-ca" (OuterVolumeSpecName: "client-ca") pod "88e7057e-29a9-4bba-a588-11ae4def7947" (UID: "88e7057e-29a9-4bba-a588-11ae4def7947"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.426986 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88e7057e-29a9-4bba-a588-11ae4def7947-config\") pod \"88e7057e-29a9-4bba-a588-11ae4def7947\" (UID: \"88e7057e-29a9-4bba-a588-11ae4def7947\") "
Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.427011 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72zm8\" (UniqueName: \"kubernetes.io/projected/9d39fccc-7441-408a-b27a-6cd6c53ad159-kube-api-access-72zm8\") pod \"9d39fccc-7441-408a-b27a-6cd6c53ad159\" (UID: \"9d39fccc-7441-408a-b27a-6cd6c53ad159\") "
Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.427037 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d39fccc-7441-408a-b27a-6cd6c53ad159-config\") pod \"9d39fccc-7441-408a-b27a-6cd6c53ad159\" (UID: \"9d39fccc-7441-408a-b27a-6cd6c53ad159\") "
Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.427069 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d39fccc-7441-408a-b27a-6cd6c53ad159-serving-cert\") pod \"9d39fccc-7441-408a-b27a-6cd6c53ad159\" (UID: \"9d39fccc-7441-408a-b27a-6cd6c53ad159\") "
Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.427906 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d39fccc-7441-408a-b27a-6cd6c53ad159-config" (OuterVolumeSpecName: "config") pod "9d39fccc-7441-408a-b27a-6cd6c53ad159" (UID: "9d39fccc-7441-408a-b27a-6cd6c53ad159"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.428138 4830 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88e7057e-29a9-4bba-a588-11ae4def7947-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.428152 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88e7057e-29a9-4bba-a588-11ae4def7947-client-ca\") on node \"crc\" DevicePath \"\""
Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.428162 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88e7057e-29a9-4bba-a588-11ae4def7947-config\") on node \"crc\" DevicePath \"\""
Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.428173 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d39fccc-7441-408a-b27a-6cd6c53ad159-config\") on node \"crc\" DevicePath \"\""
Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.430674 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d39fccc-7441-408a-b27a-6cd6c53ad159-kube-api-access-72zm8" (OuterVolumeSpecName: "kube-api-access-72zm8") pod "9d39fccc-7441-408a-b27a-6cd6c53ad159" (UID: "9d39fccc-7441-408a-b27a-6cd6c53ad159"). InnerVolumeSpecName "kube-api-access-72zm8".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.430692 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d39fccc-7441-408a-b27a-6cd6c53ad159-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d39fccc-7441-408a-b27a-6cd6c53ad159" (UID: "9d39fccc-7441-408a-b27a-6cd6c53ad159"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.431220 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88e7057e-29a9-4bba-a588-11ae4def7947-kube-api-access-k9lz9" (OuterVolumeSpecName: "kube-api-access-k9lz9") pod "88e7057e-29a9-4bba-a588-11ae4def7947" (UID: "88e7057e-29a9-4bba-a588-11ae4def7947"). InnerVolumeSpecName "kube-api-access-k9lz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.432028 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88e7057e-29a9-4bba-a588-11ae4def7947-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "88e7057e-29a9-4bba-a588-11ae4def7947" (UID: "88e7057e-29a9-4bba-a588-11ae4def7947"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.529428 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d39fccc-7441-408a-b27a-6cd6c53ad159-client-ca\") pod \"9d39fccc-7441-408a-b27a-6cd6c53ad159\" (UID: \"9d39fccc-7441-408a-b27a-6cd6c53ad159\") " Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.529675 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9lz9\" (UniqueName: \"kubernetes.io/projected/88e7057e-29a9-4bba-a588-11ae4def7947-kube-api-access-k9lz9\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.529688 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88e7057e-29a9-4bba-a588-11ae4def7947-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.529698 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72zm8\" (UniqueName: \"kubernetes.io/projected/9d39fccc-7441-408a-b27a-6cd6c53ad159-kube-api-access-72zm8\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.529707 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d39fccc-7441-408a-b27a-6cd6c53ad159-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.530057 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d39fccc-7441-408a-b27a-6cd6c53ad159-client-ca" (OuterVolumeSpecName: "client-ca") pod "9d39fccc-7441-408a-b27a-6cd6c53ad159" (UID: "9d39fccc-7441-408a-b27a-6cd6c53ad159"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.633061 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d39fccc-7441-408a-b27a-6cd6c53ad159-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.736899 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 31 09:05:51 crc kubenswrapper[4830]: I0131 09:05:51.985015 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.221075 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.221163 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz" event={"ID":"9d39fccc-7441-408a-b27a-6cd6c53ad159","Type":"ContainerDied","Data":"4ed6982978591e8e3dff355398a1faecf73c8d569bf51e5a15f9185cdfdccd82"} Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.221219 4830 scope.go:117] "RemoveContainer" containerID="2733fdc2c5f66907564519a2d957ad6f1ea9852a3a02051f977934ddb9db0d5c" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.222082 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6488cf5546-fd5sf" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.270048 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6488cf5546-fd5sf"] Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.272039 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6488cf5546-fd5sf"] Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.284761 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz"] Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.288665 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9dc64dc66-ldnmz"] Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.738594 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw"] Jan 31 09:05:52 crc kubenswrapper[4830]: E0131 09:05:52.739060 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88e7057e-29a9-4bba-a588-11ae4def7947" containerName="controller-manager" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.739083 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="88e7057e-29a9-4bba-a588-11ae4def7947" containerName="controller-manager" Jan 31 09:05:52 crc kubenswrapper[4830]: E0131 09:05:52.739102 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d39fccc-7441-408a-b27a-6cd6c53ad159" containerName="route-controller-manager" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.739111 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d39fccc-7441-408a-b27a-6cd6c53ad159" containerName="route-controller-manager" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 
09:05:52.739225 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="88e7057e-29a9-4bba-a588-11ae4def7947" containerName="controller-manager" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.739241 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d39fccc-7441-408a-b27a-6cd6c53ad159" containerName="route-controller-manager" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.740659 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.747980 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wbj7\" (UniqueName: \"kubernetes.io/projected/a6505f5e-7a8b-468d-b9d4-12c454986269-kube-api-access-8wbj7\") pod \"route-controller-manager-76cb5849cd-wwspw\" (UID: \"a6505f5e-7a8b-468d-b9d4-12c454986269\") " pod="openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.748044 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6505f5e-7a8b-468d-b9d4-12c454986269-client-ca\") pod \"route-controller-manager-76cb5849cd-wwspw\" (UID: \"a6505f5e-7a8b-468d-b9d4-12c454986269\") " pod="openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.748108 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6505f5e-7a8b-468d-b9d4-12c454986269-serving-cert\") pod \"route-controller-manager-76cb5849cd-wwspw\" (UID: \"a6505f5e-7a8b-468d-b9d4-12c454986269\") " pod="openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.748147 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6505f5e-7a8b-468d-b9d4-12c454986269-config\") pod \"route-controller-manager-76cb5849cd-wwspw\" (UID: \"a6505f5e-7a8b-468d-b9d4-12c454986269\") " pod="openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.750384 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.750673 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.750521 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.752775 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.753262 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.755474 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" 
Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.763431 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d755d4f49-prmkf"] Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.765025 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.767803 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.768248 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.768622 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.769041 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.769230 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.771299 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.776905 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.777643 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d755d4f49-prmkf"] Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.786882 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw"] Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.850464 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6505f5e-7a8b-468d-b9d4-12c454986269-client-ca\") pod \"route-controller-manager-76cb5849cd-wwspw\" (UID: \"a6505f5e-7a8b-468d-b9d4-12c454986269\") " pod="openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.850533 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15dfc3dc-f225-4121-b796-6a7430f3626a-config\") pod \"controller-manager-d755d4f49-prmkf\" (UID: \"15dfc3dc-f225-4121-b796-6a7430f3626a\") " pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.850557 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15dfc3dc-f225-4121-b796-6a7430f3626a-serving-cert\") pod \"controller-manager-d755d4f49-prmkf\" (UID: \"15dfc3dc-f225-4121-b796-6a7430f3626a\") " pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.850801 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-tgcg5\" (UniqueName: \"kubernetes.io/projected/15dfc3dc-f225-4121-b796-6a7430f3626a-kube-api-access-tgcg5\") pod \"controller-manager-d755d4f49-prmkf\" (UID: \"15dfc3dc-f225-4121-b796-6a7430f3626a\") " pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.850947 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15dfc3dc-f225-4121-b796-6a7430f3626a-proxy-ca-bundles\") pod \"controller-manager-d755d4f49-prmkf\" (UID: \"15dfc3dc-f225-4121-b796-6a7430f3626a\") " pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.850986 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6505f5e-7a8b-468d-b9d4-12c454986269-serving-cert\") pod \"route-controller-manager-76cb5849cd-wwspw\" (UID: \"a6505f5e-7a8b-468d-b9d4-12c454986269\") " pod="openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.851052 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15dfc3dc-f225-4121-b796-6a7430f3626a-client-ca\") pod \"controller-manager-d755d4f49-prmkf\" (UID: \"15dfc3dc-f225-4121-b796-6a7430f3626a\") " pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.851080 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6505f5e-7a8b-468d-b9d4-12c454986269-config\") pod \"route-controller-manager-76cb5849cd-wwspw\" (UID: \"a6505f5e-7a8b-468d-b9d4-12c454986269\") " pod="openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.851112 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wbj7\" (UniqueName: \"kubernetes.io/projected/a6505f5e-7a8b-468d-b9d4-12c454986269-kube-api-access-8wbj7\") pod \"route-controller-manager-76cb5849cd-wwspw\" (UID: \"a6505f5e-7a8b-468d-b9d4-12c454986269\") " pod="openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.851748 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6505f5e-7a8b-468d-b9d4-12c454986269-client-ca\") pod \"route-controller-manager-76cb5849cd-wwspw\" (UID: \"a6505f5e-7a8b-468d-b9d4-12c454986269\") " pod="openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.852618 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6505f5e-7a8b-468d-b9d4-12c454986269-config\") pod \"route-controller-manager-76cb5849cd-wwspw\" (UID: \"a6505f5e-7a8b-468d-b9d4-12c454986269\") " pod="openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.856262 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6505f5e-7a8b-468d-b9d4-12c454986269-serving-cert\") 
pod \"route-controller-manager-76cb5849cd-wwspw\" (UID: \"a6505f5e-7a8b-468d-b9d4-12c454986269\") " pod="openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.870585 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wbj7\" (UniqueName: \"kubernetes.io/projected/a6505f5e-7a8b-468d-b9d4-12c454986269-kube-api-access-8wbj7\") pod \"route-controller-manager-76cb5849cd-wwspw\" (UID: \"a6505f5e-7a8b-468d-b9d4-12c454986269\") " pod="openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.952327 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15dfc3dc-f225-4121-b796-6a7430f3626a-proxy-ca-bundles\") pod \"controller-manager-d755d4f49-prmkf\" (UID: \"15dfc3dc-f225-4121-b796-6a7430f3626a\") " pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.952781 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15dfc3dc-f225-4121-b796-6a7430f3626a-client-ca\") pod \"controller-manager-d755d4f49-prmkf\" (UID: \"15dfc3dc-f225-4121-b796-6a7430f3626a\") " pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.952904 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15dfc3dc-f225-4121-b796-6a7430f3626a-config\") pod \"controller-manager-d755d4f49-prmkf\" (UID: \"15dfc3dc-f225-4121-b796-6a7430f3626a\") " pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.952982 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15dfc3dc-f225-4121-b796-6a7430f3626a-serving-cert\") pod \"controller-manager-d755d4f49-prmkf\" (UID: \"15dfc3dc-f225-4121-b796-6a7430f3626a\") " pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.953150 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgcg5\" (UniqueName: \"kubernetes.io/projected/15dfc3dc-f225-4121-b796-6a7430f3626a-kube-api-access-tgcg5\") pod \"controller-manager-d755d4f49-prmkf\" (UID: \"15dfc3dc-f225-4121-b796-6a7430f3626a\") " pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.953646 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15dfc3dc-f225-4121-b796-6a7430f3626a-proxy-ca-bundles\") pod \"controller-manager-d755d4f49-prmkf\" (UID: \"15dfc3dc-f225-4121-b796-6a7430f3626a\") " pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.954075 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15dfc3dc-f225-4121-b796-6a7430f3626a-client-ca\") pod \"controller-manager-d755d4f49-prmkf\" (UID: \"15dfc3dc-f225-4121-b796-6a7430f3626a\") " pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" Jan 31 09:05:52 crc 
kubenswrapper[4830]: I0131 09:05:52.955472 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15dfc3dc-f225-4121-b796-6a7430f3626a-config\") pod \"controller-manager-d755d4f49-prmkf\" (UID: \"15dfc3dc-f225-4121-b796-6a7430f3626a\") " pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.958062 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15dfc3dc-f225-4121-b796-6a7430f3626a-serving-cert\") pod \"controller-manager-d755d4f49-prmkf\" (UID: \"15dfc3dc-f225-4121-b796-6a7430f3626a\") " pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" Jan 31 09:05:52 crc kubenswrapper[4830]: I0131 09:05:52.972925 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgcg5\" (UniqueName: \"kubernetes.io/projected/15dfc3dc-f225-4121-b796-6a7430f3626a-kube-api-access-tgcg5\") pod \"controller-manager-d755d4f49-prmkf\" (UID: \"15dfc3dc-f225-4121-b796-6a7430f3626a\") " pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" Jan 31 09:05:53 crc kubenswrapper[4830]: I0131 09:05:53.066137 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw" Jan 31 09:05:53 crc kubenswrapper[4830]: I0131 09:05:53.092310 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" Jan 31 09:05:53 crc kubenswrapper[4830]: I0131 09:05:53.231263 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 09:05:53 crc kubenswrapper[4830]: I0131 09:05:53.231692 4830 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="a6e8e3c4937bd4c2a71b73d9762af599201d7928895e4d6a5ea2d397b1462ab1" exitCode=137 Jan 31 09:05:53 crc kubenswrapper[4830]: I0131 09:05:53.372260 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw"] Jan 31 09:05:53 crc kubenswrapper[4830]: I0131 09:05:53.437135 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d755d4f49-prmkf"] Jan 31 09:05:53 crc kubenswrapper[4830]: I0131 09:05:53.717536 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 09:05:53 crc kubenswrapper[4830]: I0131 09:05:53.717612 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:53 crc kubenswrapper[4830]: I0131 09:05:53.875107 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 09:05:53 crc kubenswrapper[4830]: I0131 09:05:53.875192 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 09:05:53 crc kubenswrapper[4830]: I0131 09:05:53.875258 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 09:05:53 crc kubenswrapper[4830]: I0131 09:05:53.875339 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:05:53 crc kubenswrapper[4830]: I0131 09:05:53.875367 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 09:05:53 crc kubenswrapper[4830]: I0131 09:05:53.875353 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:05:53 crc kubenswrapper[4830]: I0131 09:05:53.875247 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:05:53 crc kubenswrapper[4830]: I0131 09:05:53.875410 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:05:53 crc kubenswrapper[4830]: I0131 09:05:53.875390 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 09:05:53 crc kubenswrapper[4830]: I0131 09:05:53.875896 4830 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:53 crc kubenswrapper[4830]: I0131 09:05:53.875911 4830 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:53 crc kubenswrapper[4830]: I0131 09:05:53.875920 4830 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:53 crc kubenswrapper[4830]: I0131 09:05:53.875929 4830 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:53 crc kubenswrapper[4830]: I0131 09:05:53.885065 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:05:53 crc kubenswrapper[4830]: I0131 09:05:53.978658 4830 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:54 crc kubenswrapper[4830]: I0131 09:05:54.240250 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw" event={"ID":"a6505f5e-7a8b-468d-b9d4-12c454986269","Type":"ContainerStarted","Data":"c5eddd260c58dee88c32247f25893f670ecac670a98bd3f2f264cc5f2617e726"} Jan 31 09:05:54 crc kubenswrapper[4830]: I0131 09:05:54.240299 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw" event={"ID":"a6505f5e-7a8b-468d-b9d4-12c454986269","Type":"ContainerStarted","Data":"5d1bc7706c85f7a6a8bc5e34c6dadc729db42fa6899f048048c37dfc9a6f6f2d"} Jan 31 09:05:54 crc kubenswrapper[4830]: I0131 09:05:54.240519 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw" Jan 31 09:05:54 crc kubenswrapper[4830]: I0131 09:05:54.242286 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 09:05:54 crc kubenswrapper[4830]: I0131 09:05:54.242398 4830 scope.go:117] "RemoveContainer" containerID="a6e8e3c4937bd4c2a71b73d9762af599201d7928895e4d6a5ea2d397b1462ab1" Jan 31 09:05:54 crc kubenswrapper[4830]: I0131 09:05:54.242410 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:54 crc kubenswrapper[4830]: I0131 09:05:54.244062 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" event={"ID":"15dfc3dc-f225-4121-b796-6a7430f3626a","Type":"ContainerStarted","Data":"b5ac31c9cc9ab4136132df4ad9c973f1248f14bc3b02442fa983a5dbffb8e01a"} Jan 31 09:05:54 crc kubenswrapper[4830]: I0131 09:05:54.244099 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" event={"ID":"15dfc3dc-f225-4121-b796-6a7430f3626a","Type":"ContainerStarted","Data":"fd1a99dce7ef7576830457dcf851dcde42a15c1616a86380de98c27ea50c9933"} Jan 31 09:05:54 crc kubenswrapper[4830]: I0131 09:05:54.244265 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" Jan 31 09:05:54 crc kubenswrapper[4830]: I0131 09:05:54.248595 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" Jan 31 09:05:54 crc kubenswrapper[4830]: I0131 09:05:54.248684 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw" Jan 31 09:05:54 crc kubenswrapper[4830]: I0131 09:05:54.261354 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw" podStartSLOduration=4.261331657 podStartE2EDuration="4.261331657s" podCreationTimestamp="2026-01-31 09:05:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:05:54.260828052 +0000 UTC m=+298.754190494" watchObservedRunningTime="2026-01-31 09:05:54.261331657 +0000 UTC m=+298.754694109" Jan 31 09:05:54 crc kubenswrapper[4830]: I0131 09:05:54.263419 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88e7057e-29a9-4bba-a588-11ae4def7947" path="/var/lib/kubelet/pods/88e7057e-29a9-4bba-a588-11ae4def7947/volumes" Jan 31 09:05:54 crc kubenswrapper[4830]: I0131 09:05:54.264297 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d39fccc-7441-408a-b27a-6cd6c53ad159" path="/var/lib/kubelet/pods/9d39fccc-7441-408a-b27a-6cd6c53ad159/volumes" Jan 31 09:05:54 crc kubenswrapper[4830]: I0131 09:05:54.264850 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 31 09:05:54 crc kubenswrapper[4830]: I0131 09:05:54.265099 4830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 31 09:05:54 crc kubenswrapper[4830]: I0131 09:05:54.280294 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 09:05:54 crc kubenswrapper[4830]: I0131 09:05:54.280338 4830 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2413885d-9118-43f7-b254-418496688600" Jan 31 09:05:54 crc kubenswrapper[4830]: I0131 09:05:54.287521 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 09:05:54 crc kubenswrapper[4830]: I0131 09:05:54.288142 4830 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2413885d-9118-43f7-b254-418496688600" Jan 31 09:05:54 crc kubenswrapper[4830]: I0131 09:05:54.288881 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" podStartSLOduration=4.288866449 podStartE2EDuration="4.288866449s" podCreationTimestamp="2026-01-31 09:05:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:05:54.283304821 +0000 UTC m=+298.776667263" watchObservedRunningTime="2026-01-31 09:05:54.288866449 +0000 UTC m=+298.782228911" Jan 31 09:05:56 crc kubenswrapper[4830]: I0131 09:05:56.003105 4830 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 31 09:06:02 crc kubenswrapper[4830]: I0131 09:06:02.364897 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 09:06:04 crc kubenswrapper[4830]: I0131 09:06:04.458659 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 09:06:06 crc kubenswrapper[4830]: I0131 09:06:06.332540 4830 generic.go:334] "Generic (PLEG): container finished" podID="36a7a51a-2662-4f3b-aa1d-d674cf676b9d" containerID="b90565efd448c3a205961e4d926bf471147c2a338b39eef1471085e2888f47a0" exitCode=0 Jan 31 09:06:06 crc kubenswrapper[4830]: I0131 09:06:06.332621 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f" event={"ID":"36a7a51a-2662-4f3b-aa1d-d674cf676b9d","Type":"ContainerDied","Data":"b90565efd448c3a205961e4d926bf471147c2a338b39eef1471085e2888f47a0"} Jan 31 09:06:06 crc kubenswrapper[4830]: I0131 09:06:06.335040 4830 scope.go:117] "RemoveContainer" containerID="b90565efd448c3a205961e4d926bf471147c2a338b39eef1471085e2888f47a0" Jan 31 09:06:07 crc kubenswrapper[4830]: I0131 09:06:07.344241 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f" event={"ID":"36a7a51a-2662-4f3b-aa1d-d674cf676b9d","Type":"ContainerStarted","Data":"4e48e977c1cc79f53accb7684a4ac58353f9e37b15ae4f8702c5995bf57261d8"} Jan 31 09:06:07 crc kubenswrapper[4830]: I0131 09:06:07.344699 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f" Jan 31 09:06:07 crc kubenswrapper[4830]: I0131 09:06:07.348466 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f" Jan 31 09:06:08 crc kubenswrapper[4830]: I0131 09:06:08.214298 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 09:06:10 crc kubenswrapper[4830]: I0131 09:06:10.845031 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d755d4f49-prmkf"] Jan 31 09:06:10 crc kubenswrapper[4830]: I0131 09:06:10.845266 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" 
podUID="15dfc3dc-f225-4121-b796-6a7430f3626a" containerName="controller-manager" containerID="cri-o://b5ac31c9cc9ab4136132df4ad9c973f1248f14bc3b02442fa983a5dbffb8e01a" gracePeriod=30 Jan 31 09:06:10 crc kubenswrapper[4830]: I0131 09:06:10.879418 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw"] Jan 31 09:06:10 crc kubenswrapper[4830]: I0131 09:06:10.879639 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw" podUID="a6505f5e-7a8b-468d-b9d4-12c454986269" containerName="route-controller-manager" containerID="cri-o://c5eddd260c58dee88c32247f25893f670ecac670a98bd3f2f264cc5f2617e726" gracePeriod=30 Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.368638 4830 generic.go:334] "Generic (PLEG): container finished" podID="15dfc3dc-f225-4121-b796-6a7430f3626a" containerID="b5ac31c9cc9ab4136132df4ad9c973f1248f14bc3b02442fa983a5dbffb8e01a" exitCode=0 Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.368882 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" event={"ID":"15dfc3dc-f225-4121-b796-6a7430f3626a","Type":"ContainerDied","Data":"b5ac31c9cc9ab4136132df4ad9c973f1248f14bc3b02442fa983a5dbffb8e01a"} Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.371479 4830 generic.go:334] "Generic (PLEG): container finished" podID="a6505f5e-7a8b-468d-b9d4-12c454986269" containerID="c5eddd260c58dee88c32247f25893f670ecac670a98bd3f2f264cc5f2617e726" exitCode=0 Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.371511 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw" event={"ID":"a6505f5e-7a8b-468d-b9d4-12c454986269","Type":"ContainerDied","Data":"c5eddd260c58dee88c32247f25893f670ecac670a98bd3f2f264cc5f2617e726"} Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.463468 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw" Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.506043 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.547286 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgcg5\" (UniqueName: \"kubernetes.io/projected/15dfc3dc-f225-4121-b796-6a7430f3626a-kube-api-access-tgcg5\") pod \"15dfc3dc-f225-4121-b796-6a7430f3626a\" (UID: \"15dfc3dc-f225-4121-b796-6a7430f3626a\") " Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.547335 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15dfc3dc-f225-4121-b796-6a7430f3626a-config\") pod \"15dfc3dc-f225-4121-b796-6a7430f3626a\" (UID: \"15dfc3dc-f225-4121-b796-6a7430f3626a\") " Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.547402 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6505f5e-7a8b-468d-b9d4-12c454986269-config\") pod \"a6505f5e-7a8b-468d-b9d4-12c454986269\" (UID: \"a6505f5e-7a8b-468d-b9d4-12c454986269\") " Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.547436 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6505f5e-7a8b-468d-b9d4-12c454986269-serving-cert\") pod \"a6505f5e-7a8b-468d-b9d4-12c454986269\" (UID: \"a6505f5e-7a8b-468d-b9d4-12c454986269\") " Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.547477 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wbj7\" (UniqueName: \"kubernetes.io/projected/a6505f5e-7a8b-468d-b9d4-12c454986269-kube-api-access-8wbj7\") pod \"a6505f5e-7a8b-468d-b9d4-12c454986269\" (UID: \"a6505f5e-7a8b-468d-b9d4-12c454986269\") " Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.547554 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15dfc3dc-f225-4121-b796-6a7430f3626a-proxy-ca-bundles\") pod \"15dfc3dc-f225-4121-b796-6a7430f3626a\" (UID: \"15dfc3dc-f225-4121-b796-6a7430f3626a\") " Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.547616 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15dfc3dc-f225-4121-b796-6a7430f3626a-client-ca\") pod \"15dfc3dc-f225-4121-b796-6a7430f3626a\" (UID: \"15dfc3dc-f225-4121-b796-6a7430f3626a\") " Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.547637 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6505f5e-7a8b-468d-b9d4-12c454986269-client-ca\") pod \"a6505f5e-7a8b-468d-b9d4-12c454986269\" (UID: \"a6505f5e-7a8b-468d-b9d4-12c454986269\") " Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.547669 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15dfc3dc-f225-4121-b796-6a7430f3626a-serving-cert\") pod \"15dfc3dc-f225-4121-b796-6a7430f3626a\" (UID: \"15dfc3dc-f225-4121-b796-6a7430f3626a\") " Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.549930 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6505f5e-7a8b-468d-b9d4-12c454986269-client-ca" (OuterVolumeSpecName: "client-ca") pod "a6505f5e-7a8b-468d-b9d4-12c454986269" (UID: 
"a6505f5e-7a8b-468d-b9d4-12c454986269"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.550013 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15dfc3dc-f225-4121-b796-6a7430f3626a-client-ca" (OuterVolumeSpecName: "client-ca") pod "15dfc3dc-f225-4121-b796-6a7430f3626a" (UID: "15dfc3dc-f225-4121-b796-6a7430f3626a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.550149 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15dfc3dc-f225-4121-b796-6a7430f3626a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "15dfc3dc-f225-4121-b796-6a7430f3626a" (UID: "15dfc3dc-f225-4121-b796-6a7430f3626a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.550359 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15dfc3dc-f225-4121-b796-6a7430f3626a-config" (OuterVolumeSpecName: "config") pod "15dfc3dc-f225-4121-b796-6a7430f3626a" (UID: "15dfc3dc-f225-4121-b796-6a7430f3626a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.550614 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6505f5e-7a8b-468d-b9d4-12c454986269-config" (OuterVolumeSpecName: "config") pod "a6505f5e-7a8b-468d-b9d4-12c454986269" (UID: "a6505f5e-7a8b-468d-b9d4-12c454986269"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.554242 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6505f5e-7a8b-468d-b9d4-12c454986269-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a6505f5e-7a8b-468d-b9d4-12c454986269" (UID: "a6505f5e-7a8b-468d-b9d4-12c454986269"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.554285 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6505f5e-7a8b-468d-b9d4-12c454986269-kube-api-access-8wbj7" (OuterVolumeSpecName: "kube-api-access-8wbj7") pod "a6505f5e-7a8b-468d-b9d4-12c454986269" (UID: "a6505f5e-7a8b-468d-b9d4-12c454986269"). InnerVolumeSpecName "kube-api-access-8wbj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.554456 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15dfc3dc-f225-4121-b796-6a7430f3626a-kube-api-access-tgcg5" (OuterVolumeSpecName: "kube-api-access-tgcg5") pod "15dfc3dc-f225-4121-b796-6a7430f3626a" (UID: "15dfc3dc-f225-4121-b796-6a7430f3626a"). InnerVolumeSpecName "kube-api-access-tgcg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.554602 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15dfc3dc-f225-4121-b796-6a7430f3626a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "15dfc3dc-f225-4121-b796-6a7430f3626a" (UID: "15dfc3dc-f225-4121-b796-6a7430f3626a"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.649215 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6505f5e-7a8b-468d-b9d4-12c454986269-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.649257 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wbj7\" (UniqueName: \"kubernetes.io/projected/a6505f5e-7a8b-468d-b9d4-12c454986269-kube-api-access-8wbj7\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.649272 4830 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15dfc3dc-f225-4121-b796-6a7430f3626a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.649284 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15dfc3dc-f225-4121-b796-6a7430f3626a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.649295 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6505f5e-7a8b-468d-b9d4-12c454986269-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.649305 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15dfc3dc-f225-4121-b796-6a7430f3626a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.649314 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15dfc3dc-f225-4121-b796-6a7430f3626a-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.649322 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgcg5\" (UniqueName: \"kubernetes.io/projected/15dfc3dc-f225-4121-b796-6a7430f3626a-kube-api-access-tgcg5\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:11 crc kubenswrapper[4830]: I0131 09:06:11.649330 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6505f5e-7a8b-468d-b9d4-12c454986269-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.079194 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.379403 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.379390 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d755d4f49-prmkf" event={"ID":"15dfc3dc-f225-4121-b796-6a7430f3626a","Type":"ContainerDied","Data":"fd1a99dce7ef7576830457dcf851dcde42a15c1616a86380de98c27ea50c9933"} Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.379497 4830 scope.go:117] "RemoveContainer" containerID="b5ac31c9cc9ab4136132df4ad9c973f1248f14bc3b02442fa983a5dbffb8e01a" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.382492 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw" event={"ID":"a6505f5e-7a8b-468d-b9d4-12c454986269","Type":"ContainerDied","Data":"5d1bc7706c85f7a6a8bc5e34c6dadc729db42fa6899f048048c37dfc9a6f6f2d"} Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.382567 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.409300 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d755d4f49-prmkf"] Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.420457 4830 scope.go:117] "RemoveContainer" containerID="c5eddd260c58dee88c32247f25893f670ecac670a98bd3f2f264cc5f2617e726" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.425702 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d755d4f49-prmkf"] Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.435398 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw"] Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.444585 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76cb5849cd-wwspw"] Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.756331 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc"] Jan 31 09:06:12 crc kubenswrapper[4830]: E0131 09:06:12.756764 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6505f5e-7a8b-468d-b9d4-12c454986269" containerName="route-controller-manager" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.756790 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6505f5e-7a8b-468d-b9d4-12c454986269" containerName="route-controller-manager" Jan 31 09:06:12 crc kubenswrapper[4830]: E0131 09:06:12.756805 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15dfc3dc-f225-4121-b796-6a7430f3626a" containerName="controller-manager" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.756814 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="15dfc3dc-f225-4121-b796-6a7430f3626a" containerName="controller-manager" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.756974 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6505f5e-7a8b-468d-b9d4-12c454986269" containerName="route-controller-manager" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.757004 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="15dfc3dc-f225-4121-b796-6a7430f3626a" 
containerName="controller-manager" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.757567 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.760393 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.760446 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.760465 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.760975 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.761213 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.764792 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.769165 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d46c67fd4-tk7b8"] Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.770055 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.777553 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.777635 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.778006 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.778012 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.778457 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.778600 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.781099 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc"] Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.785399 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.785845 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d46c67fd4-tk7b8"] Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.867057 4830 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4lfk\" (UniqueName: \"kubernetes.io/projected/aa7f3d4f-c421-471e-b86d-1b98226cfc03-kube-api-access-g4lfk\") pod \"controller-manager-d46c67fd4-tk7b8\" (UID: \"aa7f3d4f-c421-471e-b86d-1b98226cfc03\") " pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.867132 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aa7f3d4f-c421-471e-b86d-1b98226cfc03-client-ca\") pod \"controller-manager-d46c67fd4-tk7b8\" (UID: \"aa7f3d4f-c421-471e-b86d-1b98226cfc03\") " pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.867159 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa7f3d4f-c421-471e-b86d-1b98226cfc03-serving-cert\") pod \"controller-manager-d46c67fd4-tk7b8\" (UID: \"aa7f3d4f-c421-471e-b86d-1b98226cfc03\") " pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.867182 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aa7f3d4f-c421-471e-b86d-1b98226cfc03-proxy-ca-bundles\") pod \"controller-manager-d46c67fd4-tk7b8\" (UID: \"aa7f3d4f-c421-471e-b86d-1b98226cfc03\") " pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.867208 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bff0c5d-b14b-4164-9294-b6e330e28a0f-config\") pod \"route-controller-manager-555476556f-cxnzc\" (UID: \"6bff0c5d-b14b-4164-9294-b6e330e28a0f\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.867414 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6bff0c5d-b14b-4164-9294-b6e330e28a0f-client-ca\") pod \"route-controller-manager-555476556f-cxnzc\" (UID: \"6bff0c5d-b14b-4164-9294-b6e330e28a0f\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.867536 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncjsw\" (UniqueName: \"kubernetes.io/projected/6bff0c5d-b14b-4164-9294-b6e330e28a0f-kube-api-access-ncjsw\") pod \"route-controller-manager-555476556f-cxnzc\" (UID: \"6bff0c5d-b14b-4164-9294-b6e330e28a0f\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.867611 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bff0c5d-b14b-4164-9294-b6e330e28a0f-serving-cert\") pod \"route-controller-manager-555476556f-cxnzc\" (UID: \"6bff0c5d-b14b-4164-9294-b6e330e28a0f\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 
09:06:12.867662 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa7f3d4f-c421-471e-b86d-1b98226cfc03-config\") pod \"controller-manager-d46c67fd4-tk7b8\" (UID: \"aa7f3d4f-c421-471e-b86d-1b98226cfc03\") " pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.969521 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncjsw\" (UniqueName: \"kubernetes.io/projected/6bff0c5d-b14b-4164-9294-b6e330e28a0f-kube-api-access-ncjsw\") pod \"route-controller-manager-555476556f-cxnzc\" (UID: \"6bff0c5d-b14b-4164-9294-b6e330e28a0f\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.969632 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bff0c5d-b14b-4164-9294-b6e330e28a0f-serving-cert\") pod \"route-controller-manager-555476556f-cxnzc\" (UID: \"6bff0c5d-b14b-4164-9294-b6e330e28a0f\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.969683 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa7f3d4f-c421-471e-b86d-1b98226cfc03-config\") pod \"controller-manager-d46c67fd4-tk7b8\" (UID: \"aa7f3d4f-c421-471e-b86d-1b98226cfc03\") " pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.969741 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4lfk\" (UniqueName: \"kubernetes.io/projected/aa7f3d4f-c421-471e-b86d-1b98226cfc03-kube-api-access-g4lfk\") pod \"controller-manager-d46c67fd4-tk7b8\" (UID: \"aa7f3d4f-c421-471e-b86d-1b98226cfc03\") " pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.969779 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aa7f3d4f-c421-471e-b86d-1b98226cfc03-client-ca\") pod \"controller-manager-d46c67fd4-tk7b8\" (UID: \"aa7f3d4f-c421-471e-b86d-1b98226cfc03\") " pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.971429 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa7f3d4f-c421-471e-b86d-1b98226cfc03-config\") pod \"controller-manager-d46c67fd4-tk7b8\" (UID: \"aa7f3d4f-c421-471e-b86d-1b98226cfc03\") " pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.972028 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa7f3d4f-c421-471e-b86d-1b98226cfc03-serving-cert\") pod \"controller-manager-d46c67fd4-tk7b8\" (UID: \"aa7f3d4f-c421-471e-b86d-1b98226cfc03\") " pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.972642 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/aa7f3d4f-c421-471e-b86d-1b98226cfc03-proxy-ca-bundles\") pod \"controller-manager-d46c67fd4-tk7b8\" (UID: \"aa7f3d4f-c421-471e-b86d-1b98226cfc03\") " pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.972701 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bff0c5d-b14b-4164-9294-b6e330e28a0f-config\") pod \"route-controller-manager-555476556f-cxnzc\" (UID: \"6bff0c5d-b14b-4164-9294-b6e330e28a0f\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.973013 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6bff0c5d-b14b-4164-9294-b6e330e28a0f-client-ca\") pod \"route-controller-manager-555476556f-cxnzc\" (UID: \"6bff0c5d-b14b-4164-9294-b6e330e28a0f\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.975453 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bff0c5d-b14b-4164-9294-b6e330e28a0f-config\") pod \"route-controller-manager-555476556f-cxnzc\" (UID: \"6bff0c5d-b14b-4164-9294-b6e330e28a0f\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.975562 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa7f3d4f-c421-471e-b86d-1b98226cfc03-serving-cert\") pod \"controller-manager-d46c67fd4-tk7b8\" (UID: \"aa7f3d4f-c421-471e-b86d-1b98226cfc03\") " pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.975872 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bff0c5d-b14b-4164-9294-b6e330e28a0f-serving-cert\") pod \"route-controller-manager-555476556f-cxnzc\" (UID: \"6bff0c5d-b14b-4164-9294-b6e330e28a0f\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.977332 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6bff0c5d-b14b-4164-9294-b6e330e28a0f-client-ca\") pod \"route-controller-manager-555476556f-cxnzc\" (UID: \"6bff0c5d-b14b-4164-9294-b6e330e28a0f\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.978051 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aa7f3d4f-c421-471e-b86d-1b98226cfc03-proxy-ca-bundles\") pod \"controller-manager-d46c67fd4-tk7b8\" (UID: \"aa7f3d4f-c421-471e-b86d-1b98226cfc03\") " pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.978352 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aa7f3d4f-c421-471e-b86d-1b98226cfc03-client-ca\") pod \"controller-manager-d46c67fd4-tk7b8\" (UID: \"aa7f3d4f-c421-471e-b86d-1b98226cfc03\") " 
pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.985701 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4lfk\" (UniqueName: \"kubernetes.io/projected/aa7f3d4f-c421-471e-b86d-1b98226cfc03-kube-api-access-g4lfk\") pod \"controller-manager-d46c67fd4-tk7b8\" (UID: \"aa7f3d4f-c421-471e-b86d-1b98226cfc03\") " pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" Jan 31 09:06:12 crc kubenswrapper[4830]: I0131 09:06:12.987630 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncjsw\" (UniqueName: \"kubernetes.io/projected/6bff0c5d-b14b-4164-9294-b6e330e28a0f-kube-api-access-ncjsw\") pod \"route-controller-manager-555476556f-cxnzc\" (UID: \"6bff0c5d-b14b-4164-9294-b6e330e28a0f\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc" Jan 31 09:06:13 crc kubenswrapper[4830]: I0131 09:06:13.078897 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc" Jan 31 09:06:13 crc kubenswrapper[4830]: I0131 09:06:13.092765 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" Jan 31 09:06:13 crc kubenswrapper[4830]: I0131 09:06:13.499601 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d46c67fd4-tk7b8"] Jan 31 09:06:13 crc kubenswrapper[4830]: I0131 09:06:13.536935 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc"] Jan 31 09:06:14 crc kubenswrapper[4830]: I0131 09:06:14.184746 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 09:06:14 crc kubenswrapper[4830]: I0131 09:06:14.258339 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15dfc3dc-f225-4121-b796-6a7430f3626a" path="/var/lib/kubelet/pods/15dfc3dc-f225-4121-b796-6a7430f3626a/volumes" Jan 31 09:06:14 crc kubenswrapper[4830]: I0131 09:06:14.259222 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6505f5e-7a8b-468d-b9d4-12c454986269" path="/var/lib/kubelet/pods/a6505f5e-7a8b-468d-b9d4-12c454986269/volumes" Jan 31 09:06:14 crc kubenswrapper[4830]: I0131 09:06:14.406555 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" event={"ID":"aa7f3d4f-c421-471e-b86d-1b98226cfc03","Type":"ContainerStarted","Data":"92bc80cc750a699a5ab799bef8b49f6ae30f5b223472f92e6ed38d147b802112"} Jan 31 09:06:14 crc kubenswrapper[4830]: I0131 09:06:14.406690 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" event={"ID":"aa7f3d4f-c421-471e-b86d-1b98226cfc03","Type":"ContainerStarted","Data":"25d8a47237dc4888d7fd37981d828f9a07f8957b3b3331cda30b5ab88326993a"} Jan 31 09:06:14 crc kubenswrapper[4830]: I0131 09:06:14.407209 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" Jan 31 09:06:14 crc kubenswrapper[4830]: I0131 09:06:14.411660 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc" 
event={"ID":"6bff0c5d-b14b-4164-9294-b6e330e28a0f","Type":"ContainerStarted","Data":"2aaad1075d4203b6462c793f0dd9db70f329b9cb44f9243888c9b32a319683c9"} Jan 31 09:06:14 crc kubenswrapper[4830]: I0131 09:06:14.411709 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc" event={"ID":"6bff0c5d-b14b-4164-9294-b6e330e28a0f","Type":"ContainerStarted","Data":"4005e52eeefcb4158b1598a9b2854da25d5368ba4317d91ab3bfc34cb1e3dc0c"} Jan 31 09:06:14 crc kubenswrapper[4830]: I0131 09:06:14.423685 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" Jan 31 09:06:14 crc kubenswrapper[4830]: I0131 09:06:14.428907 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" podStartSLOduration=4.428889993 podStartE2EDuration="4.428889993s" podCreationTimestamp="2026-01-31 09:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:06:14.428338306 +0000 UTC m=+318.921700748" watchObservedRunningTime="2026-01-31 09:06:14.428889993 +0000 UTC m=+318.922252425" Jan 31 09:06:14 crc kubenswrapper[4830]: I0131 09:06:14.448625 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc" podStartSLOduration=4.448608009 podStartE2EDuration="4.448608009s" podCreationTimestamp="2026-01-31 09:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:06:14.4473441 +0000 UTC m=+318.940706542" watchObservedRunningTime="2026-01-31 09:06:14.448608009 +0000 UTC m=+318.941970451" Jan 31 09:06:14 crc kubenswrapper[4830]: I0131 09:06:14.815991 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 31 09:06:15 crc kubenswrapper[4830]: I0131 09:06:15.417102 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc" Jan 31 09:06:15 crc kubenswrapper[4830]: I0131 09:06:15.423749 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc" Jan 31 09:06:20 crc kubenswrapper[4830]: I0131 09:06:20.527536 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 09:06:23 crc kubenswrapper[4830]: I0131 09:06:23.220133 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 31 09:06:23 crc kubenswrapper[4830]: I0131 09:06:23.899274 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 31 09:06:24 crc kubenswrapper[4830]: I0131 09:06:24.483450 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 09:06:28 crc kubenswrapper[4830]: I0131 09:06:28.526104 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 09:06:31 crc kubenswrapper[4830]: I0131 09:06:31.812058 4830 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"kube-root-ca.crt" Jan 31 09:06:32 crc kubenswrapper[4830]: I0131 09:06:32.587626 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 09:06:44 crc kubenswrapper[4830]: I0131 09:06:44.353601 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:06:44 crc kubenswrapper[4830]: I0131 09:06:44.354612 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:06:50 crc kubenswrapper[4830]: I0131 09:06:50.858448 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d46c67fd4-tk7b8"] Jan 31 09:06:50 crc kubenswrapper[4830]: I0131 09:06:50.859823 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" podUID="aa7f3d4f-c421-471e-b86d-1b98226cfc03" containerName="controller-manager" containerID="cri-o://92bc80cc750a699a5ab799bef8b49f6ae30f5b223472f92e6ed38d147b802112" gracePeriod=30 Jan 31 09:06:51 crc kubenswrapper[4830]: I0131 09:06:51.316260 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" Jan 31 09:06:51 crc kubenswrapper[4830]: I0131 09:06:51.360154 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa7f3d4f-c421-471e-b86d-1b98226cfc03-config\") pod \"aa7f3d4f-c421-471e-b86d-1b98226cfc03\" (UID: \"aa7f3d4f-c421-471e-b86d-1b98226cfc03\") " Jan 31 09:06:51 crc kubenswrapper[4830]: I0131 09:06:51.360232 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa7f3d4f-c421-471e-b86d-1b98226cfc03-serving-cert\") pod \"aa7f3d4f-c421-471e-b86d-1b98226cfc03\" (UID: \"aa7f3d4f-c421-471e-b86d-1b98226cfc03\") " Jan 31 09:06:51 crc kubenswrapper[4830]: I0131 09:06:51.360296 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4lfk\" (UniqueName: \"kubernetes.io/projected/aa7f3d4f-c421-471e-b86d-1b98226cfc03-kube-api-access-g4lfk\") pod \"aa7f3d4f-c421-471e-b86d-1b98226cfc03\" (UID: \"aa7f3d4f-c421-471e-b86d-1b98226cfc03\") " Jan 31 09:06:51 crc kubenswrapper[4830]: I0131 09:06:51.360317 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aa7f3d4f-c421-471e-b86d-1b98226cfc03-proxy-ca-bundles\") pod \"aa7f3d4f-c421-471e-b86d-1b98226cfc03\" (UID: \"aa7f3d4f-c421-471e-b86d-1b98226cfc03\") " Jan 31 09:06:51 crc kubenswrapper[4830]: I0131 09:06:51.360404 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aa7f3d4f-c421-471e-b86d-1b98226cfc03-client-ca\") pod \"aa7f3d4f-c421-471e-b86d-1b98226cfc03\" (UID: \"aa7f3d4f-c421-471e-b86d-1b98226cfc03\") " Jan 31 09:06:51 crc 
kubenswrapper[4830]: I0131 09:06:51.361327 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa7f3d4f-c421-471e-b86d-1b98226cfc03-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "aa7f3d4f-c421-471e-b86d-1b98226cfc03" (UID: "aa7f3d4f-c421-471e-b86d-1b98226cfc03"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:06:51 crc kubenswrapper[4830]: I0131 09:06:51.361458 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa7f3d4f-c421-471e-b86d-1b98226cfc03-client-ca" (OuterVolumeSpecName: "client-ca") pod "aa7f3d4f-c421-471e-b86d-1b98226cfc03" (UID: "aa7f3d4f-c421-471e-b86d-1b98226cfc03"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:06:51 crc kubenswrapper[4830]: I0131 09:06:51.361490 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa7f3d4f-c421-471e-b86d-1b98226cfc03-config" (OuterVolumeSpecName: "config") pod "aa7f3d4f-c421-471e-b86d-1b98226cfc03" (UID: "aa7f3d4f-c421-471e-b86d-1b98226cfc03"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:06:51 crc kubenswrapper[4830]: I0131 09:06:51.361517 4830 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aa7f3d4f-c421-471e-b86d-1b98226cfc03-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:51 crc kubenswrapper[4830]: I0131 09:06:51.366854 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa7f3d4f-c421-471e-b86d-1b98226cfc03-kube-api-access-g4lfk" (OuterVolumeSpecName: "kube-api-access-g4lfk") pod "aa7f3d4f-c421-471e-b86d-1b98226cfc03" (UID: "aa7f3d4f-c421-471e-b86d-1b98226cfc03"). InnerVolumeSpecName "kube-api-access-g4lfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:06:51 crc kubenswrapper[4830]: I0131 09:06:51.367019 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa7f3d4f-c421-471e-b86d-1b98226cfc03-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "aa7f3d4f-c421-471e-b86d-1b98226cfc03" (UID: "aa7f3d4f-c421-471e-b86d-1b98226cfc03"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:06:51 crc kubenswrapper[4830]: I0131 09:06:51.463216 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aa7f3d4f-c421-471e-b86d-1b98226cfc03-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:51 crc kubenswrapper[4830]: I0131 09:06:51.463268 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa7f3d4f-c421-471e-b86d-1b98226cfc03-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:51 crc kubenswrapper[4830]: I0131 09:06:51.463278 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa7f3d4f-c421-471e-b86d-1b98226cfc03-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:51 crc kubenswrapper[4830]: I0131 09:06:51.463290 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4lfk\" (UniqueName: \"kubernetes.io/projected/aa7f3d4f-c421-471e-b86d-1b98226cfc03-kube-api-access-g4lfk\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:51 crc kubenswrapper[4830]: I0131 09:06:51.695308 4830 generic.go:334] "Generic (PLEG): container finished" podID="aa7f3d4f-c421-471e-b86d-1b98226cfc03" containerID="92bc80cc750a699a5ab799bef8b49f6ae30f5b223472f92e6ed38d147b802112" exitCode=0 Jan 31 09:06:51 crc kubenswrapper[4830]: I0131 09:06:51.695376 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" event={"ID":"aa7f3d4f-c421-471e-b86d-1b98226cfc03","Type":"ContainerDied","Data":"92bc80cc750a699a5ab799bef8b49f6ae30f5b223472f92e6ed38d147b802112"} Jan 31 09:06:51 crc kubenswrapper[4830]: I0131 09:06:51.695417 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" event={"ID":"aa7f3d4f-c421-471e-b86d-1b98226cfc03","Type":"ContainerDied","Data":"25d8a47237dc4888d7fd37981d828f9a07f8957b3b3331cda30b5ab88326993a"} Jan 31 09:06:51 crc kubenswrapper[4830]: I0131 09:06:51.695441 4830 scope.go:117] "RemoveContainer" containerID="92bc80cc750a699a5ab799bef8b49f6ae30f5b223472f92e6ed38d147b802112" Jan 31 09:06:51 crc kubenswrapper[4830]: I0131 09:06:51.695584 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d46c67fd4-tk7b8" Jan 31 09:06:51 crc kubenswrapper[4830]: I0131 09:06:51.728117 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d46c67fd4-tk7b8"] Jan 31 09:06:51 crc kubenswrapper[4830]: I0131 09:06:51.734295 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d46c67fd4-tk7b8"] Jan 31 09:06:51 crc kubenswrapper[4830]: I0131 09:06:51.738453 4830 scope.go:117] "RemoveContainer" containerID="92bc80cc750a699a5ab799bef8b49f6ae30f5b223472f92e6ed38d147b802112" Jan 31 09:06:51 crc kubenswrapper[4830]: E0131 09:06:51.739281 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92bc80cc750a699a5ab799bef8b49f6ae30f5b223472f92e6ed38d147b802112\": container with ID starting with 92bc80cc750a699a5ab799bef8b49f6ae30f5b223472f92e6ed38d147b802112 not found: ID does not exist" containerID="92bc80cc750a699a5ab799bef8b49f6ae30f5b223472f92e6ed38d147b802112" Jan 31 09:06:51 crc kubenswrapper[4830]: I0131 09:06:51.739328 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92bc80cc750a699a5ab799bef8b49f6ae30f5b223472f92e6ed38d147b802112"} err="failed to get container status \"92bc80cc750a699a5ab799bef8b49f6ae30f5b223472f92e6ed38d147b802112\": rpc error: code = NotFound desc = could not find container \"92bc80cc750a699a5ab799bef8b49f6ae30f5b223472f92e6ed38d147b802112\": container with ID starting with 92bc80cc750a699a5ab799bef8b49f6ae30f5b223472f92e6ed38d147b802112 not found: ID does not exist" Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.259395 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa7f3d4f-c421-471e-b86d-1b98226cfc03" path="/var/lib/kubelet/pods/aa7f3d4f-c421-471e-b86d-1b98226cfc03/volumes" Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.780031 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7896c76d86-c5cgs"] Jan 31 09:06:52 crc kubenswrapper[4830]: E0131 09:06:52.780319 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa7f3d4f-c421-471e-b86d-1b98226cfc03" containerName="controller-manager" Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.780335 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa7f3d4f-c421-471e-b86d-1b98226cfc03" containerName="controller-manager" Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.780469 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa7f3d4f-c421-471e-b86d-1b98226cfc03" containerName="controller-manager" Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.780915 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.783384 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.783456 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.783995 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.785233 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.785842 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.786036 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.797453 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.798558 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7896c76d86-c5cgs"] Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.883393 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d85aeaa6-c7da-420f-b8d9-2d0983e2ab36-proxy-ca-bundles\") pod \"controller-manager-7896c76d86-c5cgs\" (UID: \"d85aeaa6-c7da-420f-b8d9-2d0983e2ab36\") " pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.883457 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d85aeaa6-c7da-420f-b8d9-2d0983e2ab36-config\") pod \"controller-manager-7896c76d86-c5cgs\" (UID: \"d85aeaa6-c7da-420f-b8d9-2d0983e2ab36\") " pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.883507 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d85aeaa6-c7da-420f-b8d9-2d0983e2ab36-serving-cert\") pod \"controller-manager-7896c76d86-c5cgs\" (UID: \"d85aeaa6-c7da-420f-b8d9-2d0983e2ab36\") " pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.883533 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p59zx\" (UniqueName: \"kubernetes.io/projected/d85aeaa6-c7da-420f-b8d9-2d0983e2ab36-kube-api-access-p59zx\") pod \"controller-manager-7896c76d86-c5cgs\" (UID: \"d85aeaa6-c7da-420f-b8d9-2d0983e2ab36\") " pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.883558 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d85aeaa6-c7da-420f-b8d9-2d0983e2ab36-client-ca\") pod \"controller-manager-7896c76d86-c5cgs\" (UID: \"d85aeaa6-c7da-420f-b8d9-2d0983e2ab36\") " pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.984871 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d85aeaa6-c7da-420f-b8d9-2d0983e2ab36-client-ca\") pod \"controller-manager-7896c76d86-c5cgs\" (UID: \"d85aeaa6-c7da-420f-b8d9-2d0983e2ab36\") " pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.984973 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d85aeaa6-c7da-420f-b8d9-2d0983e2ab36-proxy-ca-bundles\") pod \"controller-manager-7896c76d86-c5cgs\" (UID: \"d85aeaa6-c7da-420f-b8d9-2d0983e2ab36\") " pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.985014 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d85aeaa6-c7da-420f-b8d9-2d0983e2ab36-config\") pod \"controller-manager-7896c76d86-c5cgs\" (UID: \"d85aeaa6-c7da-420f-b8d9-2d0983e2ab36\") " pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.985079 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d85aeaa6-c7da-420f-b8d9-2d0983e2ab36-serving-cert\") pod \"controller-manager-7896c76d86-c5cgs\" (UID: \"d85aeaa6-c7da-420f-b8d9-2d0983e2ab36\") " pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.985123 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p59zx\" (UniqueName: \"kubernetes.io/projected/d85aeaa6-c7da-420f-b8d9-2d0983e2ab36-kube-api-access-p59zx\") pod \"controller-manager-7896c76d86-c5cgs\" (UID: \"d85aeaa6-c7da-420f-b8d9-2d0983e2ab36\") " pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.986493 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d85aeaa6-c7da-420f-b8d9-2d0983e2ab36-client-ca\") pod \"controller-manager-7896c76d86-c5cgs\" (UID: \"d85aeaa6-c7da-420f-b8d9-2d0983e2ab36\") " pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.987196 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d85aeaa6-c7da-420f-b8d9-2d0983e2ab36-config\") pod \"controller-manager-7896c76d86-c5cgs\" (UID: \"d85aeaa6-c7da-420f-b8d9-2d0983e2ab36\") " pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.987518 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d85aeaa6-c7da-420f-b8d9-2d0983e2ab36-proxy-ca-bundles\") pod \"controller-manager-7896c76d86-c5cgs\" (UID: \"d85aeaa6-c7da-420f-b8d9-2d0983e2ab36\") " 
pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" Jan 31 09:06:52 crc kubenswrapper[4830]: I0131 09:06:52.992745 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d85aeaa6-c7da-420f-b8d9-2d0983e2ab36-serving-cert\") pod \"controller-manager-7896c76d86-c5cgs\" (UID: \"d85aeaa6-c7da-420f-b8d9-2d0983e2ab36\") " pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" Jan 31 09:06:53 crc kubenswrapper[4830]: I0131 09:06:53.003125 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p59zx\" (UniqueName: \"kubernetes.io/projected/d85aeaa6-c7da-420f-b8d9-2d0983e2ab36-kube-api-access-p59zx\") pod \"controller-manager-7896c76d86-c5cgs\" (UID: \"d85aeaa6-c7da-420f-b8d9-2d0983e2ab36\") " pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" Jan 31 09:06:53 crc kubenswrapper[4830]: I0131 09:06:53.097896 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" Jan 31 09:06:53 crc kubenswrapper[4830]: I0131 09:06:53.511579 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7896c76d86-c5cgs"] Jan 31 09:06:53 crc kubenswrapper[4830]: I0131 09:06:53.717042 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" event={"ID":"d85aeaa6-c7da-420f-b8d9-2d0983e2ab36","Type":"ContainerStarted","Data":"f32553c7b295719f56496bf853a26b7c14fef0d6e4969159c919977278f26085"} Jan 31 09:06:53 crc kubenswrapper[4830]: I0131 09:06:53.717579 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" event={"ID":"d85aeaa6-c7da-420f-b8d9-2d0983e2ab36","Type":"ContainerStarted","Data":"e44de28cf7e484fc51d02bc661515df89149a56593b5f013868800e4d3cf63ce"} Jan 31 09:06:53 crc kubenswrapper[4830]: I0131 09:06:53.717599 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" Jan 31 09:06:53 crc kubenswrapper[4830]: I0131 09:06:53.719531 4830 patch_prober.go:28] interesting pod/controller-manager-7896c76d86-c5cgs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" start-of-body= Jan 31 09:06:53 crc kubenswrapper[4830]: I0131 09:06:53.719585 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" podUID="d85aeaa6-c7da-420f-b8d9-2d0983e2ab36" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" Jan 31 09:06:53 crc kubenswrapper[4830]: I0131 09:06:53.738282 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" podStartSLOduration=3.738256066 podStartE2EDuration="3.738256066s" podCreationTimestamp="2026-01-31 09:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:06:53.73512833 +0000 UTC m=+358.228490772" watchObservedRunningTime="2026-01-31 09:06:53.738256066 +0000 UTC m=+358.231618508" Jan 31 
Jan 31 09:06:54 crc kubenswrapper[4830]: I0131 09:06:54.725616 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" Jan 31 09:07:10 crc kubenswrapper[4830]: I0131 09:07:10.884233 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc"] Jan 31 09:07:10 crc kubenswrapper[4830]: I0131 09:07:10.885315 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc" podUID="6bff0c5d-b14b-4164-9294-b6e330e28a0f" containerName="route-controller-manager" containerID="cri-o://2aaad1075d4203b6462c793f0dd9db70f329b9cb44f9243888c9b32a319683c9" gracePeriod=30 Jan 31 09:07:11 crc kubenswrapper[4830]: I0131 09:07:11.070472 4830 generic.go:334] "Generic (PLEG): container finished" podID="6bff0c5d-b14b-4164-9294-b6e330e28a0f" containerID="2aaad1075d4203b6462c793f0dd9db70f329b9cb44f9243888c9b32a319683c9" exitCode=0 Jan 31 09:07:11 crc kubenswrapper[4830]: I0131 09:07:11.070523 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc" event={"ID":"6bff0c5d-b14b-4164-9294-b6e330e28a0f","Type":"ContainerDied","Data":"2aaad1075d4203b6462c793f0dd9db70f329b9cb44f9243888c9b32a319683c9"} Jan 31 09:07:11 crc kubenswrapper[4830]: I0131 09:07:11.403680 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc" Jan 31 09:07:11 crc kubenswrapper[4830]: I0131 09:07:11.603179 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6bff0c5d-b14b-4164-9294-b6e330e28a0f-client-ca\") pod \"6bff0c5d-b14b-4164-9294-b6e330e28a0f\" (UID: \"6bff0c5d-b14b-4164-9294-b6e330e28a0f\") " Jan 31 09:07:11 crc kubenswrapper[4830]: I0131 09:07:11.603245 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bff0c5d-b14b-4164-9294-b6e330e28a0f-serving-cert\") pod \"6bff0c5d-b14b-4164-9294-b6e330e28a0f\" (UID: \"6bff0c5d-b14b-4164-9294-b6e330e28a0f\") " Jan 31 09:07:11 crc kubenswrapper[4830]: I0131 09:07:11.604128 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bff0c5d-b14b-4164-9294-b6e330e28a0f-config\") pod \"6bff0c5d-b14b-4164-9294-b6e330e28a0f\" (UID: \"6bff0c5d-b14b-4164-9294-b6e330e28a0f\") " Jan 31 09:07:11 crc kubenswrapper[4830]: I0131 09:07:11.604184 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncjsw\" (UniqueName: \"kubernetes.io/projected/6bff0c5d-b14b-4164-9294-b6e330e28a0f-kube-api-access-ncjsw\") pod \"6bff0c5d-b14b-4164-9294-b6e330e28a0f\" (UID: \"6bff0c5d-b14b-4164-9294-b6e330e28a0f\") " Jan 31 09:07:11 crc kubenswrapper[4830]: I0131 09:07:11.604334 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bff0c5d-b14b-4164-9294-b6e330e28a0f-client-ca" (OuterVolumeSpecName: "client-ca") pod "6bff0c5d-b14b-4164-9294-b6e330e28a0f" (UID: "6bff0c5d-b14b-4164-9294-b6e330e28a0f"). InnerVolumeSpecName "client-ca".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:07:11 crc kubenswrapper[4830]: I0131 09:07:11.604639 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6bff0c5d-b14b-4164-9294-b6e330e28a0f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:11 crc kubenswrapper[4830]: I0131 09:07:11.604924 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bff0c5d-b14b-4164-9294-b6e330e28a0f-config" (OuterVolumeSpecName: "config") pod "6bff0c5d-b14b-4164-9294-b6e330e28a0f" (UID: "6bff0c5d-b14b-4164-9294-b6e330e28a0f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:07:11 crc kubenswrapper[4830]: I0131 09:07:11.611415 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bff0c5d-b14b-4164-9294-b6e330e28a0f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6bff0c5d-b14b-4164-9294-b6e330e28a0f" (UID: "6bff0c5d-b14b-4164-9294-b6e330e28a0f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:07:11 crc kubenswrapper[4830]: I0131 09:07:11.623465 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bff0c5d-b14b-4164-9294-b6e330e28a0f-kube-api-access-ncjsw" (OuterVolumeSpecName: "kube-api-access-ncjsw") pod "6bff0c5d-b14b-4164-9294-b6e330e28a0f" (UID: "6bff0c5d-b14b-4164-9294-b6e330e28a0f"). InnerVolumeSpecName "kube-api-access-ncjsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:07:11 crc kubenswrapper[4830]: I0131 09:07:11.706250 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bff0c5d-b14b-4164-9294-b6e330e28a0f-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:11 crc kubenswrapper[4830]: I0131 09:07:11.706301 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncjsw\" (UniqueName: \"kubernetes.io/projected/6bff0c5d-b14b-4164-9294-b6e330e28a0f-kube-api-access-ncjsw\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:11 crc kubenswrapper[4830]: I0131 09:07:11.706332 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bff0c5d-b14b-4164-9294-b6e330e28a0f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:12 crc kubenswrapper[4830]: I0131 09:07:12.059448 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w"] Jan 31 09:07:12 crc kubenswrapper[4830]: E0131 09:07:12.062420 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bff0c5d-b14b-4164-9294-b6e330e28a0f" containerName="route-controller-manager" Jan 31 09:07:12 crc kubenswrapper[4830]: I0131 09:07:12.062990 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bff0c5d-b14b-4164-9294-b6e330e28a0f" containerName="route-controller-manager" Jan 31 09:07:12 crc kubenswrapper[4830]: I0131 09:07:12.063206 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bff0c5d-b14b-4164-9294-b6e330e28a0f" containerName="route-controller-manager" Jan 31 09:07:12 crc kubenswrapper[4830]: I0131 09:07:12.063905 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" Jan 31 09:07:12 crc kubenswrapper[4830]: I0131 09:07:12.066883 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w"] Jan 31 09:07:12 crc kubenswrapper[4830]: I0131 09:07:12.078043 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc" event={"ID":"6bff0c5d-b14b-4164-9294-b6e330e28a0f","Type":"ContainerDied","Data":"4005e52eeefcb4158b1598a9b2854da25d5368ba4317d91ab3bfc34cb1e3dc0c"} Jan 31 09:07:12 crc kubenswrapper[4830]: I0131 09:07:12.078112 4830 scope.go:117] "RemoveContainer" containerID="2aaad1075d4203b6462c793f0dd9db70f329b9cb44f9243888c9b32a319683c9" Jan 31 09:07:12 crc kubenswrapper[4830]: I0131 09:07:12.078245 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc" Jan 31 09:07:12 crc kubenswrapper[4830]: I0131 09:07:12.114412 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e3fd47c-6860-47d0-98ce-3654da25fdce-config\") pod \"route-controller-manager-bcf89fb66-fxq4w\" (UID: \"9e3fd47c-6860-47d0-98ce-3654da25fdce\") " pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" Jan 31 09:07:12 crc kubenswrapper[4830]: I0131 09:07:12.114500 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vg9x\" (UniqueName: \"kubernetes.io/projected/9e3fd47c-6860-47d0-98ce-3654da25fdce-kube-api-access-9vg9x\") pod \"route-controller-manager-bcf89fb66-fxq4w\" (UID: \"9e3fd47c-6860-47d0-98ce-3654da25fdce\") " pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" Jan 31 09:07:12 crc kubenswrapper[4830]: I0131 09:07:12.114551 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e3fd47c-6860-47d0-98ce-3654da25fdce-client-ca\") pod \"route-controller-manager-bcf89fb66-fxq4w\" (UID: \"9e3fd47c-6860-47d0-98ce-3654da25fdce\") " pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" Jan 31 09:07:12 crc kubenswrapper[4830]: I0131 09:07:12.114606 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e3fd47c-6860-47d0-98ce-3654da25fdce-serving-cert\") pod \"route-controller-manager-bcf89fb66-fxq4w\" (UID: \"9e3fd47c-6860-47d0-98ce-3654da25fdce\") " pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" Jan 31 09:07:12 crc kubenswrapper[4830]: I0131 09:07:12.125045 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc"] Jan 31 09:07:12 crc kubenswrapper[4830]: I0131 09:07:12.129026 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-555476556f-cxnzc"] Jan 31 09:07:12 crc kubenswrapper[4830]: I0131 09:07:12.216230 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e3fd47c-6860-47d0-98ce-3654da25fdce-client-ca\") pod 
\"route-controller-manager-bcf89fb66-fxq4w\" (UID: \"9e3fd47c-6860-47d0-98ce-3654da25fdce\") " pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" Jan 31 09:07:12 crc kubenswrapper[4830]: I0131 09:07:12.216358 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e3fd47c-6860-47d0-98ce-3654da25fdce-serving-cert\") pod \"route-controller-manager-bcf89fb66-fxq4w\" (UID: \"9e3fd47c-6860-47d0-98ce-3654da25fdce\") " pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" Jan 31 09:07:12 crc kubenswrapper[4830]: I0131 09:07:12.216490 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e3fd47c-6860-47d0-98ce-3654da25fdce-config\") pod \"route-controller-manager-bcf89fb66-fxq4w\" (UID: \"9e3fd47c-6860-47d0-98ce-3654da25fdce\") " pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" Jan 31 09:07:12 crc kubenswrapper[4830]: I0131 09:07:12.218419 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e3fd47c-6860-47d0-98ce-3654da25fdce-client-ca\") pod \"route-controller-manager-bcf89fb66-fxq4w\" (UID: \"9e3fd47c-6860-47d0-98ce-3654da25fdce\") " pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" Jan 31 09:07:12 crc kubenswrapper[4830]: I0131 09:07:12.218950 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e3fd47c-6860-47d0-98ce-3654da25fdce-config\") pod \"route-controller-manager-bcf89fb66-fxq4w\" (UID: \"9e3fd47c-6860-47d0-98ce-3654da25fdce\") " pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" Jan 31 09:07:12 crc kubenswrapper[4830]: I0131 09:07:12.219119 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vg9x\" (UniqueName: \"kubernetes.io/projected/9e3fd47c-6860-47d0-98ce-3654da25fdce-kube-api-access-9vg9x\") pod \"route-controller-manager-bcf89fb66-fxq4w\" (UID: \"9e3fd47c-6860-47d0-98ce-3654da25fdce\") " pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" Jan 31 09:07:12 crc kubenswrapper[4830]: I0131 09:07:12.229600 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e3fd47c-6860-47d0-98ce-3654da25fdce-serving-cert\") pod \"route-controller-manager-bcf89fb66-fxq4w\" (UID: \"9e3fd47c-6860-47d0-98ce-3654da25fdce\") " pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" Jan 31 09:07:12 crc kubenswrapper[4830]: I0131 09:07:12.243576 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vg9x\" (UniqueName: \"kubernetes.io/projected/9e3fd47c-6860-47d0-98ce-3654da25fdce-kube-api-access-9vg9x\") pod \"route-controller-manager-bcf89fb66-fxq4w\" (UID: \"9e3fd47c-6860-47d0-98ce-3654da25fdce\") " pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" Jan 31 09:07:12 crc kubenswrapper[4830]: I0131 09:07:12.262184 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bff0c5d-b14b-4164-9294-b6e330e28a0f" path="/var/lib/kubelet/pods/6bff0c5d-b14b-4164-9294-b6e330e28a0f/volumes" Jan 31 09:07:12 crc kubenswrapper[4830]: I0131 09:07:12.395561 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" Jan 31 09:07:12 crc kubenswrapper[4830]: I0131 09:07:12.853938 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w"] Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.084773 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" event={"ID":"9e3fd47c-6860-47d0-98ce-3654da25fdce","Type":"ContainerStarted","Data":"788f52e16faad612c586019f97fd0e1c157ee62484db497fa5c83f31c107360d"} Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.085445 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" event={"ID":"9e3fd47c-6860-47d0-98ce-3654da25fdce","Type":"ContainerStarted","Data":"e647b6a41e1d0d1acbaeed78242a26325ed23ed3e2408d663fab34ff129747e9"} Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.085504 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.087113 4830 patch_prober.go:28] interesting pod/route-controller-manager-bcf89fb66-fxq4w container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body= Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.087196 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" podUID="9e3fd47c-6860-47d0-98ce-3654da25fdce" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.111826 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" podStartSLOduration=3.11179112 podStartE2EDuration="3.11179112s" podCreationTimestamp="2026-01-31 09:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:07:13.105472407 +0000 UTC m=+377.598834879" watchObservedRunningTime="2026-01-31 09:07:13.11179112 +0000 UTC m=+377.605153572" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.164372 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gkw8v"] Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.165213 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.187500 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gkw8v"] Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.243076 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4889a479-52c6-494e-a902-c7653ffef4a7-bound-sa-token\") pod \"image-registry-66df7c8f76-gkw8v\" (UID: \"4889a479-52c6-494e-a902-c7653ffef4a7\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.243160 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4889a479-52c6-494e-a902-c7653ffef4a7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gkw8v\" (UID: \"4889a479-52c6-494e-a902-c7653ffef4a7\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.243330 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gkw8v\" (UID: \"4889a479-52c6-494e-a902-c7653ffef4a7\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.243629 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4889a479-52c6-494e-a902-c7653ffef4a7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gkw8v\" (UID: \"4889a479-52c6-494e-a902-c7653ffef4a7\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.243766 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrlnp\" (UniqueName: \"kubernetes.io/projected/4889a479-52c6-494e-a902-c7653ffef4a7-kube-api-access-zrlnp\") pod \"image-registry-66df7c8f76-gkw8v\" (UID: \"4889a479-52c6-494e-a902-c7653ffef4a7\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.243862 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4889a479-52c6-494e-a902-c7653ffef4a7-trusted-ca\") pod \"image-registry-66df7c8f76-gkw8v\" (UID: \"4889a479-52c6-494e-a902-c7653ffef4a7\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.243941 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4889a479-52c6-494e-a902-c7653ffef4a7-registry-certificates\") pod \"image-registry-66df7c8f76-gkw8v\" (UID: \"4889a479-52c6-494e-a902-c7653ffef4a7\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.244049 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/4889a479-52c6-494e-a902-c7653ffef4a7-registry-tls\") pod \"image-registry-66df7c8f76-gkw8v\" (UID: \"4889a479-52c6-494e-a902-c7653ffef4a7\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.270447 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gkw8v\" (UID: \"4889a479-52c6-494e-a902-c7653ffef4a7\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.345358 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4889a479-52c6-494e-a902-c7653ffef4a7-registry-certificates\") pod \"image-registry-66df7c8f76-gkw8v\" (UID: \"4889a479-52c6-494e-a902-c7653ffef4a7\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.345439 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4889a479-52c6-494e-a902-c7653ffef4a7-registry-tls\") pod \"image-registry-66df7c8f76-gkw8v\" (UID: \"4889a479-52c6-494e-a902-c7653ffef4a7\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.346632 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4889a479-52c6-494e-a902-c7653ffef4a7-bound-sa-token\") pod \"image-registry-66df7c8f76-gkw8v\" (UID: \"4889a479-52c6-494e-a902-c7653ffef4a7\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.346682 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4889a479-52c6-494e-a902-c7653ffef4a7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gkw8v\" (UID: \"4889a479-52c6-494e-a902-c7653ffef4a7\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.346705 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4889a479-52c6-494e-a902-c7653ffef4a7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gkw8v\" (UID: \"4889a479-52c6-494e-a902-c7653ffef4a7\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.346753 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrlnp\" (UniqueName: \"kubernetes.io/projected/4889a479-52c6-494e-a902-c7653ffef4a7-kube-api-access-zrlnp\") pod \"image-registry-66df7c8f76-gkw8v\" (UID: \"4889a479-52c6-494e-a902-c7653ffef4a7\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.346785 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4889a479-52c6-494e-a902-c7653ffef4a7-trusted-ca\") pod \"image-registry-66df7c8f76-gkw8v\" (UID: \"4889a479-52c6-494e-a902-c7653ffef4a7\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.347452 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4889a479-52c6-494e-a902-c7653ffef4a7-registry-certificates\") pod \"image-registry-66df7c8f76-gkw8v\" (UID: \"4889a479-52c6-494e-a902-c7653ffef4a7\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.347710 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4889a479-52c6-494e-a902-c7653ffef4a7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gkw8v\" (UID: \"4889a479-52c6-494e-a902-c7653ffef4a7\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.347963 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4889a479-52c6-494e-a902-c7653ffef4a7-trusted-ca\") pod \"image-registry-66df7c8f76-gkw8v\" (UID: \"4889a479-52c6-494e-a902-c7653ffef4a7\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.365329 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4889a479-52c6-494e-a902-c7653ffef4a7-registry-tls\") pod \"image-registry-66df7c8f76-gkw8v\" (UID: \"4889a479-52c6-494e-a902-c7653ffef4a7\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.365329 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4889a479-52c6-494e-a902-c7653ffef4a7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gkw8v\" (UID: \"4889a479-52c6-494e-a902-c7653ffef4a7\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.369668 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrlnp\" (UniqueName: \"kubernetes.io/projected/4889a479-52c6-494e-a902-c7653ffef4a7-kube-api-access-zrlnp\") pod \"image-registry-66df7c8f76-gkw8v\" (UID: \"4889a479-52c6-494e-a902-c7653ffef4a7\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.375205 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4889a479-52c6-494e-a902-c7653ffef4a7-bound-sa-token\") pod \"image-registry-66df7c8f76-gkw8v\" (UID: \"4889a479-52c6-494e-a902-c7653ffef4a7\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.482514 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" Jan 31 09:07:13 crc kubenswrapper[4830]: I0131 09:07:13.921603 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gkw8v"] Jan 31 09:07:13 crc kubenswrapper[4830]: W0131 09:07:13.929438 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4889a479_52c6_494e_a902_c7653ffef4a7.slice/crio-27d48df200cf573b16955c58d022c2e81eea4b5766baf6c48c95320c3bfd2d98 WatchSource:0}: Error finding container 27d48df200cf573b16955c58d022c2e81eea4b5766baf6c48c95320c3bfd2d98: Status 404 returned error can't find the container with id 27d48df200cf573b16955c58d022c2e81eea4b5766baf6c48c95320c3bfd2d98 Jan 31 09:07:14 crc kubenswrapper[4830]: I0131 09:07:14.102857 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" event={"ID":"4889a479-52c6-494e-a902-c7653ffef4a7","Type":"ContainerStarted","Data":"6ccc3a816849a38a94aa9cfb33a18746a9d8c4baed430fa9f3e2cad1b0f80483"} Jan 31 09:07:14 crc kubenswrapper[4830]: I0131 09:07:14.103516 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" event={"ID":"4889a479-52c6-494e-a902-c7653ffef4a7","Type":"ContainerStarted","Data":"27d48df200cf573b16955c58d022c2e81eea4b5766baf6c48c95320c3bfd2d98"} Jan 31 09:07:14 crc kubenswrapper[4830]: I0131 09:07:14.109824 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" Jan 31 09:07:14 crc kubenswrapper[4830]: I0131 09:07:14.122598 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" podStartSLOduration=1.122581311 podStartE2EDuration="1.122581311s" podCreationTimestamp="2026-01-31 09:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:07:14.120562789 +0000 UTC m=+378.613925251" watchObservedRunningTime="2026-01-31 09:07:14.122581311 +0000 UTC m=+378.615943753" Jan 31 09:07:14 crc kubenswrapper[4830]: I0131 09:07:14.353195 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:07:14 crc kubenswrapper[4830]: I0131 09:07:14.353586 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:07:15 crc kubenswrapper[4830]: I0131 09:07:15.108394 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" Jan 31 09:07:16 crc kubenswrapper[4830]: I0131 09:07:16.632257 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2ssr8"] Jan 31 09:07:16 crc kubenswrapper[4830]: I0131 09:07:16.633198 4830 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-2ssr8" podUID="3e020928-b063-4d3c-8992-e712fe3d1b1d" containerName="registry-server" containerID="cri-o://91511445b3449579f3d14ae49e60cf52eb3d6565be2d36f6218bbcc6dfdff270" gracePeriod=30 Jan 31 09:07:16 crc kubenswrapper[4830]: I0131 09:07:16.647748 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q8t9t"] Jan 31 09:07:16 crc kubenswrapper[4830]: I0131 09:07:16.648161 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q8t9t" podUID="db7a137a-b7f9-4446-85f6-ea0d2f0caedd" containerName="registry-server" containerID="cri-o://c350e9a183f1092001e8d6788224e69c58a2be2488073ec39b0a19d0bf81b52c" gracePeriod=30 Jan 31 09:07:16 crc kubenswrapper[4830]: I0131 09:07:16.656547 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fnk7f"] Jan 31 09:07:16 crc kubenswrapper[4830]: I0131 09:07:16.656914 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f" podUID="36a7a51a-2662-4f3b-aa1d-d674cf676b9d" containerName="marketplace-operator" containerID="cri-o://4e48e977c1cc79f53accb7684a4ac58353f9e37b15ae4f8702c5995bf57261d8" gracePeriod=30 Jan 31 09:07:16 crc kubenswrapper[4830]: I0131 09:07:16.674572 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxn8r"] Jan 31 09:07:16 crc kubenswrapper[4830]: I0131 09:07:16.692197 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-58x6p"] Jan 31 09:07:16 crc kubenswrapper[4830]: I0131 09:07:16.693245 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" Jan 31 09:07:16 crc kubenswrapper[4830]: I0131 09:07:16.694859 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gs9bg"] Jan 31 09:07:16 crc kubenswrapper[4830]: I0131 09:07:16.695161 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gs9bg" podUID="ca8a4bb5-67d6-4e50-905f-95e0a15e376a" containerName="registry-server" containerID="cri-o://bac4f522a593309f95ac81e947b40e3374b6a486b33627c2867e7c855e45faad" gracePeriod=30 Jan 31 09:07:16 crc kubenswrapper[4830]: I0131 09:07:16.697518 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-58x6p"] Jan 31 09:07:16 crc kubenswrapper[4830]: I0131 09:07:16.801315 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6c3d452-2742-4f91-9857-5f5e0b50f348-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-58x6p\" (UID: \"b6c3d452-2742-4f91-9857-5f5e0b50f348\") " pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" Jan 31 09:07:16 crc kubenswrapper[4830]: I0131 09:07:16.801902 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6c3d452-2742-4f91-9857-5f5e0b50f348-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-58x6p\" (UID: \"b6c3d452-2742-4f91-9857-5f5e0b50f348\") " pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" Jan 31 09:07:16 crc kubenswrapper[4830]: I0131 09:07:16.801934 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7zts\" (UniqueName: \"kubernetes.io/projected/b6c3d452-2742-4f91-9857-5f5e0b50f348-kube-api-access-d7zts\") pod \"marketplace-operator-79b997595-58x6p\" (UID: \"b6c3d452-2742-4f91-9857-5f5e0b50f348\") " pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" Jan 31 09:07:16 crc kubenswrapper[4830]: I0131 09:07:16.903195 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6c3d452-2742-4f91-9857-5f5e0b50f348-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-58x6p\" (UID: \"b6c3d452-2742-4f91-9857-5f5e0b50f348\") " pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" Jan 31 09:07:16 crc kubenswrapper[4830]: I0131 09:07:16.903264 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7zts\" (UniqueName: \"kubernetes.io/projected/b6c3d452-2742-4f91-9857-5f5e0b50f348-kube-api-access-d7zts\") pod \"marketplace-operator-79b997595-58x6p\" (UID: \"b6c3d452-2742-4f91-9857-5f5e0b50f348\") " pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" Jan 31 09:07:16 crc kubenswrapper[4830]: I0131 09:07:16.903327 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6c3d452-2742-4f91-9857-5f5e0b50f348-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-58x6p\" (UID: \"b6c3d452-2742-4f91-9857-5f5e0b50f348\") " pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" Jan 31 09:07:16 crc kubenswrapper[4830]: I0131 09:07:16.906705 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6c3d452-2742-4f91-9857-5f5e0b50f348-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-58x6p\" (UID: \"b6c3d452-2742-4f91-9857-5f5e0b50f348\") " pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" Jan 31 09:07:16 crc kubenswrapper[4830]: I0131 09:07:16.930125 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6c3d452-2742-4f91-9857-5f5e0b50f348-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-58x6p\" (UID: \"b6c3d452-2742-4f91-9857-5f5e0b50f348\") " pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" Jan 31 09:07:16 crc kubenswrapper[4830]: I0131 09:07:16.943144 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7zts\" (UniqueName: \"kubernetes.io/projected/b6c3d452-2742-4f91-9857-5f5e0b50f348-kube-api-access-d7zts\") pod \"marketplace-operator-79b997595-58x6p\" (UID: \"b6c3d452-2742-4f91-9857-5f5e0b50f348\") " pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.040492 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.123056 4830 generic.go:334] "Generic (PLEG): container finished" podID="ca8a4bb5-67d6-4e50-905f-95e0a15e376a" containerID="bac4f522a593309f95ac81e947b40e3374b6a486b33627c2867e7c855e45faad" exitCode=0 Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.123144 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gs9bg" event={"ID":"ca8a4bb5-67d6-4e50-905f-95e0a15e376a","Type":"ContainerDied","Data":"bac4f522a593309f95ac81e947b40e3374b6a486b33627c2867e7c855e45faad"} Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.124852 4830 generic.go:334] "Generic (PLEG): container finished" podID="36a7a51a-2662-4f3b-aa1d-d674cf676b9d" containerID="4e48e977c1cc79f53accb7684a4ac58353f9e37b15ae4f8702c5995bf57261d8" exitCode=0 Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.124951 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f" event={"ID":"36a7a51a-2662-4f3b-aa1d-d674cf676b9d","Type":"ContainerDied","Data":"4e48e977c1cc79f53accb7684a4ac58353f9e37b15ae4f8702c5995bf57261d8"} Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.125068 4830 scope.go:117] "RemoveContainer" containerID="b90565efd448c3a205961e4d926bf471147c2a338b39eef1471085e2888f47a0" Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.127122 4830 generic.go:334] "Generic (PLEG): container finished" podID="db7a137a-b7f9-4446-85f6-ea0d2f0caedd" containerID="c350e9a183f1092001e8d6788224e69c58a2be2488073ec39b0a19d0bf81b52c" exitCode=0 Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.127203 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8t9t" event={"ID":"db7a137a-b7f9-4446-85f6-ea0d2f0caedd","Type":"ContainerDied","Data":"c350e9a183f1092001e8d6788224e69c58a2be2488073ec39b0a19d0bf81b52c"} Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.130303 4830 generic.go:334] "Generic (PLEG): container finished" podID="3e020928-b063-4d3c-8992-e712fe3d1b1d" 
containerID="91511445b3449579f3d14ae49e60cf52eb3d6565be2d36f6218bbcc6dfdff270" exitCode=0 Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.130382 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ssr8" event={"ID":"3e020928-b063-4d3c-8992-e712fe3d1b1d","Type":"ContainerDied","Data":"91511445b3449579f3d14ae49e60cf52eb3d6565be2d36f6218bbcc6dfdff270"} Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.130591 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sxn8r" podUID="3868f465-887b-4580-8c17-293665785251" containerName="registry-server" containerID="cri-o://d6e16cadc700d0fcb28f1b5f96aa36d73c6023b86ddef29f9188be612c19f246" gracePeriod=30 Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.292302 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q8t9t" Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.365879 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f" Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.371124 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gs9bg" Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.417983 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db7a137a-b7f9-4446-85f6-ea0d2f0caedd-utilities\") pod \"db7a137a-b7f9-4446-85f6-ea0d2f0caedd\" (UID: \"db7a137a-b7f9-4446-85f6-ea0d2f0caedd\") " Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.418144 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhrcb\" (UniqueName: \"kubernetes.io/projected/db7a137a-b7f9-4446-85f6-ea0d2f0caedd-kube-api-access-bhrcb\") pod \"db7a137a-b7f9-4446-85f6-ea0d2f0caedd\" (UID: \"db7a137a-b7f9-4446-85f6-ea0d2f0caedd\") " Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.418179 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db7a137a-b7f9-4446-85f6-ea0d2f0caedd-catalog-content\") pod \"db7a137a-b7f9-4446-85f6-ea0d2f0caedd\" (UID: \"db7a137a-b7f9-4446-85f6-ea0d2f0caedd\") " Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.420201 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db7a137a-b7f9-4446-85f6-ea0d2f0caedd-utilities" (OuterVolumeSpecName: "utilities") pod "db7a137a-b7f9-4446-85f6-ea0d2f0caedd" (UID: "db7a137a-b7f9-4446-85f6-ea0d2f0caedd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.428950 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db7a137a-b7f9-4446-85f6-ea0d2f0caedd-kube-api-access-bhrcb" (OuterVolumeSpecName: "kube-api-access-bhrcb") pod "db7a137a-b7f9-4446-85f6-ea0d2f0caedd" (UID: "db7a137a-b7f9-4446-85f6-ea0d2f0caedd"). InnerVolumeSpecName "kube-api-access-bhrcb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.480927 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db7a137a-b7f9-4446-85f6-ea0d2f0caedd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db7a137a-b7f9-4446-85f6-ea0d2f0caedd" (UID: "db7a137a-b7f9-4446-85f6-ea0d2f0caedd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.520180 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhctp\" (UniqueName: \"kubernetes.io/projected/36a7a51a-2662-4f3b-aa1d-d674cf676b9d-kube-api-access-nhctp\") pod \"36a7a51a-2662-4f3b-aa1d-d674cf676b9d\" (UID: \"36a7a51a-2662-4f3b-aa1d-d674cf676b9d\") " Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.520237 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz2g4\" (UniqueName: \"kubernetes.io/projected/ca8a4bb5-67d6-4e50-905f-95e0a15e376a-kube-api-access-lz2g4\") pod \"ca8a4bb5-67d6-4e50-905f-95e0a15e376a\" (UID: \"ca8a4bb5-67d6-4e50-905f-95e0a15e376a\") " Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.520289 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca8a4bb5-67d6-4e50-905f-95e0a15e376a-catalog-content\") pod \"ca8a4bb5-67d6-4e50-905f-95e0a15e376a\" (UID: \"ca8a4bb5-67d6-4e50-905f-95e0a15e376a\") " Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.520318 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/36a7a51a-2662-4f3b-aa1d-d674cf676b9d-marketplace-operator-metrics\") pod \"36a7a51a-2662-4f3b-aa1d-d674cf676b9d\" (UID: \"36a7a51a-2662-4f3b-aa1d-d674cf676b9d\") " Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.520392 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca8a4bb5-67d6-4e50-905f-95e0a15e376a-utilities\") pod \"ca8a4bb5-67d6-4e50-905f-95e0a15e376a\" (UID: \"ca8a4bb5-67d6-4e50-905f-95e0a15e376a\") " Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.520450 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36a7a51a-2662-4f3b-aa1d-d674cf676b9d-marketplace-trusted-ca\") pod \"36a7a51a-2662-4f3b-aa1d-d674cf676b9d\" (UID: \"36a7a51a-2662-4f3b-aa1d-d674cf676b9d\") " Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.520742 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db7a137a-b7f9-4446-85f6-ea0d2f0caedd-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.520755 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhrcb\" (UniqueName: \"kubernetes.io/projected/db7a137a-b7f9-4446-85f6-ea0d2f0caedd-kube-api-access-bhrcb\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.520768 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db7a137a-b7f9-4446-85f6-ea0d2f0caedd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 
09:07:17.521683 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36a7a51a-2662-4f3b-aa1d-d674cf676b9d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "36a7a51a-2662-4f3b-aa1d-d674cf676b9d" (UID: "36a7a51a-2662-4f3b-aa1d-d674cf676b9d"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.524116 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca8a4bb5-67d6-4e50-905f-95e0a15e376a-utilities" (OuterVolumeSpecName: "utilities") pod "ca8a4bb5-67d6-4e50-905f-95e0a15e376a" (UID: "ca8a4bb5-67d6-4e50-905f-95e0a15e376a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.526497 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca8a4bb5-67d6-4e50-905f-95e0a15e376a-kube-api-access-lz2g4" (OuterVolumeSpecName: "kube-api-access-lz2g4") pod "ca8a4bb5-67d6-4e50-905f-95e0a15e376a" (UID: "ca8a4bb5-67d6-4e50-905f-95e0a15e376a"). InnerVolumeSpecName "kube-api-access-lz2g4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.526625 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36a7a51a-2662-4f3b-aa1d-d674cf676b9d-kube-api-access-nhctp" (OuterVolumeSpecName: "kube-api-access-nhctp") pod "36a7a51a-2662-4f3b-aa1d-d674cf676b9d" (UID: "36a7a51a-2662-4f3b-aa1d-d674cf676b9d"). InnerVolumeSpecName "kube-api-access-nhctp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.527149 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36a7a51a-2662-4f3b-aa1d-d674cf676b9d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "36a7a51a-2662-4f3b-aa1d-d674cf676b9d" (UID: "36a7a51a-2662-4f3b-aa1d-d674cf676b9d"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.572296 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-58x6p"] Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.622241 4830 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36a7a51a-2662-4f3b-aa1d-d674cf676b9d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.622415 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhctp\" (UniqueName: \"kubernetes.io/projected/36a7a51a-2662-4f3b-aa1d-d674cf676b9d-kube-api-access-nhctp\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.622428 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz2g4\" (UniqueName: \"kubernetes.io/projected/ca8a4bb5-67d6-4e50-905f-95e0a15e376a-kube-api-access-lz2g4\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.622440 4830 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/36a7a51a-2662-4f3b-aa1d-d674cf676b9d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.622453 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca8a4bb5-67d6-4e50-905f-95e0a15e376a-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:17 crc kubenswrapper[4830]: W0131 09:07:17.660509 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6c3d452_2742_4f91_9857_5f5e0b50f348.slice/crio-097ea3f829d45855945388a7827ce62424a8577d547f3057b2db3face9f34831 WatchSource:0}: Error finding container 097ea3f829d45855945388a7827ce62424a8577d547f3057b2db3face9f34831: Status 404 returned error can't find the container with id 097ea3f829d45855945388a7827ce62424a8577d547f3057b2db3face9f34831 Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.677185 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca8a4bb5-67d6-4e50-905f-95e0a15e376a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca8a4bb5-67d6-4e50-905f-95e0a15e376a" (UID: "ca8a4bb5-67d6-4e50-905f-95e0a15e376a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.724333 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca8a4bb5-67d6-4e50-905f-95e0a15e376a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.863323 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ssr8" Jan 31 09:07:17 crc kubenswrapper[4830]: I0131 09:07:17.876750 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxn8r" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.028201 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3868f465-887b-4580-8c17-293665785251-utilities\") pod \"3868f465-887b-4580-8c17-293665785251\" (UID: \"3868f465-887b-4580-8c17-293665785251\") " Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.028280 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e020928-b063-4d3c-8992-e712fe3d1b1d-catalog-content\") pod \"3e020928-b063-4d3c-8992-e712fe3d1b1d\" (UID: \"3e020928-b063-4d3c-8992-e712fe3d1b1d\") " Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.028314 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhc4h\" (UniqueName: \"kubernetes.io/projected/3868f465-887b-4580-8c17-293665785251-kube-api-access-rhc4h\") pod \"3868f465-887b-4580-8c17-293665785251\" (UID: \"3868f465-887b-4580-8c17-293665785251\") " Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.028381 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e020928-b063-4d3c-8992-e712fe3d1b1d-utilities\") pod \"3e020928-b063-4d3c-8992-e712fe3d1b1d\" (UID: \"3e020928-b063-4d3c-8992-e712fe3d1b1d\") " Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.028410 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3868f465-887b-4580-8c17-293665785251-catalog-content\") pod \"3868f465-887b-4580-8c17-293665785251\" (UID: \"3868f465-887b-4580-8c17-293665785251\") " Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.028440 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zts58\" (UniqueName: \"kubernetes.io/projected/3e020928-b063-4d3c-8992-e712fe3d1b1d-kube-api-access-zts58\") pod \"3e020928-b063-4d3c-8992-e712fe3d1b1d\" (UID: \"3e020928-b063-4d3c-8992-e712fe3d1b1d\") " Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.030572 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3868f465-887b-4580-8c17-293665785251-utilities" (OuterVolumeSpecName: "utilities") pod "3868f465-887b-4580-8c17-293665785251" (UID: "3868f465-887b-4580-8c17-293665785251"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.030640 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e020928-b063-4d3c-8992-e712fe3d1b1d-utilities" (OuterVolumeSpecName: "utilities") pod "3e020928-b063-4d3c-8992-e712fe3d1b1d" (UID: "3e020928-b063-4d3c-8992-e712fe3d1b1d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.049782 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e020928-b063-4d3c-8992-e712fe3d1b1d-kube-api-access-zts58" (OuterVolumeSpecName: "kube-api-access-zts58") pod "3e020928-b063-4d3c-8992-e712fe3d1b1d" (UID: "3e020928-b063-4d3c-8992-e712fe3d1b1d"). InnerVolumeSpecName "kube-api-access-zts58". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.049908 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3868f465-887b-4580-8c17-293665785251-kube-api-access-rhc4h" (OuterVolumeSpecName: "kube-api-access-rhc4h") pod "3868f465-887b-4580-8c17-293665785251" (UID: "3868f465-887b-4580-8c17-293665785251"). InnerVolumeSpecName "kube-api-access-rhc4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.056241 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3868f465-887b-4580-8c17-293665785251-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3868f465-887b-4580-8c17-293665785251" (UID: "3868f465-887b-4580-8c17-293665785251"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.085529 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e020928-b063-4d3c-8992-e712fe3d1b1d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e020928-b063-4d3c-8992-e712fe3d1b1d" (UID: "3e020928-b063-4d3c-8992-e712fe3d1b1d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.129875 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e020928-b063-4d3c-8992-e712fe3d1b1d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.129928 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3868f465-887b-4580-8c17-293665785251-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.129942 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zts58\" (UniqueName: \"kubernetes.io/projected/3e020928-b063-4d3c-8992-e712fe3d1b1d-kube-api-access-zts58\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.129953 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3868f465-887b-4580-8c17-293665785251-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.129966 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e020928-b063-4d3c-8992-e712fe3d1b1d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.129975 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhc4h\" (UniqueName: \"kubernetes.io/projected/3868f465-887b-4580-8c17-293665785251-kube-api-access-rhc4h\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.141225 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gs9bg" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.141273 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gs9bg" event={"ID":"ca8a4bb5-67d6-4e50-905f-95e0a15e376a","Type":"ContainerDied","Data":"5b72ea10c07f3eeb0d0496ad7fa8736e50cce6e4d3b3d9cf616facb5858ec6f6"} Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.141334 4830 scope.go:117] "RemoveContainer" containerID="bac4f522a593309f95ac81e947b40e3374b6a486b33627c2867e7c855e45faad" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.145115 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f" event={"ID":"36a7a51a-2662-4f3b-aa1d-d674cf676b9d","Type":"ContainerDied","Data":"e333f126646e33e3be1b2d0c1dda0c5012c306f8b9919b39797eb66da8e04c59"} Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.145253 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fnk7f" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.151873 4830 generic.go:334] "Generic (PLEG): container finished" podID="3868f465-887b-4580-8c17-293665785251" containerID="d6e16cadc700d0fcb28f1b5f96aa36d73c6023b86ddef29f9188be612c19f246" exitCode=0 Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.151953 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxn8r" event={"ID":"3868f465-887b-4580-8c17-293665785251","Type":"ContainerDied","Data":"d6e16cadc700d0fcb28f1b5f96aa36d73c6023b86ddef29f9188be612c19f246"} Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.151988 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxn8r" event={"ID":"3868f465-887b-4580-8c17-293665785251","Type":"ContainerDied","Data":"b683a829a6a3da3790fa672b91fd2612d9b3d5a07c3c91411c1193079494cd22"} Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.152079 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxn8r" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.156178 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8t9t" event={"ID":"db7a137a-b7f9-4446-85f6-ea0d2f0caedd","Type":"ContainerDied","Data":"3408accb1abcb9f45ad912603142df63d056931c7e91eaaa009bb4bd10e4c29d"} Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.156366 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q8t9t" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.160487 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2ssr8" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.160459 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ssr8" event={"ID":"3e020928-b063-4d3c-8992-e712fe3d1b1d","Type":"ContainerDied","Data":"6570560ae4c56864c000a86f57a5dbed953675349c6784122415e645d9d9067d"} Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.161807 4830 scope.go:117] "RemoveContainer" containerID="69c9e1248ed8682621278217a7f13f6c22489ce8b103c0312a4e64be9018ae62" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.162953 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" event={"ID":"b6c3d452-2742-4f91-9857-5f5e0b50f348","Type":"ContainerStarted","Data":"d85017aaf93892f489ab9319825e71a9a965d45d582b884dfab7617b94a784eb"} Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.163037 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" event={"ID":"b6c3d452-2742-4f91-9857-5f5e0b50f348","Type":"ContainerStarted","Data":"097ea3f829d45855945388a7827ce62424a8577d547f3057b2db3face9f34831"} Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.164020 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.168610 4830 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-58x6p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused" start-of-body= Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.168669 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" podUID="b6c3d452-2742-4f91-9857-5f5e0b50f348" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.204066 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" podStartSLOduration=2.204031653 podStartE2EDuration="2.204031653s" podCreationTimestamp="2026-01-31 09:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:07:18.197908805 +0000 UTC m=+382.691271247" watchObservedRunningTime="2026-01-31 09:07:18.204031653 +0000 UTC m=+382.697394105" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.205831 4830 scope.go:117] "RemoveContainer" containerID="fbd87bac36d49f4b8412548086f5c4c860691e3ab80d714c1a6347d9329db56b" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.223302 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gs9bg"] Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.233683 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gs9bg"] Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.236165 4830 scope.go:117] "RemoveContainer" containerID="4e48e977c1cc79f53accb7684a4ac58353f9e37b15ae4f8702c5995bf57261d8" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.245869 4830 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fnk7f"] Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.255757 4830 scope.go:117] "RemoveContainer" containerID="d6e16cadc700d0fcb28f1b5f96aa36d73c6023b86ddef29f9188be612c19f246" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.262985 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca8a4bb5-67d6-4e50-905f-95e0a15e376a" path="/var/lib/kubelet/pods/ca8a4bb5-67d6-4e50-905f-95e0a15e376a/volumes" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.265290 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fnk7f"] Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.265333 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxn8r"] Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.271039 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxn8r"] Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.273118 4830 scope.go:117] "RemoveContainer" containerID="5e9914065ddf6845fb0f9e3be4b114e59184a828a92bd6b2c9b6a3946d6f3692" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.282385 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q8t9t"] Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.290908 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q8t9t"] Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.292975 4830 scope.go:117] "RemoveContainer" containerID="ec29cbde38fa1dedfebe665bf9d3311a37fb95608bde46d7ef2f495d3c9c0134" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.296042 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2ssr8"] Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.300463 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2ssr8"] Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.310162 4830 scope.go:117] "RemoveContainer" containerID="d6e16cadc700d0fcb28f1b5f96aa36d73c6023b86ddef29f9188be612c19f246" Jan 31 09:07:18 crc kubenswrapper[4830]: E0131 09:07:18.310564 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6e16cadc700d0fcb28f1b5f96aa36d73c6023b86ddef29f9188be612c19f246\": container with ID starting with d6e16cadc700d0fcb28f1b5f96aa36d73c6023b86ddef29f9188be612c19f246 not found: ID does not exist" containerID="d6e16cadc700d0fcb28f1b5f96aa36d73c6023b86ddef29f9188be612c19f246" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.310609 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6e16cadc700d0fcb28f1b5f96aa36d73c6023b86ddef29f9188be612c19f246"} err="failed to get container status \"d6e16cadc700d0fcb28f1b5f96aa36d73c6023b86ddef29f9188be612c19f246\": rpc error: code = NotFound desc = could not find container \"d6e16cadc700d0fcb28f1b5f96aa36d73c6023b86ddef29f9188be612c19f246\": container with ID starting with d6e16cadc700d0fcb28f1b5f96aa36d73c6023b86ddef29f9188be612c19f246 not found: ID does not exist" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.310638 4830 scope.go:117] "RemoveContainer" containerID="5e9914065ddf6845fb0f9e3be4b114e59184a828a92bd6b2c9b6a3946d6f3692" Jan 31 09:07:18 
crc kubenswrapper[4830]: E0131 09:07:18.310983 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e9914065ddf6845fb0f9e3be4b114e59184a828a92bd6b2c9b6a3946d6f3692\": container with ID starting with 5e9914065ddf6845fb0f9e3be4b114e59184a828a92bd6b2c9b6a3946d6f3692 not found: ID does not exist" containerID="5e9914065ddf6845fb0f9e3be4b114e59184a828a92bd6b2c9b6a3946d6f3692" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.311011 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e9914065ddf6845fb0f9e3be4b114e59184a828a92bd6b2c9b6a3946d6f3692"} err="failed to get container status \"5e9914065ddf6845fb0f9e3be4b114e59184a828a92bd6b2c9b6a3946d6f3692\": rpc error: code = NotFound desc = could not find container \"5e9914065ddf6845fb0f9e3be4b114e59184a828a92bd6b2c9b6a3946d6f3692\": container with ID starting with 5e9914065ddf6845fb0f9e3be4b114e59184a828a92bd6b2c9b6a3946d6f3692 not found: ID does not exist" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.311030 4830 scope.go:117] "RemoveContainer" containerID="ec29cbde38fa1dedfebe665bf9d3311a37fb95608bde46d7ef2f495d3c9c0134" Jan 31 09:07:18 crc kubenswrapper[4830]: E0131 09:07:18.311307 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec29cbde38fa1dedfebe665bf9d3311a37fb95608bde46d7ef2f495d3c9c0134\": container with ID starting with ec29cbde38fa1dedfebe665bf9d3311a37fb95608bde46d7ef2f495d3c9c0134 not found: ID does not exist" containerID="ec29cbde38fa1dedfebe665bf9d3311a37fb95608bde46d7ef2f495d3c9c0134" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.311338 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec29cbde38fa1dedfebe665bf9d3311a37fb95608bde46d7ef2f495d3c9c0134"} err="failed to get container status \"ec29cbde38fa1dedfebe665bf9d3311a37fb95608bde46d7ef2f495d3c9c0134\": rpc error: code = NotFound desc = could not find container \"ec29cbde38fa1dedfebe665bf9d3311a37fb95608bde46d7ef2f495d3c9c0134\": container with ID starting with ec29cbde38fa1dedfebe665bf9d3311a37fb95608bde46d7ef2f495d3c9c0134 not found: ID does not exist" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.311357 4830 scope.go:117] "RemoveContainer" containerID="c350e9a183f1092001e8d6788224e69c58a2be2488073ec39b0a19d0bf81b52c" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.326462 4830 scope.go:117] "RemoveContainer" containerID="e3739938c8209aea2c94a2139d565624e6a0e9b2b2a62ff1d79c01c29bb78ba1" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.340993 4830 scope.go:117] "RemoveContainer" containerID="b1da15b1cfd5f09f4f82e796703792e2bfc71f61de85bcc01295357613e9d7f0" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.364469 4830 scope.go:117] "RemoveContainer" containerID="91511445b3449579f3d14ae49e60cf52eb3d6565be2d36f6218bbcc6dfdff270" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.379657 4830 scope.go:117] "RemoveContainer" containerID="4907ca9744784c8af949d7901fcde921f68f7c0fe7a1e5c5d1028c7bbdf74675" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.396972 4830 scope.go:117] "RemoveContainer" containerID="87e5360931c06df89cc5a321b5a6e533de79e0a177545667824500166052980a" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.847074 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g5pvp"] Jan 31 09:07:18 crc 
kubenswrapper[4830]: E0131 09:07:18.847358 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3868f465-887b-4580-8c17-293665785251" containerName="extract-content" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.847377 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3868f465-887b-4580-8c17-293665785251" containerName="extract-content" Jan 31 09:07:18 crc kubenswrapper[4830]: E0131 09:07:18.847389 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca8a4bb5-67d6-4e50-905f-95e0a15e376a" containerName="registry-server" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.847397 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca8a4bb5-67d6-4e50-905f-95e0a15e376a" containerName="registry-server" Jan 31 09:07:18 crc kubenswrapper[4830]: E0131 09:07:18.847407 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e020928-b063-4d3c-8992-e712fe3d1b1d" containerName="registry-server" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.847417 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e020928-b063-4d3c-8992-e712fe3d1b1d" containerName="registry-server" Jan 31 09:07:18 crc kubenswrapper[4830]: E0131 09:07:18.847429 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca8a4bb5-67d6-4e50-905f-95e0a15e376a" containerName="extract-content" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.847436 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca8a4bb5-67d6-4e50-905f-95e0a15e376a" containerName="extract-content" Jan 31 09:07:18 crc kubenswrapper[4830]: E0131 09:07:18.847448 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db7a137a-b7f9-4446-85f6-ea0d2f0caedd" containerName="registry-server" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.847458 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="db7a137a-b7f9-4446-85f6-ea0d2f0caedd" containerName="registry-server" Jan 31 09:07:18 crc kubenswrapper[4830]: E0131 09:07:18.847472 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3868f465-887b-4580-8c17-293665785251" containerName="registry-server" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.847482 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3868f465-887b-4580-8c17-293665785251" containerName="registry-server" Jan 31 09:07:18 crc kubenswrapper[4830]: E0131 09:07:18.847490 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db7a137a-b7f9-4446-85f6-ea0d2f0caedd" containerName="extract-content" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.847498 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="db7a137a-b7f9-4446-85f6-ea0d2f0caedd" containerName="extract-content" Jan 31 09:07:18 crc kubenswrapper[4830]: E0131 09:07:18.847508 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e020928-b063-4d3c-8992-e712fe3d1b1d" containerName="extract-utilities" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.847516 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e020928-b063-4d3c-8992-e712fe3d1b1d" containerName="extract-utilities" Jan 31 09:07:18 crc kubenswrapper[4830]: E0131 09:07:18.847527 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca8a4bb5-67d6-4e50-905f-95e0a15e376a" containerName="extract-utilities" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.847535 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca8a4bb5-67d6-4e50-905f-95e0a15e376a" 
containerName="extract-utilities" Jan 31 09:07:18 crc kubenswrapper[4830]: E0131 09:07:18.847547 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36a7a51a-2662-4f3b-aa1d-d674cf676b9d" containerName="marketplace-operator" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.847555 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="36a7a51a-2662-4f3b-aa1d-d674cf676b9d" containerName="marketplace-operator" Jan 31 09:07:18 crc kubenswrapper[4830]: E0131 09:07:18.847566 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e020928-b063-4d3c-8992-e712fe3d1b1d" containerName="extract-content" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.847575 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e020928-b063-4d3c-8992-e712fe3d1b1d" containerName="extract-content" Jan 31 09:07:18 crc kubenswrapper[4830]: E0131 09:07:18.847589 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3868f465-887b-4580-8c17-293665785251" containerName="extract-utilities" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.847597 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3868f465-887b-4580-8c17-293665785251" containerName="extract-utilities" Jan 31 09:07:18 crc kubenswrapper[4830]: E0131 09:07:18.847611 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db7a137a-b7f9-4446-85f6-ea0d2f0caedd" containerName="extract-utilities" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.847620 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="db7a137a-b7f9-4446-85f6-ea0d2f0caedd" containerName="extract-utilities" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.847776 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="db7a137a-b7f9-4446-85f6-ea0d2f0caedd" containerName="registry-server" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.847792 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="36a7a51a-2662-4f3b-aa1d-d674cf676b9d" containerName="marketplace-operator" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.847804 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e020928-b063-4d3c-8992-e712fe3d1b1d" containerName="registry-server" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.847815 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="3868f465-887b-4580-8c17-293665785251" containerName="registry-server" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.847826 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="36a7a51a-2662-4f3b-aa1d-d674cf676b9d" containerName="marketplace-operator" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.847836 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca8a4bb5-67d6-4e50-905f-95e0a15e376a" containerName="registry-server" Jan 31 09:07:18 crc kubenswrapper[4830]: E0131 09:07:18.847952 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36a7a51a-2662-4f3b-aa1d-d674cf676b9d" containerName="marketplace-operator" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.847963 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="36a7a51a-2662-4f3b-aa1d-d674cf676b9d" containerName="marketplace-operator" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.848792 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g5pvp" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.851685 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.858915 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g5pvp"] Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.947031 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnvqq\" (UniqueName: \"kubernetes.io/projected/35d308f6-fcf3-4b01-b26e-5c1848d6ee7d-kube-api-access-fnvqq\") pod \"redhat-marketplace-g5pvp\" (UID: \"35d308f6-fcf3-4b01-b26e-5c1848d6ee7d\") " pod="openshift-marketplace/redhat-marketplace-g5pvp" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.947085 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35d308f6-fcf3-4b01-b26e-5c1848d6ee7d-catalog-content\") pod \"redhat-marketplace-g5pvp\" (UID: \"35d308f6-fcf3-4b01-b26e-5c1848d6ee7d\") " pod="openshift-marketplace/redhat-marketplace-g5pvp" Jan 31 09:07:18 crc kubenswrapper[4830]: I0131 09:07:18.947246 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35d308f6-fcf3-4b01-b26e-5c1848d6ee7d-utilities\") pod \"redhat-marketplace-g5pvp\" (UID: \"35d308f6-fcf3-4b01-b26e-5c1848d6ee7d\") " pod="openshift-marketplace/redhat-marketplace-g5pvp" Jan 31 09:07:19 crc kubenswrapper[4830]: I0131 09:07:19.046452 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-56876"] Jan 31 09:07:19 crc kubenswrapper[4830]: I0131 09:07:19.047853 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-56876" Jan 31 09:07:19 crc kubenswrapper[4830]: I0131 09:07:19.054474 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 09:07:19 crc kubenswrapper[4830]: I0131 09:07:19.056391 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnvqq\" (UniqueName: \"kubernetes.io/projected/35d308f6-fcf3-4b01-b26e-5c1848d6ee7d-kube-api-access-fnvqq\") pod \"redhat-marketplace-g5pvp\" (UID: \"35d308f6-fcf3-4b01-b26e-5c1848d6ee7d\") " pod="openshift-marketplace/redhat-marketplace-g5pvp" Jan 31 09:07:19 crc kubenswrapper[4830]: I0131 09:07:19.056702 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35d308f6-fcf3-4b01-b26e-5c1848d6ee7d-catalog-content\") pod \"redhat-marketplace-g5pvp\" (UID: \"35d308f6-fcf3-4b01-b26e-5c1848d6ee7d\") " pod="openshift-marketplace/redhat-marketplace-g5pvp" Jan 31 09:07:19 crc kubenswrapper[4830]: I0131 09:07:19.057048 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35d308f6-fcf3-4b01-b26e-5c1848d6ee7d-utilities\") pod \"redhat-marketplace-g5pvp\" (UID: \"35d308f6-fcf3-4b01-b26e-5c1848d6ee7d\") " pod="openshift-marketplace/redhat-marketplace-g5pvp" Jan 31 09:07:19 crc kubenswrapper[4830]: I0131 09:07:19.057788 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35d308f6-fcf3-4b01-b26e-5c1848d6ee7d-utilities\") pod \"redhat-marketplace-g5pvp\" (UID: \"35d308f6-fcf3-4b01-b26e-5c1848d6ee7d\") " pod="openshift-marketplace/redhat-marketplace-g5pvp" Jan 31 09:07:19 crc kubenswrapper[4830]: I0131 09:07:19.058504 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35d308f6-fcf3-4b01-b26e-5c1848d6ee7d-catalog-content\") pod \"redhat-marketplace-g5pvp\" (UID: \"35d308f6-fcf3-4b01-b26e-5c1848d6ee7d\") " pod="openshift-marketplace/redhat-marketplace-g5pvp" Jan 31 09:07:19 crc kubenswrapper[4830]: I0131 09:07:19.069677 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56876"] Jan 31 09:07:19 crc kubenswrapper[4830]: I0131 09:07:19.082713 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnvqq\" (UniqueName: \"kubernetes.io/projected/35d308f6-fcf3-4b01-b26e-5c1848d6ee7d-kube-api-access-fnvqq\") pod \"redhat-marketplace-g5pvp\" (UID: \"35d308f6-fcf3-4b01-b26e-5c1848d6ee7d\") " pod="openshift-marketplace/redhat-marketplace-g5pvp" Jan 31 09:07:19 crc kubenswrapper[4830]: I0131 09:07:19.158848 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2626e876-9148-4165-a735-a5a1733c014d-catalog-content\") pod \"redhat-operators-56876\" (UID: \"2626e876-9148-4165-a735-a5a1733c014d\") " pod="openshift-marketplace/redhat-operators-56876" Jan 31 09:07:19 crc kubenswrapper[4830]: I0131 09:07:19.158938 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2626e876-9148-4165-a735-a5a1733c014d-utilities\") pod \"redhat-operators-56876\" (UID: \"2626e876-9148-4165-a735-a5a1733c014d\") " 
pod="openshift-marketplace/redhat-operators-56876" Jan 31 09:07:19 crc kubenswrapper[4830]: I0131 09:07:19.158961 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kv6z\" (UniqueName: \"kubernetes.io/projected/2626e876-9148-4165-a735-a5a1733c014d-kube-api-access-7kv6z\") pod \"redhat-operators-56876\" (UID: \"2626e876-9148-4165-a735-a5a1733c014d\") " pod="openshift-marketplace/redhat-operators-56876" Jan 31 09:07:19 crc kubenswrapper[4830]: I0131 09:07:19.178502 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" Jan 31 09:07:19 crc kubenswrapper[4830]: I0131 09:07:19.201686 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g5pvp" Jan 31 09:07:19 crc kubenswrapper[4830]: I0131 09:07:19.260843 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2626e876-9148-4165-a735-a5a1733c014d-utilities\") pod \"redhat-operators-56876\" (UID: \"2626e876-9148-4165-a735-a5a1733c014d\") " pod="openshift-marketplace/redhat-operators-56876" Jan 31 09:07:19 crc kubenswrapper[4830]: I0131 09:07:19.260931 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kv6z\" (UniqueName: \"kubernetes.io/projected/2626e876-9148-4165-a735-a5a1733c014d-kube-api-access-7kv6z\") pod \"redhat-operators-56876\" (UID: \"2626e876-9148-4165-a735-a5a1733c014d\") " pod="openshift-marketplace/redhat-operators-56876" Jan 31 09:07:19 crc kubenswrapper[4830]: I0131 09:07:19.261022 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2626e876-9148-4165-a735-a5a1733c014d-catalog-content\") pod \"redhat-operators-56876\" (UID: \"2626e876-9148-4165-a735-a5a1733c014d\") " pod="openshift-marketplace/redhat-operators-56876" Jan 31 09:07:19 crc kubenswrapper[4830]: I0131 09:07:19.261587 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2626e876-9148-4165-a735-a5a1733c014d-catalog-content\") pod \"redhat-operators-56876\" (UID: \"2626e876-9148-4165-a735-a5a1733c014d\") " pod="openshift-marketplace/redhat-operators-56876" Jan 31 09:07:19 crc kubenswrapper[4830]: I0131 09:07:19.262416 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2626e876-9148-4165-a735-a5a1733c014d-utilities\") pod \"redhat-operators-56876\" (UID: \"2626e876-9148-4165-a735-a5a1733c014d\") " pod="openshift-marketplace/redhat-operators-56876" Jan 31 09:07:19 crc kubenswrapper[4830]: I0131 09:07:19.280305 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kv6z\" (UniqueName: \"kubernetes.io/projected/2626e876-9148-4165-a735-a5a1733c014d-kube-api-access-7kv6z\") pod \"redhat-operators-56876\" (UID: \"2626e876-9148-4165-a735-a5a1733c014d\") " pod="openshift-marketplace/redhat-operators-56876" Jan 31 09:07:19 crc kubenswrapper[4830]: I0131 09:07:19.374803 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-56876" Jan 31 09:07:19 crc kubenswrapper[4830]: I0131 09:07:19.648314 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g5pvp"] Jan 31 09:07:19 crc kubenswrapper[4830]: W0131 09:07:19.651378 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35d308f6_fcf3_4b01_b26e_5c1848d6ee7d.slice/crio-255206e0a3bb32a8ba184f71b13bb63507556e90f4d6cd138176e32fae10c12b WatchSource:0}: Error finding container 255206e0a3bb32a8ba184f71b13bb63507556e90f4d6cd138176e32fae10c12b: Status 404 returned error can't find the container with id 255206e0a3bb32a8ba184f71b13bb63507556e90f4d6cd138176e32fae10c12b Jan 31 09:07:19 crc kubenswrapper[4830]: I0131 09:07:19.826783 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56876"] Jan 31 09:07:19 crc kubenswrapper[4830]: W0131 09:07:19.882410 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2626e876_9148_4165_a735_a5a1733c014d.slice/crio-8764f369ebc99cd67655ff0b47f5dcdd8b92dd7f9bba2b4bdc74f168fa4bd725 WatchSource:0}: Error finding container 8764f369ebc99cd67655ff0b47f5dcdd8b92dd7f9bba2b4bdc74f168fa4bd725: Status 404 returned error can't find the container with id 8764f369ebc99cd67655ff0b47f5dcdd8b92dd7f9bba2b4bdc74f168fa4bd725 Jan 31 09:07:20 crc kubenswrapper[4830]: I0131 09:07:20.188319 4830 generic.go:334] "Generic (PLEG): container finished" podID="2626e876-9148-4165-a735-a5a1733c014d" containerID="47af8268d39cbf0a91ceb501fdacfdeade88d3c14e2a04845fff879f21955d05" exitCode=0 Jan 31 09:07:20 crc kubenswrapper[4830]: I0131 09:07:20.188409 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56876" event={"ID":"2626e876-9148-4165-a735-a5a1733c014d","Type":"ContainerDied","Data":"47af8268d39cbf0a91ceb501fdacfdeade88d3c14e2a04845fff879f21955d05"} Jan 31 09:07:20 crc kubenswrapper[4830]: I0131 09:07:20.188445 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56876" event={"ID":"2626e876-9148-4165-a735-a5a1733c014d","Type":"ContainerStarted","Data":"8764f369ebc99cd67655ff0b47f5dcdd8b92dd7f9bba2b4bdc74f168fa4bd725"} Jan 31 09:07:20 crc kubenswrapper[4830]: I0131 09:07:20.191547 4830 generic.go:334] "Generic (PLEG): container finished" podID="35d308f6-fcf3-4b01-b26e-5c1848d6ee7d" containerID="32d76e5998ce686b805b32aa1d70d07c6bd7949b37905c9f2ea73bbf37c5ead3" exitCode=0 Jan 31 09:07:20 crc kubenswrapper[4830]: I0131 09:07:20.191651 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5pvp" event={"ID":"35d308f6-fcf3-4b01-b26e-5c1848d6ee7d","Type":"ContainerDied","Data":"32d76e5998ce686b805b32aa1d70d07c6bd7949b37905c9f2ea73bbf37c5ead3"} Jan 31 09:07:20 crc kubenswrapper[4830]: I0131 09:07:20.191750 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5pvp" event={"ID":"35d308f6-fcf3-4b01-b26e-5c1848d6ee7d","Type":"ContainerStarted","Data":"255206e0a3bb32a8ba184f71b13bb63507556e90f4d6cd138176e32fae10c12b"} Jan 31 09:07:20 crc kubenswrapper[4830]: I0131 09:07:20.261164 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36a7a51a-2662-4f3b-aa1d-d674cf676b9d" path="/var/lib/kubelet/pods/36a7a51a-2662-4f3b-aa1d-d674cf676b9d/volumes" Jan 31 
09:07:20 crc kubenswrapper[4830]: I0131 09:07:20.261836 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3868f465-887b-4580-8c17-293665785251" path="/var/lib/kubelet/pods/3868f465-887b-4580-8c17-293665785251/volumes" Jan 31 09:07:20 crc kubenswrapper[4830]: I0131 09:07:20.262584 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e020928-b063-4d3c-8992-e712fe3d1b1d" path="/var/lib/kubelet/pods/3e020928-b063-4d3c-8992-e712fe3d1b1d/volumes" Jan 31 09:07:20 crc kubenswrapper[4830]: I0131 09:07:20.264112 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db7a137a-b7f9-4446-85f6-ea0d2f0caedd" path="/var/lib/kubelet/pods/db7a137a-b7f9-4446-85f6-ea0d2f0caedd/volumes" Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.202189 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56876" event={"ID":"2626e876-9148-4165-a735-a5a1733c014d","Type":"ContainerStarted","Data":"49b6c441f4d00f09231e1888069251a42618fc6c15a7a73432cf2d80db42ba27"} Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.205203 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5pvp" event={"ID":"35d308f6-fcf3-4b01-b26e-5c1848d6ee7d","Type":"ContainerStarted","Data":"cf3a6b9f1474235455e13580e81c7387cca297d095395238dd1958fa6184c9d7"} Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.287492 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jwvm4"] Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.289507 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jwvm4" Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.300529 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.308339 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jwvm4"] Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.328152 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14550547-ce63-48cc-800e-b74235d0daa1-utilities\") pod \"certified-operators-jwvm4\" (UID: \"14550547-ce63-48cc-800e-b74235d0daa1\") " pod="openshift-marketplace/certified-operators-jwvm4" Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.328231 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14550547-ce63-48cc-800e-b74235d0daa1-catalog-content\") pod \"certified-operators-jwvm4\" (UID: \"14550547-ce63-48cc-800e-b74235d0daa1\") " pod="openshift-marketplace/certified-operators-jwvm4" Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.328368 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f64df\" (UniqueName: \"kubernetes.io/projected/14550547-ce63-48cc-800e-b74235d0daa1-kube-api-access-f64df\") pod \"certified-operators-jwvm4\" (UID: \"14550547-ce63-48cc-800e-b74235d0daa1\") " pod="openshift-marketplace/certified-operators-jwvm4" Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.429672 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f64df\" (UniqueName: 
\"kubernetes.io/projected/14550547-ce63-48cc-800e-b74235d0daa1-kube-api-access-f64df\") pod \"certified-operators-jwvm4\" (UID: \"14550547-ce63-48cc-800e-b74235d0daa1\") " pod="openshift-marketplace/certified-operators-jwvm4" Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.430332 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14550547-ce63-48cc-800e-b74235d0daa1-utilities\") pod \"certified-operators-jwvm4\" (UID: \"14550547-ce63-48cc-800e-b74235d0daa1\") " pod="openshift-marketplace/certified-operators-jwvm4" Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.430377 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14550547-ce63-48cc-800e-b74235d0daa1-catalog-content\") pod \"certified-operators-jwvm4\" (UID: \"14550547-ce63-48cc-800e-b74235d0daa1\") " pod="openshift-marketplace/certified-operators-jwvm4" Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.430993 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14550547-ce63-48cc-800e-b74235d0daa1-utilities\") pod \"certified-operators-jwvm4\" (UID: \"14550547-ce63-48cc-800e-b74235d0daa1\") " pod="openshift-marketplace/certified-operators-jwvm4" Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.431033 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14550547-ce63-48cc-800e-b74235d0daa1-catalog-content\") pod \"certified-operators-jwvm4\" (UID: \"14550547-ce63-48cc-800e-b74235d0daa1\") " pod="openshift-marketplace/certified-operators-jwvm4" Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.449336 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fcmv2"] Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.450352 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fcmv2" Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.453870 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.463993 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f64df\" (UniqueName: \"kubernetes.io/projected/14550547-ce63-48cc-800e-b74235d0daa1-kube-api-access-f64df\") pod \"certified-operators-jwvm4\" (UID: \"14550547-ce63-48cc-800e-b74235d0daa1\") " pod="openshift-marketplace/certified-operators-jwvm4" Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.485700 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fcmv2"] Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.532678 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srpfx\" (UniqueName: \"kubernetes.io/projected/c361702a-d6db-4925-809d-f08c6dd88a7d-kube-api-access-srpfx\") pod \"community-operators-fcmv2\" (UID: \"c361702a-d6db-4925-809d-f08c6dd88a7d\") " pod="openshift-marketplace/community-operators-fcmv2" Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.533043 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c361702a-d6db-4925-809d-f08c6dd88a7d-utilities\") pod \"community-operators-fcmv2\" (UID: \"c361702a-d6db-4925-809d-f08c6dd88a7d\") " pod="openshift-marketplace/community-operators-fcmv2" Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.533195 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c361702a-d6db-4925-809d-f08c6dd88a7d-catalog-content\") pod \"community-operators-fcmv2\" (UID: \"c361702a-d6db-4925-809d-f08c6dd88a7d\") " pod="openshift-marketplace/community-operators-fcmv2" Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.634000 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srpfx\" (UniqueName: \"kubernetes.io/projected/c361702a-d6db-4925-809d-f08c6dd88a7d-kube-api-access-srpfx\") pod \"community-operators-fcmv2\" (UID: \"c361702a-d6db-4925-809d-f08c6dd88a7d\") " pod="openshift-marketplace/community-operators-fcmv2" Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.634059 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c361702a-d6db-4925-809d-f08c6dd88a7d-utilities\") pod \"community-operators-fcmv2\" (UID: \"c361702a-d6db-4925-809d-f08c6dd88a7d\") " pod="openshift-marketplace/community-operators-fcmv2" Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.634085 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c361702a-d6db-4925-809d-f08c6dd88a7d-catalog-content\") pod \"community-operators-fcmv2\" (UID: \"c361702a-d6db-4925-809d-f08c6dd88a7d\") " pod="openshift-marketplace/community-operators-fcmv2" Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.634528 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c361702a-d6db-4925-809d-f08c6dd88a7d-catalog-content\") pod 
\"community-operators-fcmv2\" (UID: \"c361702a-d6db-4925-809d-f08c6dd88a7d\") " pod="openshift-marketplace/community-operators-fcmv2" Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.634646 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c361702a-d6db-4925-809d-f08c6dd88a7d-utilities\") pod \"community-operators-fcmv2\" (UID: \"c361702a-d6db-4925-809d-f08c6dd88a7d\") " pod="openshift-marketplace/community-operators-fcmv2" Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.660325 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srpfx\" (UniqueName: \"kubernetes.io/projected/c361702a-d6db-4925-809d-f08c6dd88a7d-kube-api-access-srpfx\") pod \"community-operators-fcmv2\" (UID: \"c361702a-d6db-4925-809d-f08c6dd88a7d\") " pod="openshift-marketplace/community-operators-fcmv2" Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.662654 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jwvm4" Jan 31 09:07:21 crc kubenswrapper[4830]: I0131 09:07:21.852096 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fcmv2" Jan 31 09:07:22 crc kubenswrapper[4830]: I0131 09:07:22.078757 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jwvm4"] Jan 31 09:07:22 crc kubenswrapper[4830]: W0131 09:07:22.086598 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14550547_ce63_48cc_800e_b74235d0daa1.slice/crio-caf44eede0752cc88db48903e9797866546d8724271113be224de4c353af6814 WatchSource:0}: Error finding container caf44eede0752cc88db48903e9797866546d8724271113be224de4c353af6814: Status 404 returned error can't find the container with id caf44eede0752cc88db48903e9797866546d8724271113be224de4c353af6814 Jan 31 09:07:22 crc kubenswrapper[4830]: I0131 09:07:22.225043 4830 generic.go:334] "Generic (PLEG): container finished" podID="2626e876-9148-4165-a735-a5a1733c014d" containerID="49b6c441f4d00f09231e1888069251a42618fc6c15a7a73432cf2d80db42ba27" exitCode=0 Jan 31 09:07:22 crc kubenswrapper[4830]: I0131 09:07:22.225121 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56876" event={"ID":"2626e876-9148-4165-a735-a5a1733c014d","Type":"ContainerDied","Data":"49b6c441f4d00f09231e1888069251a42618fc6c15a7a73432cf2d80db42ba27"} Jan 31 09:07:22 crc kubenswrapper[4830]: I0131 09:07:22.232078 4830 generic.go:334] "Generic (PLEG): container finished" podID="35d308f6-fcf3-4b01-b26e-5c1848d6ee7d" containerID="cf3a6b9f1474235455e13580e81c7387cca297d095395238dd1958fa6184c9d7" exitCode=0 Jan 31 09:07:22 crc kubenswrapper[4830]: I0131 09:07:22.232126 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5pvp" event={"ID":"35d308f6-fcf3-4b01-b26e-5c1848d6ee7d","Type":"ContainerDied","Data":"cf3a6b9f1474235455e13580e81c7387cca297d095395238dd1958fa6184c9d7"} Jan 31 09:07:22 crc kubenswrapper[4830]: I0131 09:07:22.241897 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jwvm4" event={"ID":"14550547-ce63-48cc-800e-b74235d0daa1","Type":"ContainerStarted","Data":"caf44eede0752cc88db48903e9797866546d8724271113be224de4c353af6814"} Jan 31 09:07:22 crc kubenswrapper[4830]: I0131 09:07:22.922817 4830 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fcmv2"] Jan 31 09:07:23 crc kubenswrapper[4830]: I0131 09:07:23.250059 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5pvp" event={"ID":"35d308f6-fcf3-4b01-b26e-5c1848d6ee7d","Type":"ContainerStarted","Data":"f3902cab012b2fd7a05dad2e119debffa319f7939ade666477b0cf8bf2859a4a"} Jan 31 09:07:23 crc kubenswrapper[4830]: I0131 09:07:23.251562 4830 generic.go:334] "Generic (PLEG): container finished" podID="14550547-ce63-48cc-800e-b74235d0daa1" containerID="7657245ca041d34852588df17baa72d3d9c73f7c124fa4e167121579b84974be" exitCode=0 Jan 31 09:07:23 crc kubenswrapper[4830]: I0131 09:07:23.251678 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jwvm4" event={"ID":"14550547-ce63-48cc-800e-b74235d0daa1","Type":"ContainerDied","Data":"7657245ca041d34852588df17baa72d3d9c73f7c124fa4e167121579b84974be"} Jan 31 09:07:23 crc kubenswrapper[4830]: I0131 09:07:23.257259 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56876" event={"ID":"2626e876-9148-4165-a735-a5a1733c014d","Type":"ContainerStarted","Data":"f11441cbba9561c6c57f871491bfd86946bb4556451df5f1b4cd312425394af7"} Jan 31 09:07:23 crc kubenswrapper[4830]: I0131 09:07:23.260294 4830 generic.go:334] "Generic (PLEG): container finished" podID="c361702a-d6db-4925-809d-f08c6dd88a7d" containerID="f925ae7dce9abb6a7bfe0e188023c45411e117c4636845b3c881f52b42d40411" exitCode=0 Jan 31 09:07:23 crc kubenswrapper[4830]: I0131 09:07:23.260342 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fcmv2" event={"ID":"c361702a-d6db-4925-809d-f08c6dd88a7d","Type":"ContainerDied","Data":"f925ae7dce9abb6a7bfe0e188023c45411e117c4636845b3c881f52b42d40411"} Jan 31 09:07:23 crc kubenswrapper[4830]: I0131 09:07:23.260369 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fcmv2" event={"ID":"c361702a-d6db-4925-809d-f08c6dd88a7d","Type":"ContainerStarted","Data":"73c93b41b83ae37774cc7df4393a480caa3740a31b99d23c1669b765eda11fe1"} Jan 31 09:07:23 crc kubenswrapper[4830]: I0131 09:07:23.275319 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g5pvp" podStartSLOduration=2.7198760589999997 podStartE2EDuration="5.275301586s" podCreationTimestamp="2026-01-31 09:07:18 +0000 UTC" firstStartedPulling="2026-01-31 09:07:20.193139225 +0000 UTC m=+384.686501667" lastFinishedPulling="2026-01-31 09:07:22.748564752 +0000 UTC m=+387.241927194" observedRunningTime="2026-01-31 09:07:23.269663114 +0000 UTC m=+387.763025566" watchObservedRunningTime="2026-01-31 09:07:23.275301586 +0000 UTC m=+387.768664028" Jan 31 09:07:23 crc kubenswrapper[4830]: I0131 09:07:23.291809 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-56876" podStartSLOduration=1.8269823459999999 podStartE2EDuration="4.29177936s" podCreationTimestamp="2026-01-31 09:07:19 +0000 UTC" firstStartedPulling="2026-01-31 09:07:20.190278408 +0000 UTC m=+384.683640850" lastFinishedPulling="2026-01-31 09:07:22.655075422 +0000 UTC m=+387.148437864" observedRunningTime="2026-01-31 09:07:23.2907891 +0000 UTC m=+387.784151532" watchObservedRunningTime="2026-01-31 09:07:23.29177936 +0000 UTC m=+387.785141822" Jan 31 09:07:24 crc kubenswrapper[4830]: I0131 
09:07:24.272681 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jwvm4" event={"ID":"14550547-ce63-48cc-800e-b74235d0daa1","Type":"ContainerStarted","Data":"40bf8a7d9c296dc5ca7102dc2a5f8d438c23cbdf691a1bac8f0e5e0e4150c196"} Jan 31 09:07:24 crc kubenswrapper[4830]: I0131 09:07:24.275406 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fcmv2" event={"ID":"c361702a-d6db-4925-809d-f08c6dd88a7d","Type":"ContainerStarted","Data":"99d6829cb7c462695fef6867df143f325a45954fab0ffad4333da26ade55533d"} Jan 31 09:07:25 crc kubenswrapper[4830]: I0131 09:07:25.282424 4830 generic.go:334] "Generic (PLEG): container finished" podID="c361702a-d6db-4925-809d-f08c6dd88a7d" containerID="99d6829cb7c462695fef6867df143f325a45954fab0ffad4333da26ade55533d" exitCode=0 Jan 31 09:07:25 crc kubenswrapper[4830]: I0131 09:07:25.282514 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fcmv2" event={"ID":"c361702a-d6db-4925-809d-f08c6dd88a7d","Type":"ContainerDied","Data":"99d6829cb7c462695fef6867df143f325a45954fab0ffad4333da26ade55533d"} Jan 31 09:07:25 crc kubenswrapper[4830]: I0131 09:07:25.290077 4830 generic.go:334] "Generic (PLEG): container finished" podID="14550547-ce63-48cc-800e-b74235d0daa1" containerID="40bf8a7d9c296dc5ca7102dc2a5f8d438c23cbdf691a1bac8f0e5e0e4150c196" exitCode=0 Jan 31 09:07:25 crc kubenswrapper[4830]: I0131 09:07:25.290134 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jwvm4" event={"ID":"14550547-ce63-48cc-800e-b74235d0daa1","Type":"ContainerDied","Data":"40bf8a7d9c296dc5ca7102dc2a5f8d438c23cbdf691a1bac8f0e5e0e4150c196"} Jan 31 09:07:25 crc kubenswrapper[4830]: I0131 09:07:25.290169 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jwvm4" event={"ID":"14550547-ce63-48cc-800e-b74235d0daa1","Type":"ContainerStarted","Data":"83d53b8dc5ef1de88fb6035c22e2a2cf67146c16f93c7ba5c2795bd39e9c58c1"} Jan 31 09:07:25 crc kubenswrapper[4830]: I0131 09:07:25.328409 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jwvm4" podStartSLOduration=2.878266432 podStartE2EDuration="4.328377004s" podCreationTimestamp="2026-01-31 09:07:21 +0000 UTC" firstStartedPulling="2026-01-31 09:07:23.253117527 +0000 UTC m=+387.746479989" lastFinishedPulling="2026-01-31 09:07:24.703228119 +0000 UTC m=+389.196590561" observedRunningTime="2026-01-31 09:07:25.327128656 +0000 UTC m=+389.820491098" watchObservedRunningTime="2026-01-31 09:07:25.328377004 +0000 UTC m=+389.821739446" Jan 31 09:07:26 crc kubenswrapper[4830]: I0131 09:07:26.300675 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fcmv2" event={"ID":"c361702a-d6db-4925-809d-f08c6dd88a7d","Type":"ContainerStarted","Data":"d90335abfa9207b4d4d63cf2f5f0c9a8b085e06ea6f5f12d88ddd096f3e7f6f8"} Jan 31 09:07:26 crc kubenswrapper[4830]: I0131 09:07:26.323839 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fcmv2" podStartSLOduration=2.871350671 podStartE2EDuration="5.323820378s" podCreationTimestamp="2026-01-31 09:07:21 +0000 UTC" firstStartedPulling="2026-01-31 09:07:23.261651578 +0000 UTC m=+387.755014020" lastFinishedPulling="2026-01-31 09:07:25.714121295 +0000 UTC m=+390.207483727" observedRunningTime="2026-01-31 09:07:26.321220758 
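The `pod_startup_latency_tracker.go:104` entries make the kubelet's two startup metrics easy to verify by hand: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that value minus the image-pull window (lastFinishedPulling minus firstStartedPulling). For redhat-marketplace-g5pvp: 09:07:23.275301586 minus 09:07:18 gives 5.275301586s, and 5.275301586 minus (09:07:22.748564752 minus 09:07:20.193139225 = 2.555425527) gives 2.719876059s, both matching the logged values. A short reproduction of that arithmetic (timestamps truncated to microseconds because strptime's %f parses at most six fractional digits):

```python
from datetime import datetime

# Reproduce the kubelet's startup-latency math for redhat-marketplace-g5pvp
# from the timestamps recorded in the entry above.
fmt = "%Y-%m-%d %H:%M:%S.%f %z"
created   = datetime.strptime("2026-01-31 09:07:18.000000 +0000", fmt)
pull_from = datetime.strptime("2026-01-31 09:07:20.193139 +0000", fmt)
pull_to   = datetime.strptime("2026-01-31 09:07:22.748564 +0000", fmt)
observed  = datetime.strptime("2026-01-31 09:07:23.275301 +0000", fmt)

e2e = (observed - created).total_seconds()          # ~5.275s, podStartE2EDuration
slo = e2e - (pull_to - pull_from).total_seconds()   # ~2.720s, podStartSLOduration
print(f"E2E {e2e:.3f}s, SLO {slo:.3f}s")
```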
Jan 31 09:07:29 crc kubenswrapper[4830]: I0131 09:07:29.202526 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g5pvp"
Jan 31 09:07:29 crc kubenswrapper[4830]: I0131 09:07:29.203158 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g5pvp"
Jan 31 09:07:29 crc kubenswrapper[4830]: I0131 09:07:29.262484 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g5pvp"
Jan 31 09:07:29 crc kubenswrapper[4830]: I0131 09:07:29.371426 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g5pvp"
Jan 31 09:07:29 crc kubenswrapper[4830]: I0131 09:07:29.375624 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-56876"
Jan 31 09:07:29 crc kubenswrapper[4830]: I0131 09:07:29.375664 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-56876"
Jan 31 09:07:29 crc kubenswrapper[4830]: I0131 09:07:29.422669 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-56876"
Jan 31 09:07:30 crc kubenswrapper[4830]: I0131 09:07:30.374906 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-56876"
Jan 31 09:07:31 crc kubenswrapper[4830]: I0131 09:07:31.663769 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jwvm4"
Jan 31 09:07:31 crc kubenswrapper[4830]: I0131 09:07:31.664189 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jwvm4"
Jan 31 09:07:31 crc kubenswrapper[4830]: I0131 09:07:31.709075 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jwvm4"
Jan 31 09:07:31 crc kubenswrapper[4830]: I0131 09:07:31.853163 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fcmv2"
Jan 31 09:07:31 crc kubenswrapper[4830]: I0131 09:07:31.853246 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fcmv2"
Jan 31 09:07:31 crc kubenswrapper[4830]: I0131 09:07:31.912868 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fcmv2"
Jan 31 09:07:32 crc kubenswrapper[4830]: I0131 09:07:32.387246 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jwvm4"
Jan 31 09:07:32 crc kubenswrapper[4830]: I0131 09:07:32.398127 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fcmv2"
Jan 31 09:07:33 crc kubenswrapper[4830]: I0131 09:07:33.490020 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v"
Jan 31 09:07:33 crc kubenswrapper[4830]: I0131 09:07:33.574181 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7m8b7"]
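Each catalog pod above walks the same probe sequence: the first startup probe fires before the registry server is listening (startup/unhealthy), a later attempt succeeds (startup/started), and only then does a readiness result appear (readiness/ready); the interleaved readiness lines with an empty status appear to record the readiness result being reset while the startup probe still gates it. A sketch that condenses these transitions per pod, under the same assumed helper name and line-per-entry input as before:

```python
import re

# Summarize SyncLoop probe transitions per pod from a journal dump.
PROBE_RE = re.compile(r'probe="(\w+)" status="(\w*)" pod="([^"]+)"')

def probe_history(lines):
    hist = {}
    for line in lines:
        if 'SyncLoop (probe)' not in line:
            continue
        if (m := PROBE_RE.search(line)):
            probe, status, pod = m.groups()
            hist.setdefault(pod, []).append((probe, status or "<unset>"))
    return hist

# For the entries above this yields, per catalog pod, roughly:
# [('startup', 'unhealthy'), ('readiness', '<unset>'),
#  ('startup', 'started'), ('readiness', 'ready')]
```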
Jan 31 09:07:44 crc kubenswrapper[4830]: I0131 09:07:44.353617 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 09:07:44 crc kubenswrapper[4830]: I0131 09:07:44.354613 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 09:07:44 crc kubenswrapper[4830]: I0131 09:07:44.354678 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd"
Jan 31 09:07:44 crc kubenswrapper[4830]: I0131 09:07:44.355459 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"daea99fc983195352b8e4718b50bf7bbcdbf16fe4b6ceb22c6175dbbdd6d0099"} pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 31 09:07:44 crc kubenswrapper[4830]: I0131 09:07:44.355512 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" containerID="cri-o://daea99fc983195352b8e4718b50bf7bbcdbf16fe4b6ceb22c6175dbbdd6d0099" gracePeriod=600
Jan 31 09:07:45 crc kubenswrapper[4830]: I0131 09:07:45.426316 4830 generic.go:334] "Generic (PLEG): container finished" podID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerID="daea99fc983195352b8e4718b50bf7bbcdbf16fe4b6ceb22c6175dbbdd6d0099" exitCode=0
Jan 31 09:07:45 crc kubenswrapper[4830]: I0131 09:07:45.426407 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerDied","Data":"daea99fc983195352b8e4718b50bf7bbcdbf16fe4b6ceb22c6175dbbdd6d0099"}
Jan 31 09:07:45 crc kubenswrapper[4830]: I0131 09:07:45.426863 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerStarted","Data":"40de0b135d2e6436aca04cec9e087aebbf22156339d1945255baa4aa59e53756"}
Jan 31 09:07:45 crc kubenswrapper[4830]: I0131 09:07:45.426896 4830 scope.go:117] "RemoveContainer" containerID="7cfb7ee25dc18bb1412f69e9bbc3a9055029ed188a12baa5ceef7d5445ad597c"
Jan 31 09:07:50 crc kubenswrapper[4830]: I0131 09:07:50.732554 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-9kfdg"]
Jan 31 09:07:50 crc kubenswrapper[4830]: I0131 09:07:50.735982 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9kfdg"
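This block is a complete liveness-restart cycle for the machine-config-daemon: the probe's GET against http://127.0.0.1:8798/health is refused, the kubelet marks the container for restart and kills it with the logged 600s grace period, PLEG reports ContainerDied (exit 0, consistent with a clean shutdown on SIGTERM) followed by ContainerStarted for the replacement, and RemoveContainer garbage-collects an earlier dead instance (7cfb7ee2…). Roughly what the HTTP liveness check amounts to, as a sketch; the kubelet itself does this in Go with its own client, so this is only an illustration:

```python
import urllib.request, urllib.error

# Illustrative liveness check against the endpoint named in the log entry
# above. Any 2xx/3xx response counts as healthy; a refused connection (as
# logged here) counts as a failed probe.
def probe(url="http://127.0.0.1:8798/health", timeout=1.0):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False  # e.g. 'connect: connection refused'
```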
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9kfdg" Jan 31 09:07:50 crc kubenswrapper[4830]: I0131 09:07:50.741547 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-9kfdg"] Jan 31 09:07:50 crc kubenswrapper[4830]: I0131 09:07:50.742471 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Jan 31 09:07:50 crc kubenswrapper[4830]: I0131 09:07:50.742836 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Jan 31 09:07:50 crc kubenswrapper[4830]: I0131 09:07:50.743907 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Jan 31 09:07:50 crc kubenswrapper[4830]: I0131 09:07:50.744139 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Jan 31 09:07:50 crc kubenswrapper[4830]: I0131 09:07:50.744695 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Jan 31 09:07:50 crc kubenswrapper[4830]: I0131 09:07:50.897945 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/067955de-e422-42aa-801f-4000bccc81aa-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-9kfdg\" (UID: \"067955de-e422-42aa-801f-4000bccc81aa\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9kfdg" Jan 31 09:07:50 crc kubenswrapper[4830]: I0131 09:07:50.898020 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpv5s\" (UniqueName: \"kubernetes.io/projected/067955de-e422-42aa-801f-4000bccc81aa-kube-api-access-jpv5s\") pod \"cluster-monitoring-operator-6d5b84845-9kfdg\" (UID: \"067955de-e422-42aa-801f-4000bccc81aa\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9kfdg" Jan 31 09:07:50 crc kubenswrapper[4830]: I0131 09:07:50.898092 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/067955de-e422-42aa-801f-4000bccc81aa-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-9kfdg\" (UID: \"067955de-e422-42aa-801f-4000bccc81aa\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9kfdg" Jan 31 09:07:50 crc kubenswrapper[4830]: I0131 09:07:50.999576 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/067955de-e422-42aa-801f-4000bccc81aa-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-9kfdg\" (UID: \"067955de-e422-42aa-801f-4000bccc81aa\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9kfdg" Jan 31 09:07:50 crc kubenswrapper[4830]: I0131 09:07:50.999688 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/067955de-e422-42aa-801f-4000bccc81aa-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-9kfdg\" (UID: \"067955de-e422-42aa-801f-4000bccc81aa\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9kfdg" Jan 31 09:07:50 crc kubenswrapper[4830]: I0131 
Jan 31 09:07:50 crc kubenswrapper[4830]: I0131 09:07:50.999716 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpv5s\" (UniqueName: \"kubernetes.io/projected/067955de-e422-42aa-801f-4000bccc81aa-kube-api-access-jpv5s\") pod \"cluster-monitoring-operator-6d5b84845-9kfdg\" (UID: \"067955de-e422-42aa-801f-4000bccc81aa\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9kfdg"
Jan 31 09:07:51 crc kubenswrapper[4830]: I0131 09:07:51.001627 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/067955de-e422-42aa-801f-4000bccc81aa-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-9kfdg\" (UID: \"067955de-e422-42aa-801f-4000bccc81aa\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9kfdg"
Jan 31 09:07:51 crc kubenswrapper[4830]: I0131 09:07:51.009361 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/067955de-e422-42aa-801f-4000bccc81aa-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-9kfdg\" (UID: \"067955de-e422-42aa-801f-4000bccc81aa\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9kfdg"
Jan 31 09:07:51 crc kubenswrapper[4830]: I0131 09:07:51.018925 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpv5s\" (UniqueName: \"kubernetes.io/projected/067955de-e422-42aa-801f-4000bccc81aa-kube-api-access-jpv5s\") pod \"cluster-monitoring-operator-6d5b84845-9kfdg\" (UID: \"067955de-e422-42aa-801f-4000bccc81aa\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9kfdg"
Jan 31 09:07:51 crc kubenswrapper[4830]: I0131 09:07:51.071217 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9kfdg"
Jan 31 09:07:51 crc kubenswrapper[4830]: I0131 09:07:51.483504 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-9kfdg"]
Jan 31 09:07:52 crc kubenswrapper[4830]: I0131 09:07:52.488160 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9kfdg" event={"ID":"067955de-e422-42aa-801f-4000bccc81aa","Type":"ContainerStarted","Data":"c37561452eb6c78ad41bd65ea3873fd61b02a8031df0e13104e958b560d717b6"}
Jan 31 09:07:53 crc kubenswrapper[4830]: I0131 09:07:53.494740 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9kfdg" event={"ID":"067955de-e422-42aa-801f-4000bccc81aa","Type":"ContainerStarted","Data":"c6bb9ecb44e2385241f06301a8a7f68b1a6201940999adbce75305b20d57d9a2"}
Jan 31 09:07:53 crc kubenswrapper[4830]: I0131 09:07:53.929943 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9kfdg" podStartSLOduration=2.102835902 podStartE2EDuration="3.929876245s" podCreationTimestamp="2026-01-31 09:07:50 +0000 UTC" firstStartedPulling="2026-01-31 09:07:51.494645925 +0000 UTC m=+415.988008367" lastFinishedPulling="2026-01-31 09:07:53.321686268 +0000 UTC m=+417.815048710" observedRunningTime="2026-01-31 09:07:53.514610591 +0000 UTC m=+418.007973043" watchObservedRunningTime="2026-01-31 09:07:53.929876245 +0000 UTC m=+418.423238687"
Jan 31 09:07:53 crc kubenswrapper[4830]: I0131 09:07:53.933626 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8"]
Jan 31 09:07:53 crc kubenswrapper[4830]: I0131 09:07:53.934784 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8"
Jan 31 09:07:53 crc kubenswrapper[4830]: I0131 09:07:53.937987 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-zfk4q"
Jan 31 09:07:53 crc kubenswrapper[4830]: I0131 09:07:53.942953 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8"]
Jan 31 09:07:53 crc kubenswrapper[4830]: I0131 09:07:53.943034 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Jan 31 09:07:54 crc kubenswrapper[4830]: I0131 09:07:54.043324 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/48688d73-57bb-4105-8116-4853be571b01-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-dbkt8\" (UID: \"48688d73-57bb-4105-8116-4853be571b01\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8"
Jan 31 09:07:54 crc kubenswrapper[4830]: I0131 09:07:54.144788 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/48688d73-57bb-4105-8116-4853be571b01-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-dbkt8\" (UID: \"48688d73-57bb-4105-8116-4853be571b01\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8"
Jan 31 09:07:54 crc kubenswrapper[4830]: I0131 09:07:54.156227 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/48688d73-57bb-4105-8116-4853be571b01-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-dbkt8\" (UID: \"48688d73-57bb-4105-8116-4853be571b01\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8"
Jan 31 09:07:54 crc kubenswrapper[4830]: I0131 09:07:54.258051 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8"
Jan 31 09:07:54 crc kubenswrapper[4830]: I0131 09:07:54.657044 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8"]
Jan 31 09:07:54 crc kubenswrapper[4830]: W0131 09:07:54.667751 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48688d73_57bb_4105_8116_4853be571b01.slice/crio-327b9ab14d3eab40d6aebd31614707ea5d2ccbed38d0500a2f4a88ce4cd16456 WatchSource:0}: Error finding container 327b9ab14d3eab40d6aebd31614707ea5d2ccbed38d0500a2f4a88ce4cd16456: Status 404 returned error can't find the container with id 327b9ab14d3eab40d6aebd31614707ea5d2ccbed38d0500a2f4a88ce4cd16456
Jan 31 09:07:55 crc kubenswrapper[4830]: I0131 09:07:55.508320 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8" event={"ID":"48688d73-57bb-4105-8116-4853be571b01","Type":"ContainerStarted","Data":"327b9ab14d3eab40d6aebd31614707ea5d2ccbed38d0500a2f4a88ce4cd16456"}
Jan 31 09:07:56 crc kubenswrapper[4830]: I0131 09:07:56.526512 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8" event={"ID":"48688d73-57bb-4105-8116-4853be571b01","Type":"ContainerStarted","Data":"54aa8ec469ea3a966faa7ccc7d68b904d98cfe2c3172796d1eb2782e8f440f84"}
Jan 31 09:07:56 crc kubenswrapper[4830]: I0131 09:07:56.527086 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8"
Jan 31 09:07:56 crc kubenswrapper[4830]: I0131 09:07:56.535848 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8"
Jan 31 09:07:56 crc kubenswrapper[4830]: I0131 09:07:56.550011 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8" podStartSLOduration=2.391868933 podStartE2EDuration="3.549980151s" podCreationTimestamp="2026-01-31 09:07:53 +0000 UTC" firstStartedPulling="2026-01-31 09:07:54.671134562 +0000 UTC m=+419.164496994" lastFinishedPulling="2026-01-31 09:07:55.82924576 +0000 UTC m=+420.322608212" observedRunningTime="2026-01-31 09:07:56.542939794 +0000 UTC m=+421.036302236" watchObservedRunningTime="2026-01-31 09:07:56.549980151 +0000 UTC m=+421.043342593"
Jan 31 09:07:56 crc kubenswrapper[4830]: I0131 09:07:56.987926 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-qbcbf"]
Jan 31 09:07:56 crc kubenswrapper[4830]: I0131 09:07:56.988949 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-qbcbf"
Jan 31 09:07:56 crc kubenswrapper[4830]: I0131 09:07:56.991098 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-bmv7s"
Jan 31 09:07:56 crc kubenswrapper[4830]: I0131 09:07:56.991323 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Jan 31 09:07:56 crc kubenswrapper[4830]: I0131 09:07:56.991410 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Jan 31 09:07:56 crc kubenswrapper[4830]: I0131 09:07:56.991441 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Jan 31 09:07:56 crc kubenswrapper[4830]: I0131 09:07:56.997575 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-qbcbf"]
Jan 31 09:07:57 crc kubenswrapper[4830]: I0131 09:07:57.090663 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/54a67971-16d5-45c7-ae4e-f2fbee97a059-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-qbcbf\" (UID: \"54a67971-16d5-45c7-ae4e-f2fbee97a059\") " pod="openshift-monitoring/prometheus-operator-db54df47d-qbcbf"
Jan 31 09:07:57 crc kubenswrapper[4830]: I0131 09:07:57.091149 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79xkw\" (UniqueName: \"kubernetes.io/projected/54a67971-16d5-45c7-ae4e-f2fbee97a059-kube-api-access-79xkw\") pod \"prometheus-operator-db54df47d-qbcbf\" (UID: \"54a67971-16d5-45c7-ae4e-f2fbee97a059\") " pod="openshift-monitoring/prometheus-operator-db54df47d-qbcbf"
Jan 31 09:07:57 crc kubenswrapper[4830]: I0131 09:07:57.091174 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/54a67971-16d5-45c7-ae4e-f2fbee97a059-metrics-client-ca\") pod \"prometheus-operator-db54df47d-qbcbf\" (UID: \"54a67971-16d5-45c7-ae4e-f2fbee97a059\") " pod="openshift-monitoring/prometheus-operator-db54df47d-qbcbf"
Jan 31 09:07:57 crc kubenswrapper[4830]: I0131 09:07:57.091206 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/54a67971-16d5-45c7-ae4e-f2fbee97a059-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-qbcbf\" (UID: \"54a67971-16d5-45c7-ae4e-f2fbee97a059\") " pod="openshift-monitoring/prometheus-operator-db54df47d-qbcbf"
Jan 31 09:07:57 crc kubenswrapper[4830]: I0131 09:07:57.193098 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/54a67971-16d5-45c7-ae4e-f2fbee97a059-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-qbcbf\" (UID: \"54a67971-16d5-45c7-ae4e-f2fbee97a059\") " pod="openshift-monitoring/prometheus-operator-db54df47d-qbcbf"
Jan 31 09:07:57 crc kubenswrapper[4830]: I0131 09:07:57.193214 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79xkw\" (UniqueName: \"kubernetes.io/projected/54a67971-16d5-45c7-ae4e-f2fbee97a059-kube-api-access-79xkw\") pod \"prometheus-operator-db54df47d-qbcbf\" (UID: \"54a67971-16d5-45c7-ae4e-f2fbee97a059\") " pod="openshift-monitoring/prometheus-operator-db54df47d-qbcbf"
Jan 31 09:07:57 crc kubenswrapper[4830]: I0131 09:07:57.193257 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/54a67971-16d5-45c7-ae4e-f2fbee97a059-metrics-client-ca\") pod \"prometheus-operator-db54df47d-qbcbf\" (UID: \"54a67971-16d5-45c7-ae4e-f2fbee97a059\") " pod="openshift-monitoring/prometheus-operator-db54df47d-qbcbf"
Jan 31 09:07:57 crc kubenswrapper[4830]: I0131 09:07:57.193293 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/54a67971-16d5-45c7-ae4e-f2fbee97a059-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-qbcbf\" (UID: \"54a67971-16d5-45c7-ae4e-f2fbee97a059\") " pod="openshift-monitoring/prometheus-operator-db54df47d-qbcbf"
Jan 31 09:07:57 crc kubenswrapper[4830]: I0131 09:07:57.194410 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/54a67971-16d5-45c7-ae4e-f2fbee97a059-metrics-client-ca\") pod \"prometheus-operator-db54df47d-qbcbf\" (UID: \"54a67971-16d5-45c7-ae4e-f2fbee97a059\") " pod="openshift-monitoring/prometheus-operator-db54df47d-qbcbf"
Jan 31 09:07:57 crc kubenswrapper[4830]: I0131 09:07:57.201986 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/54a67971-16d5-45c7-ae4e-f2fbee97a059-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-qbcbf\" (UID: \"54a67971-16d5-45c7-ae4e-f2fbee97a059\") " pod="openshift-monitoring/prometheus-operator-db54df47d-qbcbf"
Jan 31 09:07:57 crc kubenswrapper[4830]: I0131 09:07:57.202002 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/54a67971-16d5-45c7-ae4e-f2fbee97a059-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-qbcbf\" (UID: \"54a67971-16d5-45c7-ae4e-f2fbee97a059\") " pod="openshift-monitoring/prometheus-operator-db54df47d-qbcbf"
Jan 31 09:07:57 crc kubenswrapper[4830]: I0131 09:07:57.220155 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79xkw\" (UniqueName: \"kubernetes.io/projected/54a67971-16d5-45c7-ae4e-f2fbee97a059-kube-api-access-79xkw\") pod \"prometheus-operator-db54df47d-qbcbf\" (UID: \"54a67971-16d5-45c7-ae4e-f2fbee97a059\") " pod="openshift-monitoring/prometheus-operator-db54df47d-qbcbf"
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-qbcbf" Jan 31 09:07:57 crc kubenswrapper[4830]: I0131 09:07:57.555017 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-qbcbf"] Jan 31 09:07:57 crc kubenswrapper[4830]: W0131 09:07:57.566298 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54a67971_16d5_45c7_ae4e_f2fbee97a059.slice/crio-7f1052f38b5034e07b7e7eaa1c8a888e5609eac45ea1c62d95d536cc1524a1b7 WatchSource:0}: Error finding container 7f1052f38b5034e07b7e7eaa1c8a888e5609eac45ea1c62d95d536cc1524a1b7: Status 404 returned error can't find the container with id 7f1052f38b5034e07b7e7eaa1c8a888e5609eac45ea1c62d95d536cc1524a1b7 Jan 31 09:07:58 crc kubenswrapper[4830]: I0131 09:07:58.541857 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-qbcbf" event={"ID":"54a67971-16d5-45c7-ae4e-f2fbee97a059","Type":"ContainerStarted","Data":"7f1052f38b5034e07b7e7eaa1c8a888e5609eac45ea1c62d95d536cc1524a1b7"} Jan 31 09:07:58 crc kubenswrapper[4830]: I0131 09:07:58.635571 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" podUID="acf2d685-5b8b-41ab-b91d-2e3b58b8b584" containerName="registry" containerID="cri-o://f0b1a633cecf0b8973545b62836919c01720cefd12d03a417cdf2625965668c4" gracePeriod=30 Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.242092 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.397087 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqr7j\" (UniqueName: \"kubernetes.io/projected/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-kube-api-access-zqr7j\") pod \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.397152 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-installation-pull-secrets\") pod \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.397190 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-trusted-ca\") pod \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.397408 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.397459 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-registry-certificates\") pod \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " Jan 
31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.397487 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-registry-tls\") pod \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.397522 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-ca-trust-extracted\") pod \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.397575 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-bound-sa-token\") pod \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\" (UID: \"acf2d685-5b8b-41ab-b91d-2e3b58b8b584\") " Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.398507 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "acf2d685-5b8b-41ab-b91d-2e3b58b8b584" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.398903 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "acf2d685-5b8b-41ab-b91d-2e3b58b8b584" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.407599 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "acf2d685-5b8b-41ab-b91d-2e3b58b8b584" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.407771 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "acf2d685-5b8b-41ab-b91d-2e3b58b8b584" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.408956 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-kube-api-access-zqr7j" (OuterVolumeSpecName: "kube-api-access-zqr7j") pod "acf2d685-5b8b-41ab-b91d-2e3b58b8b584" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584"). InnerVolumeSpecName "kube-api-access-zqr7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.410176 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "acf2d685-5b8b-41ab-b91d-2e3b58b8b584" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.415997 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "acf2d685-5b8b-41ab-b91d-2e3b58b8b584" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.418456 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "acf2d685-5b8b-41ab-b91d-2e3b58b8b584" (UID: "acf2d685-5b8b-41ab-b91d-2e3b58b8b584"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.498446 4830 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.498485 4830 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.498495 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqr7j\" (UniqueName: \"kubernetes.io/projected/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-kube-api-access-zqr7j\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.498505 4830 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.498513 4830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.498521 4830 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.498529 4830 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/acf2d685-5b8b-41ab-b91d-2e3b58b8b584-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.548627 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-qbcbf" 
event={"ID":"54a67971-16d5-45c7-ae4e-f2fbee97a059","Type":"ContainerStarted","Data":"868eabae61522c0f3f7acf4f904a3d05a4385edb0c59eed13a1355d64038597a"} Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.548763 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-qbcbf" event={"ID":"54a67971-16d5-45c7-ae4e-f2fbee97a059","Type":"ContainerStarted","Data":"855dcfb94bab539e1784db266997c37f2cf1436c6c04c27fb7e6cce823aec624"} Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.550339 4830 generic.go:334] "Generic (PLEG): container finished" podID="acf2d685-5b8b-41ab-b91d-2e3b58b8b584" containerID="f0b1a633cecf0b8973545b62836919c01720cefd12d03a417cdf2625965668c4" exitCode=0 Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.550386 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.550416 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" event={"ID":"acf2d685-5b8b-41ab-b91d-2e3b58b8b584","Type":"ContainerDied","Data":"f0b1a633cecf0b8973545b62836919c01720cefd12d03a417cdf2625965668c4"} Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.550789 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7m8b7" event={"ID":"acf2d685-5b8b-41ab-b91d-2e3b58b8b584","Type":"ContainerDied","Data":"47e7e8e13e8a20edf26db4ddea741c5d988d3f9c28954abc786b95c66dda7131"} Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.550862 4830 scope.go:117] "RemoveContainer" containerID="f0b1a633cecf0b8973545b62836919c01720cefd12d03a417cdf2625965668c4" Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.574396 4830 scope.go:117] "RemoveContainer" containerID="f0b1a633cecf0b8973545b62836919c01720cefd12d03a417cdf2625965668c4" Jan 31 09:07:59 crc kubenswrapper[4830]: E0131 09:07:59.575202 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0b1a633cecf0b8973545b62836919c01720cefd12d03a417cdf2625965668c4\": container with ID starting with f0b1a633cecf0b8973545b62836919c01720cefd12d03a417cdf2625965668c4 not found: ID does not exist" containerID="f0b1a633cecf0b8973545b62836919c01720cefd12d03a417cdf2625965668c4" Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.575272 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0b1a633cecf0b8973545b62836919c01720cefd12d03a417cdf2625965668c4"} err="failed to get container status \"f0b1a633cecf0b8973545b62836919c01720cefd12d03a417cdf2625965668c4\": rpc error: code = NotFound desc = could not find container \"f0b1a633cecf0b8973545b62836919c01720cefd12d03a417cdf2625965668c4\": container with ID starting with f0b1a633cecf0b8973545b62836919c01720cefd12d03a417cdf2625965668c4 not found: ID does not exist" Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.575754 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-qbcbf" podStartSLOduration=2.161759463 podStartE2EDuration="3.575716498s" podCreationTimestamp="2026-01-31 09:07:56 +0000 UTC" firstStartedPulling="2026-01-31 09:07:57.570089143 +0000 UTC m=+422.063451585" lastFinishedPulling="2026-01-31 09:07:58.984046178 +0000 UTC m=+423.477408620" observedRunningTime="2026-01-31 09:07:59.569983859 
+0000 UTC m=+424.063346301" watchObservedRunningTime="2026-01-31 09:07:59.575716498 +0000 UTC m=+424.069078940" Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.590350 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7m8b7"] Jan 31 09:07:59 crc kubenswrapper[4830]: I0131 09:07:59.596917 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7m8b7"] Jan 31 09:08:00 crc kubenswrapper[4830]: I0131 09:08:00.266935 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acf2d685-5b8b-41ab-b91d-2e3b58b8b584" path="/var/lib/kubelet/pods/acf2d685-5b8b-41ab-b91d-2e3b58b8b584/volumes" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.339763 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-smxhh"] Jan 31 09:08:01 crc kubenswrapper[4830]: E0131 09:08:01.340063 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acf2d685-5b8b-41ab-b91d-2e3b58b8b584" containerName="registry" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.340081 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="acf2d685-5b8b-41ab-b91d-2e3b58b8b584" containerName="registry" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.340210 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="acf2d685-5b8b-41ab-b91d-2e3b58b8b584" containerName="registry" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.341209 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-smxhh" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.347509 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.350512 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.353171 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-9smrb" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.355999 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-smxhh"] Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.390266 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp"] Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.391493 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.394558 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-54dwk" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.395032 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.395195 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.395417 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.406891 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-26zhs"] Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.408074 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-26zhs" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.410099 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-q7pt4" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.413544 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.420976 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.427857 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/e74b4b1e-57f8-419d-b270-b6aa853614b5-root\") pod \"node-exporter-26zhs\" (UID: \"e74b4b1e-57f8-419d-b270-b6aa853614b5\") " pod="openshift-monitoring/node-exporter-26zhs" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.427903 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa257d6b-a6a6-43ee-944c-c254b63cd122-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-qvqdp\" (UID: \"aa257d6b-a6a6-43ee-944c-c254b63cd122\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.427927 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e74b4b1e-57f8-419d-b270-b6aa853614b5-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-26zhs\" (UID: \"e74b4b1e-57f8-419d-b270-b6aa853614b5\") " pod="openshift-monitoring/node-exporter-26zhs" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.427945 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4sjg\" (UniqueName: \"kubernetes.io/projected/aa257d6b-a6a6-43ee-944c-c254b63cd122-kube-api-access-q4sjg\") pod \"kube-state-metrics-777cb5bd5d-qvqdp\" (UID: \"aa257d6b-a6a6-43ee-944c-c254b63cd122\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" Jan 31 09:08:01 crc 
kubenswrapper[4830]: I0131 09:08:01.427967 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9ce4852d-bfc3-4e08-beb3-7622e1160402-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-smxhh\" (UID: \"9ce4852d-bfc3-4e08-beb3-7622e1160402\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-smxhh" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.427985 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/aa257d6b-a6a6-43ee-944c-c254b63cd122-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-qvqdp\" (UID: \"aa257d6b-a6a6-43ee-944c-c254b63cd122\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.428002 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66z5q\" (UniqueName: \"kubernetes.io/projected/9ce4852d-bfc3-4e08-beb3-7622e1160402-kube-api-access-66z5q\") pod \"openshift-state-metrics-566fddb674-smxhh\" (UID: \"9ce4852d-bfc3-4e08-beb3-7622e1160402\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-smxhh" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.428025 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e74b4b1e-57f8-419d-b270-b6aa853614b5-sys\") pod \"node-exporter-26zhs\" (UID: \"e74b4b1e-57f8-419d-b270-b6aa853614b5\") " pod="openshift-monitoring/node-exporter-26zhs" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.428052 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt2sk\" (UniqueName: \"kubernetes.io/projected/e74b4b1e-57f8-419d-b270-b6aa853614b5-kube-api-access-qt2sk\") pod \"node-exporter-26zhs\" (UID: \"e74b4b1e-57f8-419d-b270-b6aa853614b5\") " pod="openshift-monitoring/node-exporter-26zhs" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.428068 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ce4852d-bfc3-4e08-beb3-7622e1160402-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-smxhh\" (UID: \"9ce4852d-bfc3-4e08-beb3-7622e1160402\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-smxhh" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.428084 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/e74b4b1e-57f8-419d-b270-b6aa853614b5-node-exporter-wtmp\") pod \"node-exporter-26zhs\" (UID: \"e74b4b1e-57f8-419d-b270-b6aa853614b5\") " pod="openshift-monitoring/node-exporter-26zhs" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.428107 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/aa257d6b-a6a6-43ee-944c-c254b63cd122-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-qvqdp\" (UID: \"aa257d6b-a6a6-43ee-944c-c254b63cd122\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 
09:08:01.428124 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e74b4b1e-57f8-419d-b270-b6aa853614b5-metrics-client-ca\") pod \"node-exporter-26zhs\" (UID: \"e74b4b1e-57f8-419d-b270-b6aa853614b5\") " pod="openshift-monitoring/node-exporter-26zhs" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.428161 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aa257d6b-a6a6-43ee-944c-c254b63cd122-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-qvqdp\" (UID: \"aa257d6b-a6a6-43ee-944c-c254b63cd122\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.428183 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/aa257d6b-a6a6-43ee-944c-c254b63cd122-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-qvqdp\" (UID: \"aa257d6b-a6a6-43ee-944c-c254b63cd122\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.428203 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9ce4852d-bfc3-4e08-beb3-7622e1160402-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-smxhh\" (UID: \"9ce4852d-bfc3-4e08-beb3-7622e1160402\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-smxhh" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.428234 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/e74b4b1e-57f8-419d-b270-b6aa853614b5-node-exporter-textfile\") pod \"node-exporter-26zhs\" (UID: \"e74b4b1e-57f8-419d-b270-b6aa853614b5\") " pod="openshift-monitoring/node-exporter-26zhs" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.428264 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/e74b4b1e-57f8-419d-b270-b6aa853614b5-node-exporter-tls\") pod \"node-exporter-26zhs\" (UID: \"e74b4b1e-57f8-419d-b270-b6aa853614b5\") " pod="openshift-monitoring/node-exporter-26zhs" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.468581 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp"] Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.528954 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aa257d6b-a6a6-43ee-944c-c254b63cd122-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-qvqdp\" (UID: \"aa257d6b-a6a6-43ee-944c-c254b63cd122\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.528995 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/aa257d6b-a6a6-43ee-944c-c254b63cd122-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-qvqdp\" (UID: \"aa257d6b-a6a6-43ee-944c-c254b63cd122\") " 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.529022 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9ce4852d-bfc3-4e08-beb3-7622e1160402-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-smxhh\" (UID: \"9ce4852d-bfc3-4e08-beb3-7622e1160402\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-smxhh" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.529046 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/e74b4b1e-57f8-419d-b270-b6aa853614b5-node-exporter-textfile\") pod \"node-exporter-26zhs\" (UID: \"e74b4b1e-57f8-419d-b270-b6aa853614b5\") " pod="openshift-monitoring/node-exporter-26zhs" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.529065 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/e74b4b1e-57f8-419d-b270-b6aa853614b5-node-exporter-tls\") pod \"node-exporter-26zhs\" (UID: \"e74b4b1e-57f8-419d-b270-b6aa853614b5\") " pod="openshift-monitoring/node-exporter-26zhs" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.529089 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/e74b4b1e-57f8-419d-b270-b6aa853614b5-root\") pod \"node-exporter-26zhs\" (UID: \"e74b4b1e-57f8-419d-b270-b6aa853614b5\") " pod="openshift-monitoring/node-exporter-26zhs" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.529109 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa257d6b-a6a6-43ee-944c-c254b63cd122-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-qvqdp\" (UID: \"aa257d6b-a6a6-43ee-944c-c254b63cd122\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.529126 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e74b4b1e-57f8-419d-b270-b6aa853614b5-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-26zhs\" (UID: \"e74b4b1e-57f8-419d-b270-b6aa853614b5\") " pod="openshift-monitoring/node-exporter-26zhs" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.529141 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4sjg\" (UniqueName: \"kubernetes.io/projected/aa257d6b-a6a6-43ee-944c-c254b63cd122-kube-api-access-q4sjg\") pod \"kube-state-metrics-777cb5bd5d-qvqdp\" (UID: \"aa257d6b-a6a6-43ee-944c-c254b63cd122\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.529160 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9ce4852d-bfc3-4e08-beb3-7622e1160402-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-smxhh\" (UID: \"9ce4852d-bfc3-4e08-beb3-7622e1160402\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-smxhh" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.529177 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/aa257d6b-a6a6-43ee-944c-c254b63cd122-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-qvqdp\" (UID: \"aa257d6b-a6a6-43ee-944c-c254b63cd122\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.529195 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66z5q\" (UniqueName: \"kubernetes.io/projected/9ce4852d-bfc3-4e08-beb3-7622e1160402-kube-api-access-66z5q\") pod \"openshift-state-metrics-566fddb674-smxhh\" (UID: \"9ce4852d-bfc3-4e08-beb3-7622e1160402\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-smxhh" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.529213 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e74b4b1e-57f8-419d-b270-b6aa853614b5-sys\") pod \"node-exporter-26zhs\" (UID: \"e74b4b1e-57f8-419d-b270-b6aa853614b5\") " pod="openshift-monitoring/node-exporter-26zhs" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.529231 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt2sk\" (UniqueName: \"kubernetes.io/projected/e74b4b1e-57f8-419d-b270-b6aa853614b5-kube-api-access-qt2sk\") pod \"node-exporter-26zhs\" (UID: \"e74b4b1e-57f8-419d-b270-b6aa853614b5\") " pod="openshift-monitoring/node-exporter-26zhs" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.529249 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ce4852d-bfc3-4e08-beb3-7622e1160402-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-smxhh\" (UID: \"9ce4852d-bfc3-4e08-beb3-7622e1160402\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-smxhh" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.529265 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/e74b4b1e-57f8-419d-b270-b6aa853614b5-node-exporter-wtmp\") pod \"node-exporter-26zhs\" (UID: \"e74b4b1e-57f8-419d-b270-b6aa853614b5\") " pod="openshift-monitoring/node-exporter-26zhs" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.529287 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/aa257d6b-a6a6-43ee-944c-c254b63cd122-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-qvqdp\" (UID: \"aa257d6b-a6a6-43ee-944c-c254b63cd122\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.529305 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e74b4b1e-57f8-419d-b270-b6aa853614b5-metrics-client-ca\") pod \"node-exporter-26zhs\" (UID: \"e74b4b1e-57f8-419d-b270-b6aa853614b5\") " pod="openshift-monitoring/node-exporter-26zhs" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.530099 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e74b4b1e-57f8-419d-b270-b6aa853614b5-metrics-client-ca\") pod \"node-exporter-26zhs\" (UID: 
\"e74b4b1e-57f8-419d-b270-b6aa853614b5\") " pod="openshift-monitoring/node-exporter-26zhs" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.530702 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aa257d6b-a6a6-43ee-944c-c254b63cd122-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-qvqdp\" (UID: \"aa257d6b-a6a6-43ee-944c-c254b63cd122\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.531012 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/aa257d6b-a6a6-43ee-944c-c254b63cd122-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-qvqdp\" (UID: \"aa257d6b-a6a6-43ee-944c-c254b63cd122\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.531938 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/e74b4b1e-57f8-419d-b270-b6aa853614b5-root\") pod \"node-exporter-26zhs\" (UID: \"e74b4b1e-57f8-419d-b270-b6aa853614b5\") " pod="openshift-monitoring/node-exporter-26zhs" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.532309 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/e74b4b1e-57f8-419d-b270-b6aa853614b5-node-exporter-textfile\") pod \"node-exporter-26zhs\" (UID: \"e74b4b1e-57f8-419d-b270-b6aa853614b5\") " pod="openshift-monitoring/node-exporter-26zhs" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.532676 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/e74b4b1e-57f8-419d-b270-b6aa853614b5-node-exporter-wtmp\") pod \"node-exporter-26zhs\" (UID: \"e74b4b1e-57f8-419d-b270-b6aa853614b5\") " pod="openshift-monitoring/node-exporter-26zhs" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.533060 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/aa257d6b-a6a6-43ee-944c-c254b63cd122-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-qvqdp\" (UID: \"aa257d6b-a6a6-43ee-944c-c254b63cd122\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.533277 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e74b4b1e-57f8-419d-b270-b6aa853614b5-sys\") pod \"node-exporter-26zhs\" (UID: \"e74b4b1e-57f8-419d-b270-b6aa853614b5\") " pod="openshift-monitoring/node-exporter-26zhs" Jan 31 09:08:01 crc kubenswrapper[4830]: E0131 09:08:01.533356 4830 secret.go:188] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found Jan 31 09:08:01 crc kubenswrapper[4830]: E0131 09:08:01.533400 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa257d6b-a6a6-43ee-944c-c254b63cd122-kube-state-metrics-tls podName:aa257d6b-a6a6-43ee-944c-c254b63cd122 nodeName:}" failed. No retries permitted until 2026-01-31 09:08:02.03338541 +0000 UTC m=+426.526747852 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/aa257d6b-a6a6-43ee-944c-c254b63cd122-kube-state-metrics-tls") pod "kube-state-metrics-777cb5bd5d-qvqdp" (UID: "aa257d6b-a6a6-43ee-944c-c254b63cd122") : secret "kube-state-metrics-tls" not found Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.537872 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9ce4852d-bfc3-4e08-beb3-7622e1160402-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-smxhh\" (UID: \"9ce4852d-bfc3-4e08-beb3-7622e1160402\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-smxhh" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.538852 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ce4852d-bfc3-4e08-beb3-7622e1160402-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-smxhh\" (UID: \"9ce4852d-bfc3-4e08-beb3-7622e1160402\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-smxhh" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.543287 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa257d6b-a6a6-43ee-944c-c254b63cd122-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-qvqdp\" (UID: \"aa257d6b-a6a6-43ee-944c-c254b63cd122\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.550864 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9ce4852d-bfc3-4e08-beb3-7622e1160402-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-smxhh\" (UID: \"9ce4852d-bfc3-4e08-beb3-7622e1160402\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-smxhh" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.551744 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt2sk\" (UniqueName: \"kubernetes.io/projected/e74b4b1e-57f8-419d-b270-b6aa853614b5-kube-api-access-qt2sk\") pod \"node-exporter-26zhs\" (UID: \"e74b4b1e-57f8-419d-b270-b6aa853614b5\") " pod="openshift-monitoring/node-exporter-26zhs" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.552903 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e74b4b1e-57f8-419d-b270-b6aa853614b5-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-26zhs\" (UID: \"e74b4b1e-57f8-419d-b270-b6aa853614b5\") " pod="openshift-monitoring/node-exporter-26zhs" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.554422 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66z5q\" (UniqueName: \"kubernetes.io/projected/9ce4852d-bfc3-4e08-beb3-7622e1160402-kube-api-access-66z5q\") pod \"openshift-state-metrics-566fddb674-smxhh\" (UID: \"9ce4852d-bfc3-4e08-beb3-7622e1160402\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-smxhh" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.554580 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: 
\"kubernetes.io/secret/e74b4b1e-57f8-419d-b270-b6aa853614b5-node-exporter-tls\") pod \"node-exporter-26zhs\" (UID: \"e74b4b1e-57f8-419d-b270-b6aa853614b5\") " pod="openshift-monitoring/node-exporter-26zhs" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.556794 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4sjg\" (UniqueName: \"kubernetes.io/projected/aa257d6b-a6a6-43ee-944c-c254b63cd122-kube-api-access-q4sjg\") pod \"kube-state-metrics-777cb5bd5d-qvqdp\" (UID: \"aa257d6b-a6a6-43ee-944c-c254b63cd122\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.657240 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-smxhh" Jan 31 09:08:01 crc kubenswrapper[4830]: I0131 09:08:01.735979 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-26zhs" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.044865 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/aa257d6b-a6a6-43ee-944c-c254b63cd122-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-qvqdp\" (UID: \"aa257d6b-a6a6-43ee-944c-c254b63cd122\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.053423 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/aa257d6b-a6a6-43ee-944c-c254b63cd122-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-qvqdp\" (UID: \"aa257d6b-a6a6-43ee-944c-c254b63cd122\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.131502 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-smxhh"] Jan 31 09:08:02 crc kubenswrapper[4830]: W0131 09:08:02.142862 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ce4852d_bfc3_4e08_beb3_7622e1160402.slice/crio-a9c477c04528a7ab03ecd0301792447a65a543ed3efea4bfcc2c7ee9bcf211e3 WatchSource:0}: Error finding container a9c477c04528a7ab03ecd0301792447a65a543ed3efea4bfcc2c7ee9bcf211e3: Status 404 returned error can't find the container with id a9c477c04528a7ab03ecd0301792447a65a543ed3efea4bfcc2c7ee9bcf211e3 Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.316821 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.574153 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-smxhh" event={"ID":"9ce4852d-bfc3-4e08-beb3-7622e1160402","Type":"ContainerStarted","Data":"dcf5e4f8a17590213df43a1fea4ad438134f55fcec579465df209301a51a0c98"} Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.574499 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-smxhh" event={"ID":"9ce4852d-bfc3-4e08-beb3-7622e1160402","Type":"ContainerStarted","Data":"03d2c2d68f7750c8c8f27fe076fe2847b1dc74fb8f19a6c4018c60550b26861c"} Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.574512 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-smxhh" event={"ID":"9ce4852d-bfc3-4e08-beb3-7622e1160402","Type":"ContainerStarted","Data":"a9c477c04528a7ab03ecd0301792447a65a543ed3efea4bfcc2c7ee9bcf211e3"} Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.575535 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-26zhs" event={"ID":"e74b4b1e-57f8-419d-b270-b6aa853614b5","Type":"ContainerStarted","Data":"9b61e0aa1471f7a309698c35ea1b0b4dc7c272fefab78940d64fa3841610fe71"} Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.608003 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.611300 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.614761 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.615580 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-vsq97" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.615767 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.615940 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.616215 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.616422 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.616527 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.616588 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.627335 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.644321 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-monitoring/alertmanager-main-0"] Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.659996 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/55a33a17-a7e7-403f-95b7-ec98415b7235-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.660053 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/55a33a17-a7e7-403f-95b7-ec98415b7235-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.660080 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/55a33a17-a7e7-403f-95b7-ec98415b7235-config-volume\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.660103 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk47w\" (UniqueName: \"kubernetes.io/projected/55a33a17-a7e7-403f-95b7-ec98415b7235-kube-api-access-bk47w\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.660127 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/55a33a17-a7e7-403f-95b7-ec98415b7235-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.660147 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/55a33a17-a7e7-403f-95b7-ec98415b7235-tls-assets\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.660891 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/55a33a17-a7e7-403f-95b7-ec98415b7235-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.660964 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/55a33a17-a7e7-403f-95b7-ec98415b7235-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.661005 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"web-config\" (UniqueName: \"kubernetes.io/secret/55a33a17-a7e7-403f-95b7-ec98415b7235-web-config\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.661026 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55a33a17-a7e7-403f-95b7-ec98415b7235-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.661145 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/55a33a17-a7e7-403f-95b7-ec98415b7235-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.661198 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/55a33a17-a7e7-403f-95b7-ec98415b7235-config-out\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.762232 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/55a33a17-a7e7-403f-95b7-ec98415b7235-tls-assets\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.762365 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/55a33a17-a7e7-403f-95b7-ec98415b7235-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.762413 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/55a33a17-a7e7-403f-95b7-ec98415b7235-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.762445 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/55a33a17-a7e7-403f-95b7-ec98415b7235-web-config\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.762469 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55a33a17-a7e7-403f-95b7-ec98415b7235-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc 
kubenswrapper[4830]: I0131 09:08:02.762523 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/55a33a17-a7e7-403f-95b7-ec98415b7235-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.762554 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/55a33a17-a7e7-403f-95b7-ec98415b7235-config-out\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.762587 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/55a33a17-a7e7-403f-95b7-ec98415b7235-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.762618 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/55a33a17-a7e7-403f-95b7-ec98415b7235-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.762648 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk47w\" (UniqueName: \"kubernetes.io/projected/55a33a17-a7e7-403f-95b7-ec98415b7235-kube-api-access-bk47w\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.762669 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/55a33a17-a7e7-403f-95b7-ec98415b7235-config-volume\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.762692 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/55a33a17-a7e7-403f-95b7-ec98415b7235-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.763551 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/55a33a17-a7e7-403f-95b7-ec98415b7235-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.763910 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55a33a17-a7e7-403f-95b7-ec98415b7235-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 
crc kubenswrapper[4830]: I0131 09:08:02.764116 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/55a33a17-a7e7-403f-95b7-ec98415b7235-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.767768 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/55a33a17-a7e7-403f-95b7-ec98415b7235-tls-assets\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.768372 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/55a33a17-a7e7-403f-95b7-ec98415b7235-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.768648 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/55a33a17-a7e7-403f-95b7-ec98415b7235-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.770886 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/55a33a17-a7e7-403f-95b7-ec98415b7235-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.771879 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/55a33a17-a7e7-403f-95b7-ec98415b7235-config-volume\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.777581 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/55a33a17-a7e7-403f-95b7-ec98415b7235-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.782837 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/55a33a17-a7e7-403f-95b7-ec98415b7235-config-out\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.786408 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk47w\" (UniqueName: \"kubernetes.io/projected/55a33a17-a7e7-403f-95b7-ec98415b7235-kube-api-access-bk47w\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: 
I0131 09:08:02.786442 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/55a33a17-a7e7-403f-95b7-ec98415b7235-web-config\") pod \"alertmanager-main-0\" (UID: \"55a33a17-a7e7-403f-95b7-ec98415b7235\") " pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.810653 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp"] Jan 31 09:08:02 crc kubenswrapper[4830]: W0131 09:08:02.820384 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa257d6b_a6a6_43ee_944c_c254b63cd122.slice/crio-b51b505247e97618f3c3e1ce01dc47c7d824c404dd99ecf235779186299500dc WatchSource:0}: Error finding container b51b505247e97618f3c3e1ce01dc47c7d824c404dd99ecf235779186299500dc: Status 404 returned error can't find the container with id b51b505247e97618f3c3e1ce01dc47c7d824c404dd99ecf235779186299500dc Jan 31 09:08:02 crc kubenswrapper[4830]: I0131 09:08:02.949396 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.458339 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"] Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.460796 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc" Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.463480 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.463480 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.464069 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.465846 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.465852 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-fdvjb" Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.466053 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.467977 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-6mucu9t3tqk8p" Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.473278 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"] Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.488778 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/4158e29b-a0d9-40f2-904d-ffb63ba734f6-secret-thanos-querier-tls\") pod \"thanos-querier-57c5b4b8d5-lsvdc\" (UID: \"4158e29b-a0d9-40f2-904d-ffb63ba734f6\") " pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc" Jan 31 09:08:03 
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.488832 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4158e29b-a0d9-40f2-904d-ffb63ba734f6-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-57c5b4b8d5-lsvdc\" (UID: \"4158e29b-a0d9-40f2-904d-ffb63ba734f6\") " pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.488881 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4158e29b-a0d9-40f2-904d-ffb63ba734f6-metrics-client-ca\") pod \"thanos-querier-57c5b4b8d5-lsvdc\" (UID: \"4158e29b-a0d9-40f2-904d-ffb63ba734f6\") " pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.488919 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4158e29b-a0d9-40f2-904d-ffb63ba734f6-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-57c5b4b8d5-lsvdc\" (UID: \"4158e29b-a0d9-40f2-904d-ffb63ba734f6\") " pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.488945 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/4158e29b-a0d9-40f2-904d-ffb63ba734f6-secret-grpc-tls\") pod \"thanos-querier-57c5b4b8d5-lsvdc\" (UID: \"4158e29b-a0d9-40f2-904d-ffb63ba734f6\") " pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.488974 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/4158e29b-a0d9-40f2-904d-ffb63ba734f6-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-57c5b4b8d5-lsvdc\" (UID: \"4158e29b-a0d9-40f2-904d-ffb63ba734f6\") " pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.488999 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/4158e29b-a0d9-40f2-904d-ffb63ba734f6-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-57c5b4b8d5-lsvdc\" (UID: \"4158e29b-a0d9-40f2-904d-ffb63ba734f6\") " pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.489020 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf44b\" (UniqueName: \"kubernetes.io/projected/4158e29b-a0d9-40f2-904d-ffb63ba734f6-kube-api-access-wf44b\") pod \"thanos-querier-57c5b4b8d5-lsvdc\" (UID: \"4158e29b-a0d9-40f2-904d-ffb63ba734f6\") " pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.590189 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4158e29b-a0d9-40f2-904d-ffb63ba734f6-metrics-client-ca\") pod \"thanos-querier-57c5b4b8d5-lsvdc\" (UID: \"4158e29b-a0d9-40f2-904d-ffb63ba734f6\") " pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.590276 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4158e29b-a0d9-40f2-904d-ffb63ba734f6-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-57c5b4b8d5-lsvdc\" (UID: \"4158e29b-a0d9-40f2-904d-ffb63ba734f6\") " pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.590302 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/4158e29b-a0d9-40f2-904d-ffb63ba734f6-secret-grpc-tls\") pod \"thanos-querier-57c5b4b8d5-lsvdc\" (UID: \"4158e29b-a0d9-40f2-904d-ffb63ba734f6\") " pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.590336 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/4158e29b-a0d9-40f2-904d-ffb63ba734f6-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-57c5b4b8d5-lsvdc\" (UID: \"4158e29b-a0d9-40f2-904d-ffb63ba734f6\") " pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.590360 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/4158e29b-a0d9-40f2-904d-ffb63ba734f6-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-57c5b4b8d5-lsvdc\" (UID: \"4158e29b-a0d9-40f2-904d-ffb63ba734f6\") " pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.590383 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf44b\" (UniqueName: \"kubernetes.io/projected/4158e29b-a0d9-40f2-904d-ffb63ba734f6-kube-api-access-wf44b\") pod \"thanos-querier-57c5b4b8d5-lsvdc\" (UID: \"4158e29b-a0d9-40f2-904d-ffb63ba734f6\") " pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.590408 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/4158e29b-a0d9-40f2-904d-ffb63ba734f6-secret-thanos-querier-tls\") pod \"thanos-querier-57c5b4b8d5-lsvdc\" (UID: \"4158e29b-a0d9-40f2-904d-ffb63ba734f6\") " pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.590432 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4158e29b-a0d9-40f2-904d-ffb63ba734f6-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-57c5b4b8d5-lsvdc\" (UID: \"4158e29b-a0d9-40f2-904d-ffb63ba734f6\") " pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.594164 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4158e29b-a0d9-40f2-904d-ffb63ba734f6-metrics-client-ca\") pod \"thanos-querier-57c5b4b8d5-lsvdc\" (UID: \"4158e29b-a0d9-40f2-904d-ffb63ba734f6\") " pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.604110 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4158e29b-a0d9-40f2-904d-ffb63ba734f6-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-57c5b4b8d5-lsvdc\" (UID: \"4158e29b-a0d9-40f2-904d-ffb63ba734f6\") " pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.604646 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4158e29b-a0d9-40f2-904d-ffb63ba734f6-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-57c5b4b8d5-lsvdc\" (UID: \"4158e29b-a0d9-40f2-904d-ffb63ba734f6\") " pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.606413 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/4158e29b-a0d9-40f2-904d-ffb63ba734f6-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-57c5b4b8d5-lsvdc\" (UID: \"4158e29b-a0d9-40f2-904d-ffb63ba734f6\") " pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.607281 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/4158e29b-a0d9-40f2-904d-ffb63ba734f6-secret-thanos-querier-tls\") pod \"thanos-querier-57c5b4b8d5-lsvdc\" (UID: \"4158e29b-a0d9-40f2-904d-ffb63ba734f6\") " pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.607781 4830 generic.go:334] "Generic (PLEG): container finished" podID="e74b4b1e-57f8-419d-b270-b6aa853614b5" containerID="7e959d749f3111a37389ee12cf64d1974351a655ea04c58be2dd7c9f124d8fcd" exitCode=0
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.607938 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-26zhs" event={"ID":"e74b4b1e-57f8-419d-b270-b6aa853614b5","Type":"ContainerDied","Data":"7e959d749f3111a37389ee12cf64d1974351a655ea04c58be2dd7c9f124d8fcd"}
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.613750 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" event={"ID":"aa257d6b-a6a6-43ee-944c-c254b63cd122","Type":"ContainerStarted","Data":"b51b505247e97618f3c3e1ce01dc47c7d824c404dd99ecf235779186299500dc"}
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.617983 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf44b\" (UniqueName: \"kubernetes.io/projected/4158e29b-a0d9-40f2-904d-ffb63ba734f6-kube-api-access-wf44b\") pod \"thanos-querier-57c5b4b8d5-lsvdc\" (UID: \"4158e29b-a0d9-40f2-904d-ffb63ba734f6\") " pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.620163 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.625588 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/4158e29b-a0d9-40f2-904d-ffb63ba734f6-secret-grpc-tls\") pod \"thanos-querier-57c5b4b8d5-lsvdc\" (UID: \"4158e29b-a0d9-40f2-904d-ffb63ba734f6\") " pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.626429 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/4158e29b-a0d9-40f2-904d-ffb63ba734f6-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-57c5b4b8d5-lsvdc\" (UID: \"4158e29b-a0d9-40f2-904d-ffb63ba734f6\") " pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"
Jan 31 09:08:03 crc kubenswrapper[4830]: I0131 09:08:03.777587 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"
Jan 31 09:08:04 crc kubenswrapper[4830]: I0131 09:08:04.360024 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc"]
Jan 31 09:08:04 crc kubenswrapper[4830]: I0131 09:08:04.624499 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"55a33a17-a7e7-403f-95b7-ec98415b7235","Type":"ContainerStarted","Data":"4bd651db99f8f2b6bebdc6eccaac2d5f9617e5d03185098e042da24847ebe700"}
Jan 31 09:08:04 crc kubenswrapper[4830]: I0131 09:08:04.637383 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-26zhs" event={"ID":"e74b4b1e-57f8-419d-b270-b6aa853614b5","Type":"ContainerStarted","Data":"82fa6cc4a54d2a0e5f53922c6da55e33ff9ff4767a84eeab4890685030e5acee"}
Jan 31 09:08:04 crc kubenswrapper[4830]: I0131 09:08:04.637451 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-26zhs" event={"ID":"e74b4b1e-57f8-419d-b270-b6aa853614b5","Type":"ContainerStarted","Data":"941f800168350ea94e764ecd1394f51d0505b3fe3b9aa8e8f1018bdb47d42732"}
Jan 31 09:08:04 crc kubenswrapper[4830]: I0131 09:08:04.664200 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-26zhs" podStartSLOduration=2.229002221 podStartE2EDuration="3.664178231s" podCreationTimestamp="2026-01-31 09:08:01 +0000 UTC" firstStartedPulling="2026-01-31 09:08:01.767506517 +0000 UTC m=+426.260868959" lastFinishedPulling="2026-01-31 09:08:03.202682527 +0000 UTC m=+427.696044969" observedRunningTime="2026-01-31 09:08:04.661200283 +0000 UTC m=+429.154562725" watchObservedRunningTime="2026-01-31 09:08:04.664178231 +0000 UTC m=+429.157540683"
Jan 31 09:08:04 crc kubenswrapper[4830]: W0131 09:08:04.812683 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4158e29b_a0d9_40f2_904d_ffb63ba734f6.slice/crio-ef83b13174d5a58956f92a5ceffcd07b031e66eb2863e38eef101b89fd1c03f5 WatchSource:0}: Error finding container ef83b13174d5a58956f92a5ceffcd07b031e66eb2863e38eef101b89fd1c03f5: Status 404 returned error can't find the container with id ef83b13174d5a58956f92a5ceffcd07b031e66eb2863e38eef101b89fd1c03f5
Jan 31 09:08:05 crc kubenswrapper[4830]: I0131 09:08:05.647392 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" event={"ID":"aa257d6b-a6a6-43ee-944c-c254b63cd122","Type":"ContainerStarted","Data":"55be4eb1f2d7aa2a7c62f759879ae6a14161a2a3318f4dc1220d85970d093c40"}
Jan 31 09:08:05 crc kubenswrapper[4830]: I0131 09:08:05.647878 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" event={"ID":"aa257d6b-a6a6-43ee-944c-c254b63cd122","Type":"ContainerStarted","Data":"3f18c642132009fde87dc728356b9ac10668a0595f6819ef3c4247e0441da2dd"}
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" event={"ID":"aa257d6b-a6a6-43ee-944c-c254b63cd122","Type":"ContainerStarted","Data":"3f18c642132009fde87dc728356b9ac10668a0595f6819ef3c4247e0441da2dd"} Jan 31 09:08:05 crc kubenswrapper[4830]: I0131 09:08:05.647890 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" event={"ID":"aa257d6b-a6a6-43ee-944c-c254b63cd122","Type":"ContainerStarted","Data":"436b6bd999ca4079aeb8585a6f8e6ddd2ac97439adb8b9925833c69ace403bc8"} Jan 31 09:08:05 crc kubenswrapper[4830]: I0131 09:08:05.650790 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc" event={"ID":"4158e29b-a0d9-40f2-904d-ffb63ba734f6","Type":"ContainerStarted","Data":"ef83b13174d5a58956f92a5ceffcd07b031e66eb2863e38eef101b89fd1c03f5"} Jan 31 09:08:05 crc kubenswrapper[4830]: I0131 09:08:05.658186 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-smxhh" event={"ID":"9ce4852d-bfc3-4e08-beb3-7622e1160402","Type":"ContainerStarted","Data":"8d3dd7ee44964a26e061d28ce59e31fef9c61397a7ef3a648646c6789e578f5e"} Jan 31 09:08:05 crc kubenswrapper[4830]: I0131 09:08:05.688998 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qvqdp" podStartSLOduration=2.660104672 podStartE2EDuration="4.68896552s" podCreationTimestamp="2026-01-31 09:08:01 +0000 UTC" firstStartedPulling="2026-01-31 09:08:02.824213327 +0000 UTC m=+427.317575769" lastFinishedPulling="2026-01-31 09:08:04.853074175 +0000 UTC m=+429.346436617" observedRunningTime="2026-01-31 09:08:05.665050166 +0000 UTC m=+430.158412608" watchObservedRunningTime="2026-01-31 09:08:05.68896552 +0000 UTC m=+430.182328003" Jan 31 09:08:05 crc kubenswrapper[4830]: I0131 09:08:05.691410 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-smxhh" podStartSLOduration=2.266552398 podStartE2EDuration="4.691387712s" podCreationTimestamp="2026-01-31 09:08:01 +0000 UTC" firstStartedPulling="2026-01-31 09:08:02.42817856 +0000 UTC m=+426.921541002" lastFinishedPulling="2026-01-31 09:08:04.853013874 +0000 UTC m=+429.346376316" observedRunningTime="2026-01-31 09:08:05.68963189 +0000 UTC m=+430.182994332" watchObservedRunningTime="2026-01-31 09:08:05.691387712 +0000 UTC m=+430.184750154" Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.185807 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-76b6b6b6bb-kwnwx"] Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.186818 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.202326 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76b6b6b6bb-kwnwx"] Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.243858 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a2122d4-f14a-405d-80a0-e139b7c03b0c-console-serving-cert\") pod \"console-76b6b6b6bb-kwnwx\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.243918 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3a2122d4-f14a-405d-80a0-e139b7c03b0c-console-oauth-config\") pod \"console-76b6b6b6bb-kwnwx\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.243952 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3a2122d4-f14a-405d-80a0-e139b7c03b0c-console-config\") pod \"console-76b6b6b6bb-kwnwx\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.244342 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a2122d4-f14a-405d-80a0-e139b7c03b0c-trusted-ca-bundle\") pod \"console-76b6b6b6bb-kwnwx\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.244462 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a2122d4-f14a-405d-80a0-e139b7c03b0c-service-ca\") pod \"console-76b6b6b6bb-kwnwx\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.244634 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw86p\" (UniqueName: \"kubernetes.io/projected/3a2122d4-f14a-405d-80a0-e139b7c03b0c-kube-api-access-sw86p\") pod \"console-76b6b6b6bb-kwnwx\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.244685 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3a2122d4-f14a-405d-80a0-e139b7c03b0c-oauth-serving-cert\") pod \"console-76b6b6b6bb-kwnwx\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.346757 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a2122d4-f14a-405d-80a0-e139b7c03b0c-trusted-ca-bundle\") pod \"console-76b6b6b6bb-kwnwx\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:06 crc 
kubenswrapper[4830]: I0131 09:08:06.347391 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a2122d4-f14a-405d-80a0-e139b7c03b0c-service-ca\") pod \"console-76b6b6b6bb-kwnwx\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.348255 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw86p\" (UniqueName: \"kubernetes.io/projected/3a2122d4-f14a-405d-80a0-e139b7c03b0c-kube-api-access-sw86p\") pod \"console-76b6b6b6bb-kwnwx\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.348355 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3a2122d4-f14a-405d-80a0-e139b7c03b0c-oauth-serving-cert\") pod \"console-76b6b6b6bb-kwnwx\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.348424 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a2122d4-f14a-405d-80a0-e139b7c03b0c-console-serving-cert\") pod \"console-76b6b6b6bb-kwnwx\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.348446 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3a2122d4-f14a-405d-80a0-e139b7c03b0c-console-oauth-config\") pod \"console-76b6b6b6bb-kwnwx\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.348476 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3a2122d4-f14a-405d-80a0-e139b7c03b0c-console-config\") pod \"console-76b6b6b6bb-kwnwx\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.348988 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a2122d4-f14a-405d-80a0-e139b7c03b0c-service-ca\") pod \"console-76b6b6b6bb-kwnwx\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.349456 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3a2122d4-f14a-405d-80a0-e139b7c03b0c-oauth-serving-cert\") pod \"console-76b6b6b6bb-kwnwx\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.350235 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3a2122d4-f14a-405d-80a0-e139b7c03b0c-console-config\") pod \"console-76b6b6b6bb-kwnwx\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.351408 
4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a2122d4-f14a-405d-80a0-e139b7c03b0c-trusted-ca-bundle\") pod \"console-76b6b6b6bb-kwnwx\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.354078 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a2122d4-f14a-405d-80a0-e139b7c03b0c-console-serving-cert\") pod \"console-76b6b6b6bb-kwnwx\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.356253 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3a2122d4-f14a-405d-80a0-e139b7c03b0c-console-oauth-config\") pod \"console-76b6b6b6bb-kwnwx\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.369493 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw86p\" (UniqueName: \"kubernetes.io/projected/3a2122d4-f14a-405d-80a0-e139b7c03b0c-kube-api-access-sw86p\") pod \"console-76b6b6b6bb-kwnwx\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.502360 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.665576 4830 generic.go:334] "Generic (PLEG): container finished" podID="55a33a17-a7e7-403f-95b7-ec98415b7235" containerID="cfb34d9ef2c77c3796978827d425d1389128aeab17cd2dcfc21a3a344e2f904d" exitCode=0 Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.666108 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"55a33a17-a7e7-403f-95b7-ec98415b7235","Type":"ContainerDied","Data":"cfb34d9ef2c77c3796978827d425d1389128aeab17cd2dcfc21a3a344e2f904d"} Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.786537 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-6cdc866fc6-9thf6"] Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.788510 4830 util.go:30] "No sandbox for pod can be found. 
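[Annotation] The generic.go:334 "container finished" entry paired with the SyncLoop (PLEG) ContainerDied event for alertmanager-main-0 above is the Pod Lifecycle Event Generator relisting container state and the sync loop dispatching the change to the pod worker; exitCode=0 during pod startup is most likely an init container completing normally rather than a crash. A reduced sketch of that dispatch (struct shape inferred from the logged event fields, not the kubelet's exact types):

```go
package main

import "fmt"

// podLifecycleEvent mirrors the fields printed in the PLEG entries:
// event={"ID":..., "Type":..., "Data":...}.
type podLifecycleEvent struct {
	ID   string // pod UID
	Type string // "ContainerStarted" / "ContainerDied"
	Data string // container or sandbox ID
}

// dispatch is a reduced version of what the sync loop does with a
// PLEG event: route it to the worker for the affected pod.
func dispatch(ev podLifecycleEvent) {
	switch ev.Type {
	case "ContainerDied":
		fmt.Printf("pod %s: container %s died, triggering pod sync\n", ev.ID, ev.Data)
	case "ContainerStarted":
		fmt.Printf("pod %s: container %s started\n", ev.ID, ev.Data)
	}
}

func main() {
	dispatch(podLifecycleEvent{
		ID:   "55a33a17-a7e7-403f-95b7-ec98415b7235",
		Type: "ContainerDied",
		Data: "cfb34d9ef2c77c3796978827d425d1389128aeab17cd2dcfc21a3a344e2f904d",
	})
}
```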
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.794514 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.795301 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.795401 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-7c69s"
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.796029 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.796037 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-51v30pqs0bg3o"
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.796407 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.804771 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-6cdc866fc6-9thf6"]
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.859122 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/45903f73-e8ae-4e54-b650-f0090e9436b3-metrics-server-audit-profiles\") pod \"metrics-server-6cdc866fc6-9thf6\" (UID: \"45903f73-e8ae-4e54-b650-f0090e9436b3\") " pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6"
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.859205 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/45903f73-e8ae-4e54-b650-f0090e9436b3-secret-metrics-server-tls\") pod \"metrics-server-6cdc866fc6-9thf6\" (UID: \"45903f73-e8ae-4e54-b650-f0090e9436b3\") " pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6"
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.859296 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4vxq\" (UniqueName: \"kubernetes.io/projected/45903f73-e8ae-4e54-b650-f0090e9436b3-kube-api-access-p4vxq\") pod \"metrics-server-6cdc866fc6-9thf6\" (UID: \"45903f73-e8ae-4e54-b650-f0090e9436b3\") " pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6"
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.859342 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45903f73-e8ae-4e54-b650-f0090e9436b3-client-ca-bundle\") pod \"metrics-server-6cdc866fc6-9thf6\" (UID: \"45903f73-e8ae-4e54-b650-f0090e9436b3\") " pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6"
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.859519 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45903f73-e8ae-4e54-b650-f0090e9436b3-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6cdc866fc6-9thf6\" (UID: \"45903f73-e8ae-4e54-b650-f0090e9436b3\") " pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6"
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.859573 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/45903f73-e8ae-4e54-b650-f0090e9436b3-secret-metrics-client-certs\") pod \"metrics-server-6cdc866fc6-9thf6\" (UID: \"45903f73-e8ae-4e54-b650-f0090e9436b3\") " pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6"
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.859857 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/45903f73-e8ae-4e54-b650-f0090e9436b3-audit-log\") pod \"metrics-server-6cdc866fc6-9thf6\" (UID: \"45903f73-e8ae-4e54-b650-f0090e9436b3\") " pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6"
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.945300 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76b6b6b6bb-kwnwx"]
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.961939 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/45903f73-e8ae-4e54-b650-f0090e9436b3-audit-log\") pod \"metrics-server-6cdc866fc6-9thf6\" (UID: \"45903f73-e8ae-4e54-b650-f0090e9436b3\") " pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6"
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.962023 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/45903f73-e8ae-4e54-b650-f0090e9436b3-metrics-server-audit-profiles\") pod \"metrics-server-6cdc866fc6-9thf6\" (UID: \"45903f73-e8ae-4e54-b650-f0090e9436b3\") " pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6"
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.962063 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/45903f73-e8ae-4e54-b650-f0090e9436b3-secret-metrics-server-tls\") pod \"metrics-server-6cdc866fc6-9thf6\" (UID: \"45903f73-e8ae-4e54-b650-f0090e9436b3\") " pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6"
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.962082 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4vxq\" (UniqueName: \"kubernetes.io/projected/45903f73-e8ae-4e54-b650-f0090e9436b3-kube-api-access-p4vxq\") pod \"metrics-server-6cdc866fc6-9thf6\" (UID: \"45903f73-e8ae-4e54-b650-f0090e9436b3\") " pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6"
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.962109 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45903f73-e8ae-4e54-b650-f0090e9436b3-client-ca-bundle\") pod \"metrics-server-6cdc866fc6-9thf6\" (UID: \"45903f73-e8ae-4e54-b650-f0090e9436b3\") " pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6"
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.962400 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/45903f73-e8ae-4e54-b650-f0090e9436b3-audit-log\") pod \"metrics-server-6cdc866fc6-9thf6\" (UID: \"45903f73-e8ae-4e54-b650-f0090e9436b3\") " pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6"
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.962812 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45903f73-e8ae-4e54-b650-f0090e9436b3-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6cdc866fc6-9thf6\" (UID: \"45903f73-e8ae-4e54-b650-f0090e9436b3\") " pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6"
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.962853 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/45903f73-e8ae-4e54-b650-f0090e9436b3-secret-metrics-client-certs\") pod \"metrics-server-6cdc866fc6-9thf6\" (UID: \"45903f73-e8ae-4e54-b650-f0090e9436b3\") " pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6"
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.963243 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/45903f73-e8ae-4e54-b650-f0090e9436b3-metrics-server-audit-profiles\") pod \"metrics-server-6cdc866fc6-9thf6\" (UID: \"45903f73-e8ae-4e54-b650-f0090e9436b3\") " pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6"
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.963551 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45903f73-e8ae-4e54-b650-f0090e9436b3-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6cdc866fc6-9thf6\" (UID: \"45903f73-e8ae-4e54-b650-f0090e9436b3\") " pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6"
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.969455 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/45903f73-e8ae-4e54-b650-f0090e9436b3-secret-metrics-client-certs\") pod \"metrics-server-6cdc866fc6-9thf6\" (UID: \"45903f73-e8ae-4e54-b650-f0090e9436b3\") " pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6"
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.969914 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/45903f73-e8ae-4e54-b650-f0090e9436b3-secret-metrics-server-tls\") pod \"metrics-server-6cdc866fc6-9thf6\" (UID: \"45903f73-e8ae-4e54-b650-f0090e9436b3\") " pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6"
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.969914 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45903f73-e8ae-4e54-b650-f0090e9436b3-client-ca-bundle\") pod \"metrics-server-6cdc866fc6-9thf6\" (UID: \"45903f73-e8ae-4e54-b650-f0090e9436b3\") " pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6"
Jan 31 09:08:06 crc kubenswrapper[4830]: I0131 09:08:06.981288 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4vxq\" (UniqueName: \"kubernetes.io/projected/45903f73-e8ae-4e54-b650-f0090e9436b3-kube-api-access-p4vxq\") pod \"metrics-server-6cdc866fc6-9thf6\" (UID: \"45903f73-e8ae-4e54-b650-f0090e9436b3\") " pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6"
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.114545 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6"
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.154749 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-546c959798-jmj57"]
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.155849 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-546c959798-jmj57"
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.158718 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp"
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.159000 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.164564 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-546c959798-jmj57"]
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.272527 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/fadaea73-e4ec-47a5-b6df-c93b1ce5645f-monitoring-plugin-cert\") pod \"monitoring-plugin-546c959798-jmj57\" (UID: \"fadaea73-e4ec-47a5-b6df-c93b1ce5645f\") " pod="openshift-monitoring/monitoring-plugin-546c959798-jmj57"
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.375505 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/fadaea73-e4ec-47a5-b6df-c93b1ce5645f-monitoring-plugin-cert\") pod \"monitoring-plugin-546c959798-jmj57\" (UID: \"fadaea73-e4ec-47a5-b6df-c93b1ce5645f\") " pod="openshift-monitoring/monitoring-plugin-546c959798-jmj57"
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.400103 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/fadaea73-e4ec-47a5-b6df-c93b1ce5645f-monitoring-plugin-cert\") pod \"monitoring-plugin-546c959798-jmj57\" (UID: \"fadaea73-e4ec-47a5-b6df-c93b1ce5645f\") " pod="openshift-monitoring/monitoring-plugin-546c959798-jmj57"
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.491227 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-546c959798-jmj57"
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.672850 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76b6b6b6bb-kwnwx" event={"ID":"3a2122d4-f14a-405d-80a0-e139b7c03b0c","Type":"ContainerStarted","Data":"1ba6165679da9d773250ea31f752574e46b9f81b75ea5a45c928e85a6fc9be40"}
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.765696 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.769504 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
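[Annotation] The repeated util.go:30 "No sandbox for pod can be found. Need to start a new one" lines above mark the point where the kubelet's runtime manager decides a freshly scheduled pod has no usable pod sandbox, so a new one must be created before any container can run. A reduced stand-in for that check (types are illustrative; the real logic lives in the kubelet's container runtime code):

```go
package main

import "fmt"

// sandboxStatus is a reduced stand-in for the CRI sandbox state the
// kubelet inspects before logging the message above.
type sandboxStatus struct {
	id    string
	ready bool
}

// needsNewSandbox: if the pod has no sandbox at all, or none that is
// ready, a new sandbox must be started and the pod's containers
// (re)created inside it.
func needsNewSandbox(sandboxes []sandboxStatus) bool {
	for _, s := range sandboxes {
		if s.ready {
			return false
		}
	}
	return true
}

func main() {
	// A freshly scheduled pod has no sandboxes yet, exactly the case
	// logged for prometheus-k8s-0 above.
	fmt.Println(needsNewSandbox(nil)) // true
}
```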
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.779550 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.780163 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.780368 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-9jx2v"
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.780550 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.780663 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-c49r7pvbfob8h"
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.780905 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.781198 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.782182 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.782236 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.782446 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.782502 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.783070 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.793476 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.833328 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.897933 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-web-config\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0"
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.898015 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0"
Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.898059 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0"
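[Annotation] The reflector.go:368 "Caches populated" burst above is the kubelet starting a watch per referenced object (the object-"namespace"/"name" form in the message) for each secret and configmap the new prometheus-k8s-0 pod mounts; each line marks that object's watch cache becoming consistent. The same sync point expressed with client-go, as a hedged sketch (namespace-scoped informer rather than the kubelet's per-object watches; the kubeconfig path is illustrative):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a local kubeconfig (path is illustrative).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// A shared informer factory scoped to the namespace seen in the log.
	factory := informers.NewSharedInformerFactoryWithOptions(
		cs, 10*time.Minute, informers.WithNamespace("openshift-monitoring"))
	secrets := factory.Core().V1().Secrets().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// WaitForCacheSync returns once the reflector's cache is populated,
	// the moment reflector.go logs "Caches populated" for.
	if !cache.WaitForCacheSync(stop, secrets.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("secret cache populated")
}
```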
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.898095 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.898137 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2dsf\" (UniqueName: \"kubernetes.io/projected/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-kube-api-access-n2dsf\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.898162 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.898215 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.898252 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.898294 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.898317 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.898343 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-config\") pod \"prometheus-k8s-0\" (UID: 
\"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.898370 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-config-out\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.898389 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.898438 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.898464 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.898506 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.898531 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:07 crc kubenswrapper[4830]: I0131 09:08:07.898559 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:07.999706 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2dsf\" (UniqueName: \"kubernetes.io/projected/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-kube-api-access-n2dsf\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.000304 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" 
(UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.000346 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.000377 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.000397 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.000416 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.000435 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-config\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.000455 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-config-out\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.000479 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.000513 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.000532 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.000562 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.000581 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.000601 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.000617 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-web-config\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.000644 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.000667 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.000691 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.002498 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.003865 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.003918 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.004589 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.005089 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.011655 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.011923 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.011925 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.012084 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.011976 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.012238 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.012495 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-config\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.015868 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-config-out\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.016210 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-web-config\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.020156 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.020351 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.023799 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.040024 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2dsf\" (UniqueName: \"kubernetes.io/projected/d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88-kube-api-access-n2dsf\") pod \"prometheus-k8s-0\" (UID: \"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.147299 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.178900 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-6cdc866fc6-9thf6"] Jan 31 09:08:08 crc kubenswrapper[4830]: W0131 09:08:08.190339 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45903f73_e8ae_4e54_b650_f0090e9436b3.slice/crio-c6d31032e66c123efa074051dbf2315b172cf9452160bd63292829dcdf07b8b4 WatchSource:0}: Error finding container c6d31032e66c123efa074051dbf2315b172cf9452160bd63292829dcdf07b8b4: Status 404 returned error can't find the container with id c6d31032e66c123efa074051dbf2315b172cf9452160bd63292829dcdf07b8b4 Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.229441 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-546c959798-jmj57"] Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.606410 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.682812 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-546c959798-jmj57" event={"ID":"fadaea73-e4ec-47a5-b6df-c93b1ce5645f","Type":"ContainerStarted","Data":"bcd7444423822fa7711e93ceecb96c5ef9d1a62b16eb9b704dac72a39753e204"} Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.685168 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76b6b6b6bb-kwnwx" event={"ID":"3a2122d4-f14a-405d-80a0-e139b7c03b0c","Type":"ContainerStarted","Data":"b9992a558b3a0f6e85ebe23d3b9e41a931ecc199793f8ad96aece0ae1a83776a"} Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.691303 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc" event={"ID":"4158e29b-a0d9-40f2-904d-ffb63ba734f6","Type":"ContainerStarted","Data":"20692048d22115d1c5a1119f3b75db84415481f9bceb28e321a43968184dd93b"} Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.691366 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc" event={"ID":"4158e29b-a0d9-40f2-904d-ffb63ba734f6","Type":"ContainerStarted","Data":"bad9c2dbe52694f8c867a7264c763186fa52f2223d74af4239c06dba7d465889"} Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.691379 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc" event={"ID":"4158e29b-a0d9-40f2-904d-ffb63ba734f6","Type":"ContainerStarted","Data":"95e29c22767bce392764ec74045ff50792698cadcb4fd95c7666dcab6de5831d"} Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.692439 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6" event={"ID":"45903f73-e8ae-4e54-b650-f0090e9436b3","Type":"ContainerStarted","Data":"c6d31032e66c123efa074051dbf2315b172cf9452160bd63292829dcdf07b8b4"} Jan 31 09:08:08 crc kubenswrapper[4830]: I0131 09:08:08.704159 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-76b6b6b6bb-kwnwx" podStartSLOduration=2.704127235 podStartE2EDuration="2.704127235s" podCreationTimestamp="2026-01-31 09:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 
09:08:08.702461246 +0000 UTC m=+433.195823688" watchObservedRunningTime="2026-01-31 09:08:08.704127235 +0000 UTC m=+433.197489677" Jan 31 09:08:09 crc kubenswrapper[4830]: I0131 09:08:09.701404 4830 generic.go:334] "Generic (PLEG): container finished" podID="d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88" containerID="371c9e55e1fd72db86b52bc696c324389380965da659f63a5820d918064ccae1" exitCode=0 Jan 31 09:08:09 crc kubenswrapper[4830]: I0131 09:08:09.701529 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88","Type":"ContainerDied","Data":"371c9e55e1fd72db86b52bc696c324389380965da659f63a5820d918064ccae1"} Jan 31 09:08:09 crc kubenswrapper[4830]: I0131 09:08:09.702047 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88","Type":"ContainerStarted","Data":"99827cd457acd297965c54d00defb03b3ca33a4e6011b0152385ea25e13e7b46"} Jan 31 09:08:10 crc kubenswrapper[4830]: I0131 09:08:10.712345 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc" event={"ID":"4158e29b-a0d9-40f2-904d-ffb63ba734f6","Type":"ContainerStarted","Data":"87b5559e88bcab086fdd39607957046c1a92a8cb32cb7583e217779e2a470877"} Jan 31 09:08:10 crc kubenswrapper[4830]: I0131 09:08:10.714199 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6" event={"ID":"45903f73-e8ae-4e54-b650-f0090e9436b3","Type":"ContainerStarted","Data":"c4da751ed7e78efc6b02a950d82b969bca3c58873a46feefa2b13814f5949365"} Jan 31 09:08:10 crc kubenswrapper[4830]: I0131 09:08:10.717702 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"55a33a17-a7e7-403f-95b7-ec98415b7235","Type":"ContainerStarted","Data":"c8d6fd5e7f0802aef94e8fe29b5d2247de741a3e35cff3103de61496dc05bac6"} Jan 31 09:08:10 crc kubenswrapper[4830]: I0131 09:08:10.719111 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-546c959798-jmj57" event={"ID":"fadaea73-e4ec-47a5-b6df-c93b1ce5645f","Type":"ContainerStarted","Data":"5c51ce2f2c32098b1e2b4d433c4657af9f14579c6d15a0dd2e2831eb5eb66ed3"} Jan 31 09:08:10 crc kubenswrapper[4830]: I0131 09:08:10.719467 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-546c959798-jmj57" Jan 31 09:08:10 crc kubenswrapper[4830]: I0131 09:08:10.726041 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-546c959798-jmj57" Jan 31 09:08:10 crc kubenswrapper[4830]: I0131 09:08:10.739609 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6" podStartSLOduration=2.525252926 podStartE2EDuration="4.739584029s" podCreationTimestamp="2026-01-31 09:08:06 +0000 UTC" firstStartedPulling="2026-01-31 09:08:08.193620186 +0000 UTC m=+432.686982628" lastFinishedPulling="2026-01-31 09:08:10.407951259 +0000 UTC m=+434.901313731" observedRunningTime="2026-01-31 09:08:10.734027905 +0000 UTC m=+435.227390347" watchObservedRunningTime="2026-01-31 09:08:10.739584029 +0000 UTC m=+435.232946471" Jan 31 09:08:11 crc kubenswrapper[4830]: I0131 09:08:11.731059 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"55a33a17-a7e7-403f-95b7-ec98415b7235","Type":"ContainerStarted","Data":"e8e76c58c321dd0e6fce9c4b60fa172dc9aeebb45747b9407091f167d3838b15"} Jan 31 09:08:11 crc kubenswrapper[4830]: I0131 09:08:11.731615 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"55a33a17-a7e7-403f-95b7-ec98415b7235","Type":"ContainerStarted","Data":"7b3aa4ac1e6f147c9282b668d562cfa76dc3ace39a7610e6d7dd640705b4fa8b"} Jan 31 09:08:11 crc kubenswrapper[4830]: I0131 09:08:11.731640 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"55a33a17-a7e7-403f-95b7-ec98415b7235","Type":"ContainerStarted","Data":"3e99fc93e3e96956b5e864573bfde6c4ad6315777326fe1fff17ad274329c795"} Jan 31 09:08:11 crc kubenswrapper[4830]: I0131 09:08:11.731656 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"55a33a17-a7e7-403f-95b7-ec98415b7235","Type":"ContainerStarted","Data":"752a55ca0ca5cdb6e05be6f46cf53f818632fbc99fec321ec7be126eae008d35"} Jan 31 09:08:11 crc kubenswrapper[4830]: I0131 09:08:11.731669 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"55a33a17-a7e7-403f-95b7-ec98415b7235","Type":"ContainerStarted","Data":"e7d4720ad4bfc146e5f8444510e1cabac3212587c01b5428c59147a03fef3a34"} Jan 31 09:08:11 crc kubenswrapper[4830]: I0131 09:08:11.738100 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc" event={"ID":"4158e29b-a0d9-40f2-904d-ffb63ba734f6","Type":"ContainerStarted","Data":"e30bcca6a6c6cb22ebb06fb28d551fe32f9ade494293b8ace933a25a35f6412f"} Jan 31 09:08:11 crc kubenswrapper[4830]: I0131 09:08:11.738154 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc" event={"ID":"4158e29b-a0d9-40f2-904d-ffb63ba734f6","Type":"ContainerStarted","Data":"fad963fb09351437c6eb4810357e334a58afa29ed7d4fef01b8ccb01ed00dc43"} Jan 31 09:08:11 crc kubenswrapper[4830]: I0131 09:08:11.765842 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-546c959798-jmj57" podStartSLOduration=2.603801718 podStartE2EDuration="4.76581456s" podCreationTimestamp="2026-01-31 09:08:07 +0000 UTC" firstStartedPulling="2026-01-31 09:08:08.241016402 +0000 UTC m=+432.734378844" lastFinishedPulling="2026-01-31 09:08:10.403029244 +0000 UTC m=+434.896391686" observedRunningTime="2026-01-31 09:08:10.752126238 +0000 UTC m=+435.245488680" watchObservedRunningTime="2026-01-31 09:08:11.76581456 +0000 UTC m=+436.259177002" Jan 31 09:08:11 crc kubenswrapper[4830]: I0131 09:08:11.766318 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.015614903 podStartE2EDuration="9.766312485s" podCreationTimestamp="2026-01-31 09:08:02 +0000 UTC" firstStartedPulling="2026-01-31 09:08:03.651286681 +0000 UTC m=+428.144649123" lastFinishedPulling="2026-01-31 09:08:10.401984263 +0000 UTC m=+434.895346705" observedRunningTime="2026-01-31 09:08:11.759175155 +0000 UTC m=+436.252537597" watchObservedRunningTime="2026-01-31 09:08:11.766312485 +0000 UTC m=+436.259674927" Jan 31 09:08:11 crc kubenswrapper[4830]: I0131 09:08:11.797480 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc" 
podStartSLOduration=3.240451586 podStartE2EDuration="8.797458792s" podCreationTimestamp="2026-01-31 09:08:03 +0000 UTC" firstStartedPulling="2026-01-31 09:08:04.847541343 +0000 UTC m=+429.340903795" lastFinishedPulling="2026-01-31 09:08:10.404548559 +0000 UTC m=+434.897911001" observedRunningTime="2026-01-31 09:08:11.794122784 +0000 UTC m=+436.287485226" watchObservedRunningTime="2026-01-31 09:08:11.797458792 +0000 UTC m=+436.290821234" Jan 31 09:08:12 crc kubenswrapper[4830]: I0131 09:08:12.745235 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc" Jan 31 09:08:13 crc kubenswrapper[4830]: I0131 09:08:13.794088 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88","Type":"ContainerStarted","Data":"982303ad763435cc8fed270603cdaf4f3f885ca50e52af47393321d008bfc6b8"} Jan 31 09:08:13 crc kubenswrapper[4830]: I0131 09:08:13.794610 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88","Type":"ContainerStarted","Data":"e93bac223761aa6705fe644d1bf759f872cdc84396a0c8cb60f4467d4efb9d33"} Jan 31 09:08:13 crc kubenswrapper[4830]: I0131 09:08:13.818008 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc" Jan 31 09:08:14 crc kubenswrapper[4830]: I0131 09:08:14.804746 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88","Type":"ContainerStarted","Data":"3fe29820f2c19fcd558919de5748887c7c2b29d66a959bbd230fc51a3419914e"} Jan 31 09:08:14 crc kubenswrapper[4830]: I0131 09:08:14.805343 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88","Type":"ContainerStarted","Data":"61363ccd80e9759aa7d6fe2205d9cf6da3a2987f63b4f08753c5d7525109cadc"} Jan 31 09:08:14 crc kubenswrapper[4830]: I0131 09:08:14.805374 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88","Type":"ContainerStarted","Data":"d60444f95a25f44db0c4b8b76103ad003df7ddb76c18428f307eb2c42a4b2c21"} Jan 31 09:08:14 crc kubenswrapper[4830]: I0131 09:08:14.805394 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88","Type":"ContainerStarted","Data":"bf984de8e283cfe1b350cbe9a8387dc61ec53d7fbd57c7e1a0ffddae91a1d999"} Jan 31 09:08:14 crc kubenswrapper[4830]: I0131 09:08:14.839596 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.203520165 podStartE2EDuration="7.839574851s" podCreationTimestamp="2026-01-31 09:08:07 +0000 UTC" firstStartedPulling="2026-01-31 09:08:09.703235158 +0000 UTC m=+434.196597600" lastFinishedPulling="2026-01-31 09:08:13.339289854 +0000 UTC m=+437.832652286" observedRunningTime="2026-01-31 09:08:14.838303023 +0000 UTC m=+439.331665465" watchObservedRunningTime="2026-01-31 09:08:14.839574851 +0000 UTC m=+439.332937293" Jan 31 09:08:16 crc kubenswrapper[4830]: I0131 09:08:16.502410 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:16 crc kubenswrapper[4830]: 
I0131 09:08:16.502486 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:16 crc kubenswrapper[4830]: I0131 09:08:16.509148 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:16 crc kubenswrapper[4830]: I0131 09:08:16.828304 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:08:16 crc kubenswrapper[4830]: I0131 09:08:16.897492 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-gp4nv"] Jan 31 09:08:18 crc kubenswrapper[4830]: I0131 09:08:18.148425 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:08:27 crc kubenswrapper[4830]: I0131 09:08:27.114966 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6" Jan 31 09:08:27 crc kubenswrapper[4830]: I0131 09:08:27.116317 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6" Jan 31 09:08:41 crc kubenswrapper[4830]: I0131 09:08:41.951306 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-gp4nv" podUID="83cc5fe8-7965-46aa-b846-33d1b8d317f8" containerName="console" containerID="cri-o://4d8a6a78e590a29565dc28a9b5bb611fc4a65cc7c4e41bb1ec1d59ce1b636727" gracePeriod=15 Jan 31 09:08:42 crc kubenswrapper[4830]: I0131 09:08:42.386961 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-gp4nv_83cc5fe8-7965-46aa-b846-33d1b8d317f8/console/0.log" Jan 31 09:08:42 crc kubenswrapper[4830]: I0131 09:08:42.387514 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:08:42 crc kubenswrapper[4830]: I0131 09:08:42.504407 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83cc5fe8-7965-46aa-b846-33d1b8d317f8-service-ca\") pod \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " Jan 31 09:08:42 crc kubenswrapper[4830]: I0131 09:08:42.504474 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wwdq\" (UniqueName: \"kubernetes.io/projected/83cc5fe8-7965-46aa-b846-33d1b8d317f8-kube-api-access-2wwdq\") pod \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " Jan 31 09:08:42 crc kubenswrapper[4830]: I0131 09:08:42.504499 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83cc5fe8-7965-46aa-b846-33d1b8d317f8-trusted-ca-bundle\") pod \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " Jan 31 09:08:42 crc kubenswrapper[4830]: I0131 09:08:42.504540 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83cc5fe8-7965-46aa-b846-33d1b8d317f8-oauth-serving-cert\") pod \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " Jan 31 09:08:42 crc kubenswrapper[4830]: I0131 09:08:42.504600 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83cc5fe8-7965-46aa-b846-33d1b8d317f8-console-oauth-config\") pod \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " Jan 31 09:08:42 crc kubenswrapper[4830]: I0131 09:08:42.504648 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83cc5fe8-7965-46aa-b846-33d1b8d317f8-console-config\") pod \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " Jan 31 09:08:42 crc kubenswrapper[4830]: I0131 09:08:42.504825 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83cc5fe8-7965-46aa-b846-33d1b8d317f8-console-serving-cert\") pod \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\" (UID: \"83cc5fe8-7965-46aa-b846-33d1b8d317f8\") " Jan 31 09:08:42 crc kubenswrapper[4830]: I0131 09:08:42.506507 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83cc5fe8-7965-46aa-b846-33d1b8d317f8-console-config" (OuterVolumeSpecName: "console-config") pod "83cc5fe8-7965-46aa-b846-33d1b8d317f8" (UID: "83cc5fe8-7965-46aa-b846-33d1b8d317f8"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:08:42 crc kubenswrapper[4830]: I0131 09:08:42.506789 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83cc5fe8-7965-46aa-b846-33d1b8d317f8-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "83cc5fe8-7965-46aa-b846-33d1b8d317f8" (UID: "83cc5fe8-7965-46aa-b846-33d1b8d317f8"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:08:42 crc kubenswrapper[4830]: I0131 09:08:42.506814 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83cc5fe8-7965-46aa-b846-33d1b8d317f8-service-ca" (OuterVolumeSpecName: "service-ca") pod "83cc5fe8-7965-46aa-b846-33d1b8d317f8" (UID: "83cc5fe8-7965-46aa-b846-33d1b8d317f8"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:08:42 crc kubenswrapper[4830]: I0131 09:08:42.507599 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83cc5fe8-7965-46aa-b846-33d1b8d317f8-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "83cc5fe8-7965-46aa-b846-33d1b8d317f8" (UID: "83cc5fe8-7965-46aa-b846-33d1b8d317f8"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:08:42 crc kubenswrapper[4830]: I0131 09:08:42.514028 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83cc5fe8-7965-46aa-b846-33d1b8d317f8-kube-api-access-2wwdq" (OuterVolumeSpecName: "kube-api-access-2wwdq") pod "83cc5fe8-7965-46aa-b846-33d1b8d317f8" (UID: "83cc5fe8-7965-46aa-b846-33d1b8d317f8"). InnerVolumeSpecName "kube-api-access-2wwdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:08:42 crc kubenswrapper[4830]: I0131 09:08:42.514927 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83cc5fe8-7965-46aa-b846-33d1b8d317f8-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "83cc5fe8-7965-46aa-b846-33d1b8d317f8" (UID: "83cc5fe8-7965-46aa-b846-33d1b8d317f8"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:08:42 crc kubenswrapper[4830]: I0131 09:08:42.517336 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83cc5fe8-7965-46aa-b846-33d1b8d317f8-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "83cc5fe8-7965-46aa-b846-33d1b8d317f8" (UID: "83cc5fe8-7965-46aa-b846-33d1b8d317f8"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:08:42 crc kubenswrapper[4830]: I0131 09:08:42.606336 4830 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83cc5fe8-7965-46aa-b846-33d1b8d317f8-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:08:42 crc kubenswrapper[4830]: I0131 09:08:42.606883 4830 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83cc5fe8-7965-46aa-b846-33d1b8d317f8-console-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:08:42 crc kubenswrapper[4830]: I0131 09:08:42.606905 4830 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83cc5fe8-7965-46aa-b846-33d1b8d317f8-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:08:42 crc kubenswrapper[4830]: I0131 09:08:42.606924 4830 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83cc5fe8-7965-46aa-b846-33d1b8d317f8-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:08:42 crc kubenswrapper[4830]: I0131 09:08:42.606944 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wwdq\" (UniqueName: \"kubernetes.io/projected/83cc5fe8-7965-46aa-b846-33d1b8d317f8-kube-api-access-2wwdq\") on node \"crc\" DevicePath \"\"" Jan 31 09:08:42 crc kubenswrapper[4830]: I0131 09:08:42.606965 4830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83cc5fe8-7965-46aa-b846-33d1b8d317f8-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:08:42 crc kubenswrapper[4830]: I0131 09:08:42.606984 4830 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83cc5fe8-7965-46aa-b846-33d1b8d317f8-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:08:42 crc kubenswrapper[4830]: I0131 09:08:42.997834 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-gp4nv_83cc5fe8-7965-46aa-b846-33d1b8d317f8/console/0.log" Jan 31 09:08:43 crc kubenswrapper[4830]: I0131 09:08:42.997904 4830 generic.go:334] "Generic (PLEG): container finished" podID="83cc5fe8-7965-46aa-b846-33d1b8d317f8" containerID="4d8a6a78e590a29565dc28a9b5bb611fc4a65cc7c4e41bb1ec1d59ce1b636727" exitCode=2 Jan 31 09:08:43 crc kubenswrapper[4830]: I0131 09:08:42.997954 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gp4nv" event={"ID":"83cc5fe8-7965-46aa-b846-33d1b8d317f8","Type":"ContainerDied","Data":"4d8a6a78e590a29565dc28a9b5bb611fc4a65cc7c4e41bb1ec1d59ce1b636727"} Jan 31 09:08:43 crc kubenswrapper[4830]: I0131 09:08:42.997995 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gp4nv" event={"ID":"83cc5fe8-7965-46aa-b846-33d1b8d317f8","Type":"ContainerDied","Data":"47b669fb814c76ba9b2b65afb80e4d3e6bf938ddb0db4eac8228e3a0dc714ec6"} Jan 31 09:08:43 crc kubenswrapper[4830]: I0131 09:08:42.997993 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-gp4nv" Jan 31 09:08:43 crc kubenswrapper[4830]: I0131 09:08:42.998018 4830 scope.go:117] "RemoveContainer" containerID="4d8a6a78e590a29565dc28a9b5bb611fc4a65cc7c4e41bb1ec1d59ce1b636727" Jan 31 09:08:43 crc kubenswrapper[4830]: I0131 09:08:43.019053 4830 scope.go:117] "RemoveContainer" containerID="4d8a6a78e590a29565dc28a9b5bb611fc4a65cc7c4e41bb1ec1d59ce1b636727" Jan 31 09:08:43 crc kubenswrapper[4830]: E0131 09:08:43.019887 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d8a6a78e590a29565dc28a9b5bb611fc4a65cc7c4e41bb1ec1d59ce1b636727\": container with ID starting with 4d8a6a78e590a29565dc28a9b5bb611fc4a65cc7c4e41bb1ec1d59ce1b636727 not found: ID does not exist" containerID="4d8a6a78e590a29565dc28a9b5bb611fc4a65cc7c4e41bb1ec1d59ce1b636727" Jan 31 09:08:43 crc kubenswrapper[4830]: I0131 09:08:43.019958 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d8a6a78e590a29565dc28a9b5bb611fc4a65cc7c4e41bb1ec1d59ce1b636727"} err="failed to get container status \"4d8a6a78e590a29565dc28a9b5bb611fc4a65cc7c4e41bb1ec1d59ce1b636727\": rpc error: code = NotFound desc = could not find container \"4d8a6a78e590a29565dc28a9b5bb611fc4a65cc7c4e41bb1ec1d59ce1b636727\": container with ID starting with 4d8a6a78e590a29565dc28a9b5bb611fc4a65cc7c4e41bb1ec1d59ce1b636727 not found: ID does not exist" Jan 31 09:08:43 crc kubenswrapper[4830]: I0131 09:08:43.033515 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-gp4nv"] Jan 31 09:08:43 crc kubenswrapper[4830]: I0131 09:08:43.041953 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-gp4nv"] Jan 31 09:08:44 crc kubenswrapper[4830]: I0131 09:08:44.265995 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83cc5fe8-7965-46aa-b846-33d1b8d317f8" path="/var/lib/kubelet/pods/83cc5fe8-7965-46aa-b846-33d1b8d317f8/volumes" Jan 31 09:08:47 crc kubenswrapper[4830]: I0131 09:08:47.123683 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6" Jan 31 09:08:47 crc kubenswrapper[4830]: I0131 09:08:47.127557 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6" Jan 31 09:09:08 crc kubenswrapper[4830]: I0131 09:09:08.148788 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:09:08 crc kubenswrapper[4830]: I0131 09:09:08.177448 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:09:08 crc kubenswrapper[4830]: I0131 09:09:08.203365 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Jan 31 09:09:44 crc kubenswrapper[4830]: I0131 09:09:44.354009 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:09:44 crc kubenswrapper[4830]: I0131 09:09:44.355822 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" 
podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.199018 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-76f59b595d-84k99"] Jan 31 09:09:50 crc kubenswrapper[4830]: E0131 09:09:50.200112 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83cc5fe8-7965-46aa-b846-33d1b8d317f8" containerName="console" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.200153 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="83cc5fe8-7965-46aa-b846-33d1b8d317f8" containerName="console" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.200477 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="83cc5fe8-7965-46aa-b846-33d1b8d317f8" containerName="console" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.201670 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.215759 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76f59b595d-84k99"] Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.306960 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-trusted-ca-bundle\") pod \"console-76f59b595d-84k99\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") " pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.307507 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-console-serving-cert\") pod \"console-76f59b595d-84k99\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") " pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.307615 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-console-oauth-config\") pod \"console-76f59b595d-84k99\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") " pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.307746 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-console-config\") pod \"console-76f59b595d-84k99\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") " pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.307865 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qjxz\" (UniqueName: \"kubernetes.io/projected/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-kube-api-access-5qjxz\") pod \"console-76f59b595d-84k99\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") " pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.307997 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-service-ca\") pod \"console-76f59b595d-84k99\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") " pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.308095 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-oauth-serving-cert\") pod \"console-76f59b595d-84k99\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") " pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.410375 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-trusted-ca-bundle\") pod \"console-76f59b595d-84k99\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") " pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.410538 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-console-serving-cert\") pod \"console-76f59b595d-84k99\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") " pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.410587 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-console-oauth-config\") pod \"console-76f59b595d-84k99\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") " pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.410642 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-console-config\") pod \"console-76f59b595d-84k99\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") " pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.410683 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qjxz\" (UniqueName: \"kubernetes.io/projected/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-kube-api-access-5qjxz\") pod \"console-76f59b595d-84k99\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") " pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.410774 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-service-ca\") pod \"console-76f59b595d-84k99\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") " pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.410811 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-oauth-serving-cert\") pod \"console-76f59b595d-84k99\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") " pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.414910 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-trusted-ca-bundle\") pod \"console-76f59b595d-84k99\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") " pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.415144 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-console-config\") pod \"console-76f59b595d-84k99\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") " pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.415345 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-oauth-serving-cert\") pod \"console-76f59b595d-84k99\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") " pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.415372 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-service-ca\") pod \"console-76f59b595d-84k99\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") " pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.430705 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-console-serving-cert\") pod \"console-76f59b595d-84k99\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") " pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.443424 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-console-oauth-config\") pod \"console-76f59b595d-84k99\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") " pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.448637 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qjxz\" (UniqueName: \"kubernetes.io/projected/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-kube-api-access-5qjxz\") pod \"console-76f59b595d-84k99\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") " pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.526951 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:09:50 crc kubenswrapper[4830]: I0131 09:09:50.768774 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76f59b595d-84k99"] Jan 31 09:09:51 crc kubenswrapper[4830]: I0131 09:09:51.471220 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76f59b595d-84k99" event={"ID":"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf","Type":"ContainerStarted","Data":"8d3814bb96c222c8a413e07701c8f3bc2c775210ad99b7c8fa2b0362835e846b"} Jan 31 09:09:51 crc kubenswrapper[4830]: I0131 09:09:51.471779 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76f59b595d-84k99" event={"ID":"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf","Type":"ContainerStarted","Data":"57a857b3e24c4a04d04480dfebb213f51d63617c1c8cbcb5501b63c589b30d69"} Jan 31 09:09:51 crc kubenswrapper[4830]: I0131 09:09:51.494220 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-76f59b595d-84k99" podStartSLOduration=1.494191155 podStartE2EDuration="1.494191155s" podCreationTimestamp="2026-01-31 09:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:09:51.491852827 +0000 UTC m=+535.985215279" watchObservedRunningTime="2026-01-31 09:09:51.494191155 +0000 UTC m=+535.987553607" Jan 31 09:09:56 crc kubenswrapper[4830]: I0131 09:09:56.592973 4830 scope.go:117] "RemoveContainer" containerID="bb377573acca1cadcbbd0e2208ca9329c7f68ae0060779b2e74b9b113b146b89" Jan 31 09:10:00 crc kubenswrapper[4830]: I0131 09:10:00.527777 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:10:00 crc kubenswrapper[4830]: I0131 09:10:00.528298 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:10:00 crc kubenswrapper[4830]: I0131 09:10:00.534256 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:10:00 crc kubenswrapper[4830]: I0131 09:10:00.544431 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-76f59b595d-84k99" Jan 31 09:10:00 crc kubenswrapper[4830]: I0131 09:10:00.691252 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-76b6b6b6bb-kwnwx"] Jan 31 09:10:14 crc kubenswrapper[4830]: I0131 09:10:14.353585 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:10:14 crc kubenswrapper[4830]: I0131 09:10:14.354289 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:10:25 crc kubenswrapper[4830]: I0131 09:10:25.734063 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-76b6b6b6bb-kwnwx" podUID="3a2122d4-f14a-405d-80a0-e139b7c03b0c" containerName="console" 
containerID="cri-o://b9992a558b3a0f6e85ebe23d3b9e41a931ecc199793f8ad96aece0ae1a83776a" gracePeriod=15 Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.108665 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76b6b6b6bb-kwnwx_3a2122d4-f14a-405d-80a0-e139b7c03b0c/console/0.log" Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.109170 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.236150 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3a2122d4-f14a-405d-80a0-e139b7c03b0c-console-oauth-config\") pod \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.236231 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw86p\" (UniqueName: \"kubernetes.io/projected/3a2122d4-f14a-405d-80a0-e139b7c03b0c-kube-api-access-sw86p\") pod \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.236269 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3a2122d4-f14a-405d-80a0-e139b7c03b0c-console-config\") pod \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.236374 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a2122d4-f14a-405d-80a0-e139b7c03b0c-service-ca\") pod \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.237547 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a2122d4-f14a-405d-80a0-e139b7c03b0c-service-ca" (OuterVolumeSpecName: "service-ca") pod "3a2122d4-f14a-405d-80a0-e139b7c03b0c" (UID: "3a2122d4-f14a-405d-80a0-e139b7c03b0c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.237563 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a2122d4-f14a-405d-80a0-e139b7c03b0c-console-config" (OuterVolumeSpecName: "console-config") pod "3a2122d4-f14a-405d-80a0-e139b7c03b0c" (UID: "3a2122d4-f14a-405d-80a0-e139b7c03b0c"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.237710 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3a2122d4-f14a-405d-80a0-e139b7c03b0c-oauth-serving-cert\") pod \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.238361 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a2122d4-f14a-405d-80a0-e139b7c03b0c-trusted-ca-bundle\") pod \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.238277 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a2122d4-f14a-405d-80a0-e139b7c03b0c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "3a2122d4-f14a-405d-80a0-e139b7c03b0c" (UID: "3a2122d4-f14a-405d-80a0-e139b7c03b0c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.238903 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a2122d4-f14a-405d-80a0-e139b7c03b0c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "3a2122d4-f14a-405d-80a0-e139b7c03b0c" (UID: "3a2122d4-f14a-405d-80a0-e139b7c03b0c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.238982 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a2122d4-f14a-405d-80a0-e139b7c03b0c-console-serving-cert\") pod \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\" (UID: \"3a2122d4-f14a-405d-80a0-e139b7c03b0c\") " Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.239463 4830 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3a2122d4-f14a-405d-80a0-e139b7c03b0c-console-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.239484 4830 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a2122d4-f14a-405d-80a0-e139b7c03b0c-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.239497 4830 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3a2122d4-f14a-405d-80a0-e139b7c03b0c-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.239511 4830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a2122d4-f14a-405d-80a0-e139b7c03b0c-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.243918 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a2122d4-f14a-405d-80a0-e139b7c03b0c-kube-api-access-sw86p" (OuterVolumeSpecName: "kube-api-access-sw86p") pod "3a2122d4-f14a-405d-80a0-e139b7c03b0c" (UID: "3a2122d4-f14a-405d-80a0-e139b7c03b0c"). InnerVolumeSpecName "kube-api-access-sw86p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.244826 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a2122d4-f14a-405d-80a0-e139b7c03b0c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "3a2122d4-f14a-405d-80a0-e139b7c03b0c" (UID: "3a2122d4-f14a-405d-80a0-e139b7c03b0c"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.244897 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a2122d4-f14a-405d-80a0-e139b7c03b0c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "3a2122d4-f14a-405d-80a0-e139b7c03b0c" (UID: "3a2122d4-f14a-405d-80a0-e139b7c03b0c"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.341154 4830 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a2122d4-f14a-405d-80a0-e139b7c03b0c-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.341208 4830 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3a2122d4-f14a-405d-80a0-e139b7c03b0c-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.342265 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw86p\" (UniqueName: \"kubernetes.io/projected/3a2122d4-f14a-405d-80a0-e139b7c03b0c-kube-api-access-sw86p\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.745036 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76b6b6b6bb-kwnwx_3a2122d4-f14a-405d-80a0-e139b7c03b0c/console/0.log" Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.745123 4830 generic.go:334] "Generic (PLEG): container finished" podID="3a2122d4-f14a-405d-80a0-e139b7c03b0c" containerID="b9992a558b3a0f6e85ebe23d3b9e41a931ecc199793f8ad96aece0ae1a83776a" exitCode=2 Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.745176 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76b6b6b6bb-kwnwx" event={"ID":"3a2122d4-f14a-405d-80a0-e139b7c03b0c","Type":"ContainerDied","Data":"b9992a558b3a0f6e85ebe23d3b9e41a931ecc199793f8ad96aece0ae1a83776a"} Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.745238 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76b6b6b6bb-kwnwx" event={"ID":"3a2122d4-f14a-405d-80a0-e139b7c03b0c","Type":"ContainerDied","Data":"1ba6165679da9d773250ea31f752574e46b9f81b75ea5a45c928e85a6fc9be40"} Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.745457 4830 scope.go:117] "RemoveContainer" containerID="b9992a558b3a0f6e85ebe23d3b9e41a931ecc199793f8ad96aece0ae1a83776a" Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.745528 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-76b6b6b6bb-kwnwx" Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.773925 4830 scope.go:117] "RemoveContainer" containerID="b9992a558b3a0f6e85ebe23d3b9e41a931ecc199793f8ad96aece0ae1a83776a" Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.774098 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-76b6b6b6bb-kwnwx"] Jan 31 09:10:26 crc kubenswrapper[4830]: E0131 09:10:26.774816 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9992a558b3a0f6e85ebe23d3b9e41a931ecc199793f8ad96aece0ae1a83776a\": container with ID starting with b9992a558b3a0f6e85ebe23d3b9e41a931ecc199793f8ad96aece0ae1a83776a not found: ID does not exist" containerID="b9992a558b3a0f6e85ebe23d3b9e41a931ecc199793f8ad96aece0ae1a83776a" Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.774946 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9992a558b3a0f6e85ebe23d3b9e41a931ecc199793f8ad96aece0ae1a83776a"} err="failed to get container status \"b9992a558b3a0f6e85ebe23d3b9e41a931ecc199793f8ad96aece0ae1a83776a\": rpc error: code = NotFound desc = could not find container \"b9992a558b3a0f6e85ebe23d3b9e41a931ecc199793f8ad96aece0ae1a83776a\": container with ID starting with b9992a558b3a0f6e85ebe23d3b9e41a931ecc199793f8ad96aece0ae1a83776a not found: ID does not exist" Jan 31 09:10:26 crc kubenswrapper[4830]: I0131 09:10:26.779138 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-76b6b6b6bb-kwnwx"] Jan 31 09:10:28 crc kubenswrapper[4830]: I0131 09:10:28.264106 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a2122d4-f14a-405d-80a0-e139b7c03b0c" path="/var/lib/kubelet/pods/3a2122d4-f14a-405d-80a0-e139b7c03b0c/volumes" Jan 31 09:10:44 crc kubenswrapper[4830]: I0131 09:10:44.353921 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:10:44 crc kubenswrapper[4830]: I0131 09:10:44.355016 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:10:44 crc kubenswrapper[4830]: I0131 09:10:44.355083 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 09:10:44 crc kubenswrapper[4830]: I0131 09:10:44.355979 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"40de0b135d2e6436aca04cec9e087aebbf22156339d1945255baa4aa59e53756"} pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 09:10:44 crc kubenswrapper[4830]: I0131 09:10:44.356036 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" 
containerName="machine-config-daemon" containerID="cri-o://40de0b135d2e6436aca04cec9e087aebbf22156339d1945255baa4aa59e53756" gracePeriod=600 Jan 31 09:10:44 crc kubenswrapper[4830]: I0131 09:10:44.891016 4830 generic.go:334] "Generic (PLEG): container finished" podID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerID="40de0b135d2e6436aca04cec9e087aebbf22156339d1945255baa4aa59e53756" exitCode=0 Jan 31 09:10:44 crc kubenswrapper[4830]: I0131 09:10:44.891119 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerDied","Data":"40de0b135d2e6436aca04cec9e087aebbf22156339d1945255baa4aa59e53756"} Jan 31 09:10:44 crc kubenswrapper[4830]: I0131 09:10:44.891819 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerStarted","Data":"28b103ac2ba54a2d7fb62b9e350f386540aa590898443607b7a7ceffbe4db67d"} Jan 31 09:10:44 crc kubenswrapper[4830]: I0131 09:10:44.891845 4830 scope.go:117] "RemoveContainer" containerID="daea99fc983195352b8e4718b50bf7bbcdbf16fe4b6ceb22c6175dbbdd6d0099" Jan 31 09:10:56 crc kubenswrapper[4830]: I0131 09:10:56.639629 4830 scope.go:117] "RemoveContainer" containerID="612de1acd164c5d864167c9e586526fa1bcddbe39ed6bfa04ccb806b246b55ff" Jan 31 09:12:18 crc kubenswrapper[4830]: I0131 09:12:18.082215 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5"] Jan 31 09:12:18 crc kubenswrapper[4830]: E0131 09:12:18.084207 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a2122d4-f14a-405d-80a0-e139b7c03b0c" containerName="console" Jan 31 09:12:18 crc kubenswrapper[4830]: I0131 09:12:18.084277 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a2122d4-f14a-405d-80a0-e139b7c03b0c" containerName="console" Jan 31 09:12:18 crc kubenswrapper[4830]: I0131 09:12:18.084446 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a2122d4-f14a-405d-80a0-e139b7c03b0c" containerName="console" Jan 31 09:12:18 crc kubenswrapper[4830]: I0131 09:12:18.085378 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5" Jan 31 09:12:18 crc kubenswrapper[4830]: I0131 09:12:18.088083 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 31 09:12:18 crc kubenswrapper[4830]: I0131 09:12:18.099645 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5"] Jan 31 09:12:18 crc kubenswrapper[4830]: I0131 09:12:18.157294 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5k29\" (UniqueName: \"kubernetes.io/projected/2a5ef80a-1adb-44b7-92a8-91e7a020a693-kube-api-access-r5k29\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5\" (UID: \"2a5ef80a-1adb-44b7-92a8-91e7a020a693\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5" Jan 31 09:12:18 crc kubenswrapper[4830]: I0131 09:12:18.157342 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2a5ef80a-1adb-44b7-92a8-91e7a020a693-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5\" (UID: \"2a5ef80a-1adb-44b7-92a8-91e7a020a693\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5" Jan 31 09:12:18 crc kubenswrapper[4830]: I0131 09:12:18.157366 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2a5ef80a-1adb-44b7-92a8-91e7a020a693-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5\" (UID: \"2a5ef80a-1adb-44b7-92a8-91e7a020a693\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5" Jan 31 09:12:18 crc kubenswrapper[4830]: I0131 09:12:18.258760 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5k29\" (UniqueName: \"kubernetes.io/projected/2a5ef80a-1adb-44b7-92a8-91e7a020a693-kube-api-access-r5k29\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5\" (UID: \"2a5ef80a-1adb-44b7-92a8-91e7a020a693\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5" Jan 31 09:12:18 crc kubenswrapper[4830]: I0131 09:12:18.258807 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2a5ef80a-1adb-44b7-92a8-91e7a020a693-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5\" (UID: \"2a5ef80a-1adb-44b7-92a8-91e7a020a693\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5" Jan 31 09:12:18 crc kubenswrapper[4830]: I0131 09:12:18.258832 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2a5ef80a-1adb-44b7-92a8-91e7a020a693-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5\" (UID: \"2a5ef80a-1adb-44b7-92a8-91e7a020a693\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5" Jan 31 09:12:18 crc kubenswrapper[4830]: I0131 09:12:18.259490 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/2a5ef80a-1adb-44b7-92a8-91e7a020a693-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5\" (UID: \"2a5ef80a-1adb-44b7-92a8-91e7a020a693\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5" Jan 31 09:12:18 crc kubenswrapper[4830]: I0131 09:12:18.259988 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2a5ef80a-1adb-44b7-92a8-91e7a020a693-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5\" (UID: \"2a5ef80a-1adb-44b7-92a8-91e7a020a693\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5" Jan 31 09:12:18 crc kubenswrapper[4830]: I0131 09:12:18.282081 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5k29\" (UniqueName: \"kubernetes.io/projected/2a5ef80a-1adb-44b7-92a8-91e7a020a693-kube-api-access-r5k29\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5\" (UID: \"2a5ef80a-1adb-44b7-92a8-91e7a020a693\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5" Jan 31 09:12:18 crc kubenswrapper[4830]: I0131 09:12:18.407928 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5" Jan 31 09:12:18 crc kubenswrapper[4830]: I0131 09:12:18.683140 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5"] Jan 31 09:12:19 crc kubenswrapper[4830]: I0131 09:12:19.667439 4830 generic.go:334] "Generic (PLEG): container finished" podID="2a5ef80a-1adb-44b7-92a8-91e7a020a693" containerID="0cbca952dd203050f6e101ecf76122b42f4cc52e808094bfce5cd50c64835522" exitCode=0 Jan 31 09:12:19 crc kubenswrapper[4830]: I0131 09:12:19.667484 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5" event={"ID":"2a5ef80a-1adb-44b7-92a8-91e7a020a693","Type":"ContainerDied","Data":"0cbca952dd203050f6e101ecf76122b42f4cc52e808094bfce5cd50c64835522"} Jan 31 09:12:19 crc kubenswrapper[4830]: I0131 09:12:19.667932 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5" event={"ID":"2a5ef80a-1adb-44b7-92a8-91e7a020a693","Type":"ContainerStarted","Data":"24d7884b2073d330953a59a0ee25aac45401d0fb77228eca7469af03e7d2c704"} Jan 31 09:12:19 crc kubenswrapper[4830]: I0131 09:12:19.670507 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 09:12:21 crc kubenswrapper[4830]: I0131 09:12:21.685845 4830 generic.go:334] "Generic (PLEG): container finished" podID="2a5ef80a-1adb-44b7-92a8-91e7a020a693" containerID="02882141ccc96364b9b2ae433c30f5d85b5b46108f4c2d32b8393d9e4d0e1b58" exitCode=0 Jan 31 09:12:21 crc kubenswrapper[4830]: I0131 09:12:21.685929 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5" event={"ID":"2a5ef80a-1adb-44b7-92a8-91e7a020a693","Type":"ContainerDied","Data":"02882141ccc96364b9b2ae433c30f5d85b5b46108f4c2d32b8393d9e4d0e1b58"} Jan 31 09:12:22 crc kubenswrapper[4830]: I0131 09:12:22.705276 4830 generic.go:334] "Generic (PLEG): container finished" 
podID="2a5ef80a-1adb-44b7-92a8-91e7a020a693" containerID="7623c0773b50d6b08bee74327b895c2f2486c37bef0869e920695a6cfb32ee4a" exitCode=0 Jan 31 09:12:22 crc kubenswrapper[4830]: I0131 09:12:22.706409 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5" event={"ID":"2a5ef80a-1adb-44b7-92a8-91e7a020a693","Type":"ContainerDied","Data":"7623c0773b50d6b08bee74327b895c2f2486c37bef0869e920695a6cfb32ee4a"} Jan 31 09:12:24 crc kubenswrapper[4830]: I0131 09:12:24.048081 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5" Jan 31 09:12:24 crc kubenswrapper[4830]: I0131 09:12:24.157276 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2a5ef80a-1adb-44b7-92a8-91e7a020a693-bundle\") pod \"2a5ef80a-1adb-44b7-92a8-91e7a020a693\" (UID: \"2a5ef80a-1adb-44b7-92a8-91e7a020a693\") " Jan 31 09:12:24 crc kubenswrapper[4830]: I0131 09:12:24.157363 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5k29\" (UniqueName: \"kubernetes.io/projected/2a5ef80a-1adb-44b7-92a8-91e7a020a693-kube-api-access-r5k29\") pod \"2a5ef80a-1adb-44b7-92a8-91e7a020a693\" (UID: \"2a5ef80a-1adb-44b7-92a8-91e7a020a693\") " Jan 31 09:12:24 crc kubenswrapper[4830]: I0131 09:12:24.157407 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2a5ef80a-1adb-44b7-92a8-91e7a020a693-util\") pod \"2a5ef80a-1adb-44b7-92a8-91e7a020a693\" (UID: \"2a5ef80a-1adb-44b7-92a8-91e7a020a693\") " Jan 31 09:12:24 crc kubenswrapper[4830]: I0131 09:12:24.159471 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a5ef80a-1adb-44b7-92a8-91e7a020a693-bundle" (OuterVolumeSpecName: "bundle") pod "2a5ef80a-1adb-44b7-92a8-91e7a020a693" (UID: "2a5ef80a-1adb-44b7-92a8-91e7a020a693"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:12:24 crc kubenswrapper[4830]: I0131 09:12:24.165158 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a5ef80a-1adb-44b7-92a8-91e7a020a693-kube-api-access-r5k29" (OuterVolumeSpecName: "kube-api-access-r5k29") pod "2a5ef80a-1adb-44b7-92a8-91e7a020a693" (UID: "2a5ef80a-1adb-44b7-92a8-91e7a020a693"). InnerVolumeSpecName "kube-api-access-r5k29". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:12:24 crc kubenswrapper[4830]: I0131 09:12:24.172016 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a5ef80a-1adb-44b7-92a8-91e7a020a693-util" (OuterVolumeSpecName: "util") pod "2a5ef80a-1adb-44b7-92a8-91e7a020a693" (UID: "2a5ef80a-1adb-44b7-92a8-91e7a020a693"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:12:24 crc kubenswrapper[4830]: I0131 09:12:24.262023 4830 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2a5ef80a-1adb-44b7-92a8-91e7a020a693-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:24 crc kubenswrapper[4830]: I0131 09:12:24.262074 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5k29\" (UniqueName: \"kubernetes.io/projected/2a5ef80a-1adb-44b7-92a8-91e7a020a693-kube-api-access-r5k29\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:24 crc kubenswrapper[4830]: I0131 09:12:24.262092 4830 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2a5ef80a-1adb-44b7-92a8-91e7a020a693-util\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:24 crc kubenswrapper[4830]: I0131 09:12:24.728516 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5" event={"ID":"2a5ef80a-1adb-44b7-92a8-91e7a020a693","Type":"ContainerDied","Data":"24d7884b2073d330953a59a0ee25aac45401d0fb77228eca7469af03e7d2c704"} Jan 31 09:12:24 crc kubenswrapper[4830]: I0131 09:12:24.728581 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24d7884b2073d330953a59a0ee25aac45401d0fb77228eca7469af03e7d2c704" Jan 31 09:12:24 crc kubenswrapper[4830]: I0131 09:12:24.728622 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5" Jan 31 09:12:32 crc kubenswrapper[4830]: I0131 09:12:32.822374 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-222kl"] Jan 31 09:12:32 crc kubenswrapper[4830]: E0131 09:12:32.823295 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5ef80a-1adb-44b7-92a8-91e7a020a693" containerName="pull" Jan 31 09:12:32 crc kubenswrapper[4830]: I0131 09:12:32.823310 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5ef80a-1adb-44b7-92a8-91e7a020a693" containerName="pull" Jan 31 09:12:32 crc kubenswrapper[4830]: E0131 09:12:32.823322 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5ef80a-1adb-44b7-92a8-91e7a020a693" containerName="util" Jan 31 09:12:32 crc kubenswrapper[4830]: I0131 09:12:32.823328 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5ef80a-1adb-44b7-92a8-91e7a020a693" containerName="util" Jan 31 09:12:32 crc kubenswrapper[4830]: E0131 09:12:32.823354 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5ef80a-1adb-44b7-92a8-91e7a020a693" containerName="extract" Jan 31 09:12:32 crc kubenswrapper[4830]: I0131 09:12:32.823360 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5ef80a-1adb-44b7-92a8-91e7a020a693" containerName="extract" Jan 31 09:12:32 crc kubenswrapper[4830]: I0131 09:12:32.823468 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a5ef80a-1adb-44b7-92a8-91e7a020a693" containerName="extract" Jan 31 09:12:32 crc kubenswrapper[4830]: I0131 09:12:32.824084 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-222kl" Jan 31 09:12:32 crc kubenswrapper[4830]: I0131 09:12:32.826465 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 31 09:12:32 crc kubenswrapper[4830]: I0131 09:12:32.827005 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-ztn94" Jan 31 09:12:32 crc kubenswrapper[4830]: I0131 09:12:32.827974 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 31 09:12:32 crc kubenswrapper[4830]: I0131 09:12:32.835186 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-222kl"] Jan 31 09:12:32 crc kubenswrapper[4830]: I0131 09:12:32.945894 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-jfsw8"] Jan 31 09:12:32 crc kubenswrapper[4830]: I0131 09:12:32.946799 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-jfsw8" Jan 31 09:12:32 crc kubenswrapper[4830]: I0131 09:12:32.950588 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-jsdt6" Jan 31 09:12:32 crc kubenswrapper[4830]: I0131 09:12:32.950657 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 31 09:12:32 crc kubenswrapper[4830]: I0131 09:12:32.961876 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-bnxt4"] Jan 31 09:12:32 crc kubenswrapper[4830]: I0131 09:12:32.962960 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-bnxt4" Jan 31 09:12:32 crc kubenswrapper[4830]: I0131 09:12:32.971868 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-jfsw8"] Jan 31 09:12:32 crc kubenswrapper[4830]: I0131 09:12:32.995784 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-bnxt4"] Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.010519 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njh5g\" (UniqueName: \"kubernetes.io/projected/3addecb4-84c5-4b88-b751-b6a26db362be-kube-api-access-njh5g\") pod \"obo-prometheus-operator-68bc856cb9-222kl\" (UID: \"3addecb4-84c5-4b88-b751-b6a26db362be\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-222kl" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.112180 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2b9a494b-8847-4bf7-820e-2739aa96a464-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d644b584c-jfsw8\" (UID: \"2b9a494b-8847-4bf7-820e-2739aa96a464\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-jfsw8" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.112239 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/419c93d3-0d80-4fbf-91cd-c88303e038e5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d644b584c-bnxt4\" (UID: \"419c93d3-0d80-4fbf-91cd-c88303e038e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-bnxt4" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.112308 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2b9a494b-8847-4bf7-820e-2739aa96a464-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d644b584c-jfsw8\" (UID: \"2b9a494b-8847-4bf7-820e-2739aa96a464\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-jfsw8" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.112391 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/419c93d3-0d80-4fbf-91cd-c88303e038e5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d644b584c-bnxt4\" (UID: \"419c93d3-0d80-4fbf-91cd-c88303e038e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-bnxt4" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.112471 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njh5g\" (UniqueName: \"kubernetes.io/projected/3addecb4-84c5-4b88-b751-b6a26db362be-kube-api-access-njh5g\") pod \"obo-prometheus-operator-68bc856cb9-222kl\" (UID: \"3addecb4-84c5-4b88-b751-b6a26db362be\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-222kl" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.142738 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njh5g\" (UniqueName: 
\"kubernetes.io/projected/3addecb4-84c5-4b88-b751-b6a26db362be-kube-api-access-njh5g\") pod \"obo-prometheus-operator-68bc856cb9-222kl\" (UID: \"3addecb4-84c5-4b88-b751-b6a26db362be\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-222kl" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.177035 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-l59nt"] Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.179067 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-l59nt" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.183877 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.191126 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-kdnr6" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.218950 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/419c93d3-0d80-4fbf-91cd-c88303e038e5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d644b584c-bnxt4\" (UID: \"419c93d3-0d80-4fbf-91cd-c88303e038e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-bnxt4" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.219669 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2b9a494b-8847-4bf7-820e-2739aa96a464-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d644b584c-jfsw8\" (UID: \"2b9a494b-8847-4bf7-820e-2739aa96a464\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-jfsw8" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.219704 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/419c93d3-0d80-4fbf-91cd-c88303e038e5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d644b584c-bnxt4\" (UID: \"419c93d3-0d80-4fbf-91cd-c88303e038e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-bnxt4" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.219764 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2b9a494b-8847-4bf7-820e-2739aa96a464-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d644b584c-jfsw8\" (UID: \"2b9a494b-8847-4bf7-820e-2739aa96a464\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-jfsw8" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.224163 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-l59nt"] Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.226422 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/419c93d3-0d80-4fbf-91cd-c88303e038e5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d644b584c-bnxt4\" (UID: \"419c93d3-0d80-4fbf-91cd-c88303e038e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-bnxt4" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.227854 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/419c93d3-0d80-4fbf-91cd-c88303e038e5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d644b584c-bnxt4\" (UID: \"419c93d3-0d80-4fbf-91cd-c88303e038e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-bnxt4" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.230336 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2b9a494b-8847-4bf7-820e-2739aa96a464-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d644b584c-jfsw8\" (UID: \"2b9a494b-8847-4bf7-820e-2739aa96a464\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-jfsw8" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.230586 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2b9a494b-8847-4bf7-820e-2739aa96a464-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d644b584c-jfsw8\" (UID: \"2b9a494b-8847-4bf7-820e-2739aa96a464\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-jfsw8" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.263976 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-jfsw8" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.289643 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-bnxt4" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.322116 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48-observability-operator-tls\") pod \"observability-operator-59bdc8b94-l59nt\" (UID: \"1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48\") " pod="openshift-operators/observability-operator-59bdc8b94-l59nt" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.322202 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrwrh\" (UniqueName: \"kubernetes.io/projected/1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48-kube-api-access-vrwrh\") pod \"observability-operator-59bdc8b94-l59nt\" (UID: \"1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48\") " pod="openshift-operators/observability-operator-59bdc8b94-l59nt" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.373168 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-wtdqw"] Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.374585 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-wtdqw" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.379393 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-fmdqk" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.423827 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrwrh\" (UniqueName: \"kubernetes.io/projected/1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48-kube-api-access-vrwrh\") pod \"observability-operator-59bdc8b94-l59nt\" (UID: \"1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48\") " pod="openshift-operators/observability-operator-59bdc8b94-l59nt" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.424048 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48-observability-operator-tls\") pod \"observability-operator-59bdc8b94-l59nt\" (UID: \"1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48\") " pod="openshift-operators/observability-operator-59bdc8b94-l59nt" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.425672 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-wtdqw"] Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.429341 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48-observability-operator-tls\") pod \"observability-operator-59bdc8b94-l59nt\" (UID: \"1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48\") " pod="openshift-operators/observability-operator-59bdc8b94-l59nt" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.454347 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-222kl" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.455622 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrwrh\" (UniqueName: \"kubernetes.io/projected/1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48-kube-api-access-vrwrh\") pod \"observability-operator-59bdc8b94-l59nt\" (UID: \"1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48\") " pod="openshift-operators/observability-operator-59bdc8b94-l59nt" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.525884 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0af185f3-0cfa-4299-8eee-0e523d87504c-openshift-service-ca\") pod \"perses-operator-5bf474d74f-wtdqw\" (UID: \"0af185f3-0cfa-4299-8eee-0e523d87504c\") " pod="openshift-operators/perses-operator-5bf474d74f-wtdqw" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.526413 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvnpj\" (UniqueName: \"kubernetes.io/projected/0af185f3-0cfa-4299-8eee-0e523d87504c-kube-api-access-cvnpj\") pod \"perses-operator-5bf474d74f-wtdqw\" (UID: \"0af185f3-0cfa-4299-8eee-0e523d87504c\") " pod="openshift-operators/perses-operator-5bf474d74f-wtdqw" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.534060 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-l59nt" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.629749 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0af185f3-0cfa-4299-8eee-0e523d87504c-openshift-service-ca\") pod \"perses-operator-5bf474d74f-wtdqw\" (UID: \"0af185f3-0cfa-4299-8eee-0e523d87504c\") " pod="openshift-operators/perses-operator-5bf474d74f-wtdqw" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.629815 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvnpj\" (UniqueName: \"kubernetes.io/projected/0af185f3-0cfa-4299-8eee-0e523d87504c-kube-api-access-cvnpj\") pod \"perses-operator-5bf474d74f-wtdqw\" (UID: \"0af185f3-0cfa-4299-8eee-0e523d87504c\") " pod="openshift-operators/perses-operator-5bf474d74f-wtdqw" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.631315 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0af185f3-0cfa-4299-8eee-0e523d87504c-openshift-service-ca\") pod \"perses-operator-5bf474d74f-wtdqw\" (UID: \"0af185f3-0cfa-4299-8eee-0e523d87504c\") " pod="openshift-operators/perses-operator-5bf474d74f-wtdqw" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.670155 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvnpj\" (UniqueName: \"kubernetes.io/projected/0af185f3-0cfa-4299-8eee-0e523d87504c-kube-api-access-cvnpj\") pod \"perses-operator-5bf474d74f-wtdqw\" (UID: \"0af185f3-0cfa-4299-8eee-0e523d87504c\") " pod="openshift-operators/perses-operator-5bf474d74f-wtdqw" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.694270 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-wtdqw" Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.735465 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-bnxt4"] Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.757434 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-jfsw8"] Jan 31 09:12:33 crc kubenswrapper[4830]: I0131 09:12:33.924769 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-222kl"] Jan 31 09:12:34 crc kubenswrapper[4830]: I0131 09:12:34.212073 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-l59nt"] Jan 31 09:12:34 crc kubenswrapper[4830]: W0131 09:12:34.231168 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ebf3f9f_75ef_4cfd_a7f7_d5fb556aeb48.slice/crio-ca7a09747884f0afaec3ca7c7067068d2f426e91f58b05095c16a01309f21c19 WatchSource:0}: Error finding container ca7a09747884f0afaec3ca7c7067068d2f426e91f58b05095c16a01309f21c19: Status 404 returned error can't find the container with id ca7a09747884f0afaec3ca7c7067068d2f426e91f58b05095c16a01309f21c19 Jan 31 09:12:34 crc kubenswrapper[4830]: I0131 09:12:34.329458 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-wtdqw"] Jan 31 09:12:34 crc kubenswrapper[4830]: I0131 09:12:34.817547 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-l59nt" event={"ID":"1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48","Type":"ContainerStarted","Data":"ca7a09747884f0afaec3ca7c7067068d2f426e91f58b05095c16a01309f21c19"} Jan 31 09:12:34 crc kubenswrapper[4830]: I0131 09:12:34.822974 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-bnxt4" event={"ID":"419c93d3-0d80-4fbf-91cd-c88303e038e5","Type":"ContainerStarted","Data":"3cc0faa7f27712980c9f4f250f8a16446a34e97378b576a9bd7b78d16ca20e97"} Jan 31 09:12:34 crc kubenswrapper[4830]: I0131 09:12:34.824150 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-jfsw8" event={"ID":"2b9a494b-8847-4bf7-820e-2739aa96a464","Type":"ContainerStarted","Data":"e8d4ce72e29616e631985755c87e869493bde76a1ce5a28188d9ed26d5780fc9"} Jan 31 09:12:34 crc kubenswrapper[4830]: I0131 09:12:34.825228 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-222kl" event={"ID":"3addecb4-84c5-4b88-b751-b6a26db362be","Type":"ContainerStarted","Data":"c60e554c252bdeb7fd651fdd580ebdf9815a1f655ffd37df3669bf733be4439d"} Jan 31 09:12:34 crc kubenswrapper[4830]: I0131 09:12:34.826094 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-wtdqw" event={"ID":"0af185f3-0cfa-4299-8eee-0e523d87504c","Type":"ContainerStarted","Data":"db6c608a65cf14ec90356933f7f19d492ec8f2e96b794b2733df45d3cc2ccf3d"} Jan 31 09:12:44 crc kubenswrapper[4830]: I0131 09:12:44.353412 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:12:44 crc kubenswrapper[4830]: I0131 09:12:44.354064 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:12:45 crc kubenswrapper[4830]: I0131 09:12:45.937126 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-l59nt" event={"ID":"1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48","Type":"ContainerStarted","Data":"84b9f1eb86465dfd8f507bac40d6eba61e040eff8f9e2cec0a5f6e8db4aeffc3"} Jan 31 09:12:45 crc kubenswrapper[4830]: I0131 09:12:45.937542 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-l59nt" Jan 31 09:12:45 crc kubenswrapper[4830]: I0131 09:12:45.938911 4830 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-l59nt container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.93:8081/healthz\": dial tcp 10.217.0.93:8081: connect: connection refused" start-of-body= Jan 31 09:12:45 crc kubenswrapper[4830]: I0131 09:12:45.938994 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-l59nt" podUID="1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.93:8081/healthz\": dial tcp 10.217.0.93:8081: connect: connection refused" Jan 31 09:12:45 crc kubenswrapper[4830]: I0131 09:12:45.939312 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-bnxt4" event={"ID":"419c93d3-0d80-4fbf-91cd-c88303e038e5","Type":"ContainerStarted","Data":"ec4d657ef5fa8bc5e334b827e88fb56030c04bc0acfed7856bbfb79001175ba2"} Jan 31 09:12:45 crc kubenswrapper[4830]: I0131 09:12:45.941757 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-jfsw8" event={"ID":"2b9a494b-8847-4bf7-820e-2739aa96a464","Type":"ContainerStarted","Data":"30371473f926bba58c10bc1498ffbd95f9c30bc312bd1cf1453072521bbeb055"} Jan 31 09:12:45 crc kubenswrapper[4830]: I0131 09:12:45.944015 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-222kl" event={"ID":"3addecb4-84c5-4b88-b751-b6a26db362be","Type":"ContainerStarted","Data":"582134bae8360f11fc7bf4a9583972c5be822a9caddc59c242608c6ec747fb96"} Jan 31 09:12:45 crc kubenswrapper[4830]: I0131 09:12:45.946435 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-wtdqw" event={"ID":"0af185f3-0cfa-4299-8eee-0e523d87504c","Type":"ContainerStarted","Data":"aeb839fdcc480fb0bc8ce050ca42bf5fea15588fd0b7c10c73f318683e52464e"} Jan 31 09:12:45 crc kubenswrapper[4830]: I0131 09:12:45.946962 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-wtdqw" Jan 31 09:12:45 crc kubenswrapper[4830]: I0131 09:12:45.959568 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-l59nt" podStartSLOduration=1.862313101 
podStartE2EDuration="12.959542081s" podCreationTimestamp="2026-01-31 09:12:33 +0000 UTC" firstStartedPulling="2026-01-31 09:12:34.234036239 +0000 UTC m=+698.727398681" lastFinishedPulling="2026-01-31 09:12:45.331265219 +0000 UTC m=+709.824627661" observedRunningTime="2026-01-31 09:12:45.957520852 +0000 UTC m=+710.450883314" watchObservedRunningTime="2026-01-31 09:12:45.959542081 +0000 UTC m=+710.452904533" Jan 31 09:12:45 crc kubenswrapper[4830]: I0131 09:12:45.988913 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-222kl" podStartSLOduration=2.633454094 podStartE2EDuration="13.988890766s" podCreationTimestamp="2026-01-31 09:12:32 +0000 UTC" firstStartedPulling="2026-01-31 09:12:33.93714137 +0000 UTC m=+698.430503812" lastFinishedPulling="2026-01-31 09:12:45.292578042 +0000 UTC m=+709.785940484" observedRunningTime="2026-01-31 09:12:45.983395256 +0000 UTC m=+710.476757698" watchObservedRunningTime="2026-01-31 09:12:45.988890766 +0000 UTC m=+710.482253208" Jan 31 09:12:46 crc kubenswrapper[4830]: I0131 09:12:46.004877 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-bnxt4" podStartSLOduration=2.550796116 podStartE2EDuration="14.004854651s" podCreationTimestamp="2026-01-31 09:12:32 +0000 UTC" firstStartedPulling="2026-01-31 09:12:33.838444405 +0000 UTC m=+698.331806847" lastFinishedPulling="2026-01-31 09:12:45.29250293 +0000 UTC m=+709.785865382" observedRunningTime="2026-01-31 09:12:46.00207488 +0000 UTC m=+710.495437322" watchObservedRunningTime="2026-01-31 09:12:46.004854651 +0000 UTC m=+710.498217093" Jan 31 09:12:46 crc kubenswrapper[4830]: I0131 09:12:46.074374 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-wtdqw" podStartSLOduration=2.123711956 podStartE2EDuration="13.074345366s" podCreationTimestamp="2026-01-31 09:12:33 +0000 UTC" firstStartedPulling="2026-01-31 09:12:34.341946742 +0000 UTC m=+698.835309184" lastFinishedPulling="2026-01-31 09:12:45.292580152 +0000 UTC m=+709.785942594" observedRunningTime="2026-01-31 09:12:46.068526026 +0000 UTC m=+710.561888458" watchObservedRunningTime="2026-01-31 09:12:46.074345366 +0000 UTC m=+710.567707808" Jan 31 09:12:46 crc kubenswrapper[4830]: I0131 09:12:46.106861 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d644b584c-jfsw8" podStartSLOduration=2.661138701 podStartE2EDuration="14.106835692s" podCreationTimestamp="2026-01-31 09:12:32 +0000 UTC" firstStartedPulling="2026-01-31 09:12:33.881218941 +0000 UTC m=+698.374581383" lastFinishedPulling="2026-01-31 09:12:45.326915932 +0000 UTC m=+709.820278374" observedRunningTime="2026-01-31 09:12:46.106273206 +0000 UTC m=+710.599635658" watchObservedRunningTime="2026-01-31 09:12:46.106835692 +0000 UTC m=+710.600198134" Jan 31 09:12:46 crc kubenswrapper[4830]: I0131 09:12:46.958681 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-l59nt" Jan 31 09:12:53 crc kubenswrapper[4830]: I0131 09:12:53.698877 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-wtdqw" Jan 31 09:12:54 crc kubenswrapper[4830]: I0131 09:12:54.079714 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-ovn-kubernetes/ovnkube-node-r8pc4"] Jan 31 09:12:54 crc kubenswrapper[4830]: I0131 09:12:54.080286 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="ovn-controller" containerID="cri-o://0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82" gracePeriod=30 Jan 31 09:12:54 crc kubenswrapper[4830]: I0131 09:12:54.080394 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="nbdb" containerID="cri-o://351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358" gracePeriod=30 Jan 31 09:12:54 crc kubenswrapper[4830]: I0131 09:12:54.080480 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="kube-rbac-proxy-node" containerID="cri-o://320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179" gracePeriod=30 Jan 31 09:12:54 crc kubenswrapper[4830]: I0131 09:12:54.080544 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="ovn-acl-logging" containerID="cri-o://27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6" gracePeriod=30 Jan 31 09:12:54 crc kubenswrapper[4830]: I0131 09:12:54.080643 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="northd" containerID="cri-o://3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561" gracePeriod=30 Jan 31 09:12:54 crc kubenswrapper[4830]: I0131 09:12:54.080646 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="sbdb" containerID="cri-o://a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09" gracePeriod=30 Jan 31 09:12:54 crc kubenswrapper[4830]: I0131 09:12:54.080781 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163" gracePeriod=30 Jan 31 09:12:54 crc kubenswrapper[4830]: I0131 09:12:54.138883 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="ovnkube-controller" containerID="cri-o://f4d93300488a1d98f2b7829b938554fd6261d49065ba6bab59723ae725087360" gracePeriod=30 Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.005154 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cjqbn_b7e133cc-19e8-4770-9146-88dac53a6531/kube-multus/2.log" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.006837 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cjqbn_b7e133cc-19e8-4770-9146-88dac53a6531/kube-multus/1.log" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.006883 4830 generic.go:334] "Generic (PLEG): container finished" 
podID="b7e133cc-19e8-4770-9146-88dac53a6531" containerID="688600880adb08704161ae3933906d1341bce11f0e4231769fa30f33301668d5" exitCode=2 Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.006951 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cjqbn" event={"ID":"b7e133cc-19e8-4770-9146-88dac53a6531","Type":"ContainerDied","Data":"688600880adb08704161ae3933906d1341bce11f0e4231769fa30f33301668d5"} Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.007007 4830 scope.go:117] "RemoveContainer" containerID="9875f32d43bbc74af3de68db341e1562d735fcd5fba747d5ca7aceea458db68a" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.007584 4830 scope.go:117] "RemoveContainer" containerID="688600880adb08704161ae3933906d1341bce11f0e4231769fa30f33301668d5" Jan 31 09:12:55 crc kubenswrapper[4830]: E0131 09:12:55.008003 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-cjqbn_openshift-multus(b7e133cc-19e8-4770-9146-88dac53a6531)\"" pod="openshift-multus/multus-cjqbn" podUID="b7e133cc-19e8-4770-9146-88dac53a6531" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.010612 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r8pc4_159b9801-57e3-4cf0-9b81-10aacb5eef83/ovnkube-controller/3.log" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.015179 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r8pc4_159b9801-57e3-4cf0-9b81-10aacb5eef83/ovn-acl-logging/0.log" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.016384 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r8pc4_159b9801-57e3-4cf0-9b81-10aacb5eef83/ovn-controller/0.log" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.016900 4830 generic.go:334] "Generic (PLEG): container finished" podID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerID="f4d93300488a1d98f2b7829b938554fd6261d49065ba6bab59723ae725087360" exitCode=0 Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.016980 4830 generic.go:334] "Generic (PLEG): container finished" podID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerID="a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09" exitCode=0 Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.017054 4830 generic.go:334] "Generic (PLEG): container finished" podID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerID="351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358" exitCode=0 Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.017114 4830 generic.go:334] "Generic (PLEG): container finished" podID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerID="3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561" exitCode=0 Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.017176 4830 generic.go:334] "Generic (PLEG): container finished" podID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerID="27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6" exitCode=143 Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.017248 4830 generic.go:334] "Generic (PLEG): container finished" podID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerID="0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82" exitCode=143 Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.016972 4830 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerDied","Data":"f4d93300488a1d98f2b7829b938554fd6261d49065ba6bab59723ae725087360"} Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.017567 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerDied","Data":"a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09"} Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.017656 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerDied","Data":"351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358"} Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.017781 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerDied","Data":"3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561"} Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.017877 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerDied","Data":"27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6"} Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.017944 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerDied","Data":"0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82"} Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.033989 4830 scope.go:117] "RemoveContainer" containerID="766440d35d97de136fa66a347be009991bd05f76b51aff44c7369006f3196a4f" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.785980 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r8pc4_159b9801-57e3-4cf0-9b81-10aacb5eef83/ovn-acl-logging/0.log" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.786634 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r8pc4_159b9801-57e3-4cf0-9b81-10aacb5eef83/ovn-controller/0.log" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.787049 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.858511 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-log-socket\") pod \"159b9801-57e3-4cf0-9b81-10aacb5eef83\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.858565 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-run-systemd\") pod \"159b9801-57e3-4cf0-9b81-10aacb5eef83\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.858583 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-systemd-units\") pod \"159b9801-57e3-4cf0-9b81-10aacb5eef83\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.858614 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-run-ovn\") pod \"159b9801-57e3-4cf0-9b81-10aacb5eef83\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.858649 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-slash\") pod \"159b9801-57e3-4cf0-9b81-10aacb5eef83\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.858675 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-etc-openvswitch\") pod \"159b9801-57e3-4cf0-9b81-10aacb5eef83\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.858700 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-run-ovn-kubernetes\") pod \"159b9801-57e3-4cf0-9b81-10aacb5eef83\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.858737 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-kubelet\") pod \"159b9801-57e3-4cf0-9b81-10aacb5eef83\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.858761 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-var-lib-openvswitch\") pod \"159b9801-57e3-4cf0-9b81-10aacb5eef83\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.858797 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/159b9801-57e3-4cf0-9b81-10aacb5eef83-ovnkube-script-lib\") pod 
\"159b9801-57e3-4cf0-9b81-10aacb5eef83\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.858836 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-cni-netd\") pod \"159b9801-57e3-4cf0-9b81-10aacb5eef83\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.858869 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-var-lib-cni-networks-ovn-kubernetes\") pod \"159b9801-57e3-4cf0-9b81-10aacb5eef83\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.858890 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/159b9801-57e3-4cf0-9b81-10aacb5eef83-ovnkube-config\") pod \"159b9801-57e3-4cf0-9b81-10aacb5eef83\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.858912 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nvq5\" (UniqueName: \"kubernetes.io/projected/159b9801-57e3-4cf0-9b81-10aacb5eef83-kube-api-access-8nvq5\") pod \"159b9801-57e3-4cf0-9b81-10aacb5eef83\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.858936 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-run-openvswitch\") pod \"159b9801-57e3-4cf0-9b81-10aacb5eef83\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.858975 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/159b9801-57e3-4cf0-9b81-10aacb5eef83-env-overrides\") pod \"159b9801-57e3-4cf0-9b81-10aacb5eef83\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.859004 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/159b9801-57e3-4cf0-9b81-10aacb5eef83-ovn-node-metrics-cert\") pod \"159b9801-57e3-4cf0-9b81-10aacb5eef83\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.859038 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-node-log\") pod \"159b9801-57e3-4cf0-9b81-10aacb5eef83\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.859059 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-run-netns\") pod \"159b9801-57e3-4cf0-9b81-10aacb5eef83\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.859078 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-cni-bin\") pod \"159b9801-57e3-4cf0-9b81-10aacb5eef83\" (UID: \"159b9801-57e3-4cf0-9b81-10aacb5eef83\") " Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.859380 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "159b9801-57e3-4cf0-9b81-10aacb5eef83" (UID: "159b9801-57e3-4cf0-9b81-10aacb5eef83"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.859419 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-log-socket" (OuterVolumeSpecName: "log-socket") pod "159b9801-57e3-4cf0-9b81-10aacb5eef83" (UID: "159b9801-57e3-4cf0-9b81-10aacb5eef83"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.859923 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "159b9801-57e3-4cf0-9b81-10aacb5eef83" (UID: "159b9801-57e3-4cf0-9b81-10aacb5eef83"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.860021 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "159b9801-57e3-4cf0-9b81-10aacb5eef83" (UID: "159b9801-57e3-4cf0-9b81-10aacb5eef83"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.860055 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-slash" (OuterVolumeSpecName: "host-slash") pod "159b9801-57e3-4cf0-9b81-10aacb5eef83" (UID: "159b9801-57e3-4cf0-9b81-10aacb5eef83"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.860090 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "159b9801-57e3-4cf0-9b81-10aacb5eef83" (UID: "159b9801-57e3-4cf0-9b81-10aacb5eef83"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.860125 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "159b9801-57e3-4cf0-9b81-10aacb5eef83" (UID: "159b9801-57e3-4cf0-9b81-10aacb5eef83"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.860157 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "159b9801-57e3-4cf0-9b81-10aacb5eef83" (UID: "159b9801-57e3-4cf0-9b81-10aacb5eef83"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.860181 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "159b9801-57e3-4cf0-9b81-10aacb5eef83" (UID: "159b9801-57e3-4cf0-9b81-10aacb5eef83"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.860261 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "159b9801-57e3-4cf0-9b81-10aacb5eef83" (UID: "159b9801-57e3-4cf0-9b81-10aacb5eef83"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.860294 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "159b9801-57e3-4cf0-9b81-10aacb5eef83" (UID: "159b9801-57e3-4cf0-9b81-10aacb5eef83"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.860319 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "159b9801-57e3-4cf0-9b81-10aacb5eef83" (UID: "159b9801-57e3-4cf0-9b81-10aacb5eef83"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.860618 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-node-log" (OuterVolumeSpecName: "node-log") pod "159b9801-57e3-4cf0-9b81-10aacb5eef83" (UID: "159b9801-57e3-4cf0-9b81-10aacb5eef83"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.860645 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "159b9801-57e3-4cf0-9b81-10aacb5eef83" (UID: "159b9801-57e3-4cf0-9b81-10aacb5eef83"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.860688 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/159b9801-57e3-4cf0-9b81-10aacb5eef83-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "159b9801-57e3-4cf0-9b81-10aacb5eef83" (UID: "159b9801-57e3-4cf0-9b81-10aacb5eef83"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.860895 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/159b9801-57e3-4cf0-9b81-10aacb5eef83-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "159b9801-57e3-4cf0-9b81-10aacb5eef83" (UID: "159b9801-57e3-4cf0-9b81-10aacb5eef83"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.861131 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/159b9801-57e3-4cf0-9b81-10aacb5eef83-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "159b9801-57e3-4cf0-9b81-10aacb5eef83" (UID: "159b9801-57e3-4cf0-9b81-10aacb5eef83"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.876393 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/159b9801-57e3-4cf0-9b81-10aacb5eef83-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "159b9801-57e3-4cf0-9b81-10aacb5eef83" (UID: "159b9801-57e3-4cf0-9b81-10aacb5eef83"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.882012 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/159b9801-57e3-4cf0-9b81-10aacb5eef83-kube-api-access-8nvq5" (OuterVolumeSpecName: "kube-api-access-8nvq5") pod "159b9801-57e3-4cf0-9b81-10aacb5eef83" (UID: "159b9801-57e3-4cf0-9b81-10aacb5eef83"). InnerVolumeSpecName "kube-api-access-8nvq5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.891955 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sgnqp"] Jan 31 09:12:55 crc kubenswrapper[4830]: E0131 09:12:55.892250 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="kube-rbac-proxy-node" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.892271 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="kube-rbac-proxy-node" Jan 31 09:12:55 crc kubenswrapper[4830]: E0131 09:12:55.892285 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="ovn-acl-logging" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.892293 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="ovn-acl-logging" Jan 31 09:12:55 crc kubenswrapper[4830]: E0131 09:12:55.892304 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.892311 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 09:12:55 crc kubenswrapper[4830]: E0131 09:12:55.892319 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="ovnkube-controller" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.892326 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="ovnkube-controller" Jan 31 09:12:55 crc kubenswrapper[4830]: E0131 09:12:55.892340 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="sbdb" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.892347 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="sbdb" Jan 31 09:12:55 crc kubenswrapper[4830]: E0131 09:12:55.892357 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="northd" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.892364 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="northd" Jan 31 09:12:55 crc kubenswrapper[4830]: E0131 09:12:55.892373 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="nbdb" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.892380 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="nbdb" Jan 31 09:12:55 crc kubenswrapper[4830]: E0131 09:12:55.892391 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="ovnkube-controller" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.892397 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="ovnkube-controller" Jan 31 09:12:55 crc kubenswrapper[4830]: E0131 09:12:55.892405 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" 
containerName="ovnkube-controller" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.892411 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="ovnkube-controller" Jan 31 09:12:55 crc kubenswrapper[4830]: E0131 09:12:55.892420 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="kubecfg-setup" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.892426 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="kubecfg-setup" Jan 31 09:12:55 crc kubenswrapper[4830]: E0131 09:12:55.892431 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="ovn-controller" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.892437 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="ovn-controller" Jan 31 09:12:55 crc kubenswrapper[4830]: E0131 09:12:55.892446 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="ovnkube-controller" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.892452 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="ovnkube-controller" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.892571 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="ovnkube-controller" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.892580 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="ovnkube-controller" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.892586 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="sbdb" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.892599 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="ovn-acl-logging" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.892606 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="ovnkube-controller" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.892613 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="ovn-controller" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.892619 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="kube-rbac-proxy-node" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.892628 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.892639 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="northd" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.892648 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="nbdb" Jan 31 09:12:55 crc kubenswrapper[4830]: E0131 09:12:55.892846 4830 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="ovnkube-controller" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.892854 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="ovnkube-controller" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.892950 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="ovnkube-controller" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.893163 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerName="ovnkube-controller" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.896516 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.897561 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "159b9801-57e3-4cf0-9b81-10aacb5eef83" (UID: "159b9801-57e3-4cf0-9b81-10aacb5eef83"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.961024 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-host-run-netns\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.961090 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-var-lib-openvswitch\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.961129 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e87ff23b-1ce8-4556-8998-7fc4dd84775c-ovnkube-script-lib\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.961154 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-etc-openvswitch\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.961178 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-run-openvswitch\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.961197 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" 
(UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-run-systemd\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.961306 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-host-kubelet\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.961380 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwvkf\" (UniqueName: \"kubernetes.io/projected/e87ff23b-1ce8-4556-8998-7fc4dd84775c-kube-api-access-gwvkf\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.961439 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-host-run-ovn-kubernetes\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.961467 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.961524 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-host-cni-netd\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.961539 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e87ff23b-1ce8-4556-8998-7fc4dd84775c-env-overrides\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.961595 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e87ff23b-1ce8-4556-8998-7fc4dd84775c-ovnkube-config\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.961636 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-node-log\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 
09:12:55.961653 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-log-socket\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.961691 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-systemd-units\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.961742 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e87ff23b-1ce8-4556-8998-7fc4dd84775c-ovn-node-metrics-cert\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.961783 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-host-slash\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.961837 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-host-cni-bin\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.961873 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-run-ovn\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.961983 4830 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.961995 4830 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/159b9801-57e3-4cf0-9b81-10aacb5eef83-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.962006 4830 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/159b9801-57e3-4cf0-9b81-10aacb5eef83-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.962018 4830 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-node-log\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.962030 4830 reconciler_common.go:293] "Volume detached for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.962041 4830 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.962050 4830 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-log-socket\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.962059 4830 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.962069 4830 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.962077 4830 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.962089 4830 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-slash\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.962097 4830 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.962107 4830 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.962116 4830 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.962125 4830 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.962135 4830 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/159b9801-57e3-4cf0-9b81-10aacb5eef83-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.962146 4830 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.962157 4830 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/159b9801-57e3-4cf0-9b81-10aacb5eef83-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.962167 4830 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/159b9801-57e3-4cf0-9b81-10aacb5eef83-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:55 crc kubenswrapper[4830]: I0131 09:12:55.962179 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nvq5\" (UniqueName: \"kubernetes.io/projected/159b9801-57e3-4cf0-9b81-10aacb5eef83-kube-api-access-8nvq5\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.030321 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r8pc4_159b9801-57e3-4cf0-9b81-10aacb5eef83/ovn-acl-logging/0.log" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.031626 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r8pc4_159b9801-57e3-4cf0-9b81-10aacb5eef83/ovn-controller/0.log" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.032149 4830 generic.go:334] "Generic (PLEG): container finished" podID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerID="ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163" exitCode=0 Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.032192 4830 generic.go:334] "Generic (PLEG): container finished" podID="159b9801-57e3-4cf0-9b81-10aacb5eef83" containerID="320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179" exitCode=0 Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.032209 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerDied","Data":"ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163"} Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.032267 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerDied","Data":"320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179"} Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.032282 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" event={"ID":"159b9801-57e3-4cf0-9b81-10aacb5eef83","Type":"ContainerDied","Data":"a62084ce1ecb569b06c0f5e5d4ebedf6167c26b47f68c88eac425a8407c28db9"} Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.032299 4830 scope.go:117] "RemoveContainer" containerID="f4d93300488a1d98f2b7829b938554fd6261d49065ba6bab59723ae725087360" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.032515 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-r8pc4" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.035234 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cjqbn_b7e133cc-19e8-4770-9146-88dac53a6531/kube-multus/2.log" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.057021 4830 scope.go:117] "RemoveContainer" containerID="a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.064033 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-systemd-units\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.064239 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e87ff23b-1ce8-4556-8998-7fc4dd84775c-ovn-node-metrics-cert\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.064350 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-host-slash\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.064482 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-host-cni-bin\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.064600 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-run-ovn\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.064714 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-host-run-netns\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.064819 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-var-lib-openvswitch\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.064900 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e87ff23b-1ce8-4556-8998-7fc4dd84775c-ovnkube-script-lib\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 
09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.065043 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-etc-openvswitch\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.065121 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-run-openvswitch\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.065195 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-run-systemd\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.065261 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-host-kubelet\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.065325 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwvkf\" (UniqueName: \"kubernetes.io/projected/e87ff23b-1ce8-4556-8998-7fc4dd84775c-kube-api-access-gwvkf\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.065411 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-host-run-ovn-kubernetes\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.065485 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.065570 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-host-cni-netd\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.065657 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e87ff23b-1ce8-4556-8998-7fc4dd84775c-env-overrides\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc 
kubenswrapper[4830]: I0131 09:12:56.065805 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e87ff23b-1ce8-4556-8998-7fc4dd84775c-ovnkube-config\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.065902 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-node-log\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.065969 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-log-socket\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.066104 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-log-socket\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.066200 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-host-cni-bin\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.066440 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-run-ovn\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.066553 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-host-run-netns\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.066650 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-host-kubelet\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.066766 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-var-lib-openvswitch\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.067171 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-host-run-ovn-kubernetes\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.067277 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.067351 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-host-cni-netd\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.067756 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e87ff23b-1ce8-4556-8998-7fc4dd84775c-ovnkube-script-lib\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.064956 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-host-slash\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.067832 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-etc-openvswitch\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.064985 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-systemd-units\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.067877 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-run-openvswitch\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.067906 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-run-systemd\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.068241 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e87ff23b-1ce8-4556-8998-7fc4dd84775c-env-overrides\") pod \"ovnkube-node-sgnqp\" (UID: 
\"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.068348 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e87ff23b-1ce8-4556-8998-7fc4dd84775c-ovnkube-config\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.068350 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e87ff23b-1ce8-4556-8998-7fc4dd84775c-node-log\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.073746 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e87ff23b-1ce8-4556-8998-7fc4dd84775c-ovn-node-metrics-cert\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.077632 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-r8pc4"] Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.084034 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-r8pc4"] Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.091546 4830 scope.go:117] "RemoveContainer" containerID="351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.111413 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwvkf\" (UniqueName: \"kubernetes.io/projected/e87ff23b-1ce8-4556-8998-7fc4dd84775c-kube-api-access-gwvkf\") pod \"ovnkube-node-sgnqp\" (UID: \"e87ff23b-1ce8-4556-8998-7fc4dd84775c\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.114962 4830 scope.go:117] "RemoveContainer" containerID="3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.137480 4830 scope.go:117] "RemoveContainer" containerID="ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.164985 4830 scope.go:117] "RemoveContainer" containerID="320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.196419 4830 scope.go:117] "RemoveContainer" containerID="27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.216895 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.229018 4830 scope.go:117] "RemoveContainer" containerID="0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.278841 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="159b9801-57e3-4cf0-9b81-10aacb5eef83" path="/var/lib/kubelet/pods/159b9801-57e3-4cf0-9b81-10aacb5eef83/volumes" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.280277 4830 scope.go:117] "RemoveContainer" containerID="ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.397141 4830 scope.go:117] "RemoveContainer" containerID="f4d93300488a1d98f2b7829b938554fd6261d49065ba6bab59723ae725087360" Jan 31 09:12:56 crc kubenswrapper[4830]: E0131 09:12:56.402916 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4d93300488a1d98f2b7829b938554fd6261d49065ba6bab59723ae725087360\": container with ID starting with f4d93300488a1d98f2b7829b938554fd6261d49065ba6bab59723ae725087360 not found: ID does not exist" containerID="f4d93300488a1d98f2b7829b938554fd6261d49065ba6bab59723ae725087360" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.402978 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4d93300488a1d98f2b7829b938554fd6261d49065ba6bab59723ae725087360"} err="failed to get container status \"f4d93300488a1d98f2b7829b938554fd6261d49065ba6bab59723ae725087360\": rpc error: code = NotFound desc = could not find container \"f4d93300488a1d98f2b7829b938554fd6261d49065ba6bab59723ae725087360\": container with ID starting with f4d93300488a1d98f2b7829b938554fd6261d49065ba6bab59723ae725087360 not found: ID does not exist" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.403009 4830 scope.go:117] "RemoveContainer" containerID="a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09" Jan 31 09:12:56 crc kubenswrapper[4830]: E0131 09:12:56.405934 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\": container with ID starting with a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09 not found: ID does not exist" containerID="a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.405969 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09"} err="failed to get container status \"a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\": rpc error: code = NotFound desc = could not find container \"a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\": container with ID starting with a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09 not found: ID does not exist" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.405991 4830 scope.go:117] "RemoveContainer" containerID="351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358" Jan 31 09:12:56 crc kubenswrapper[4830]: E0131 09:12:56.408410 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\": container with ID starting with 351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358 not found: ID does not exist" containerID="351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.408440 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358"} err="failed to get container status \"351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\": rpc error: code = NotFound desc = could not find container \"351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\": container with ID starting with 351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358 not found: ID does not exist" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.408460 4830 scope.go:117] "RemoveContainer" containerID="3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561" Jan 31 09:12:56 crc kubenswrapper[4830]: E0131 09:12:56.411090 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\": container with ID starting with 3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561 not found: ID does not exist" containerID="3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.411136 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561"} err="failed to get container status \"3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\": rpc error: code = NotFound desc = could not find container \"3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\": container with ID starting with 3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561 not found: ID does not exist" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.411160 4830 scope.go:117] "RemoveContainer" containerID="ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163" Jan 31 09:12:56 crc kubenswrapper[4830]: E0131 09:12:56.411596 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\": container with ID starting with ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163 not found: ID does not exist" containerID="ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.411622 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163"} err="failed to get container status \"ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\": rpc error: code = NotFound desc = could not find container \"ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\": container with ID starting with ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163 not found: ID does not exist" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.411638 4830 scope.go:117] "RemoveContainer" containerID="320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179" Jan 31 09:12:56 crc 
kubenswrapper[4830]: E0131 09:12:56.411854 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\": container with ID starting with 320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179 not found: ID does not exist" containerID="320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.411881 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179"} err="failed to get container status \"320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\": rpc error: code = NotFound desc = could not find container \"320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\": container with ID starting with 320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179 not found: ID does not exist" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.411903 4830 scope.go:117] "RemoveContainer" containerID="27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6" Jan 31 09:12:56 crc kubenswrapper[4830]: E0131 09:12:56.413554 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\": container with ID starting with 27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6 not found: ID does not exist" containerID="27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.413578 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6"} err="failed to get container status \"27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\": rpc error: code = NotFound desc = could not find container \"27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\": container with ID starting with 27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6 not found: ID does not exist" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.413592 4830 scope.go:117] "RemoveContainer" containerID="0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82" Jan 31 09:12:56 crc kubenswrapper[4830]: E0131 09:12:56.417645 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\": container with ID starting with 0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82 not found: ID does not exist" containerID="0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.417685 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82"} err="failed to get container status \"0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\": rpc error: code = NotFound desc = could not find container \"0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\": container with ID starting with 0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82 not found: ID does not exist" Jan 31 09:12:56 crc kubenswrapper[4830]: 
I0131 09:12:56.417710 4830 scope.go:117] "RemoveContainer" containerID="ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4" Jan 31 09:12:56 crc kubenswrapper[4830]: E0131 09:12:56.418982 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\": container with ID starting with ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4 not found: ID does not exist" containerID="ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.419058 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4"} err="failed to get container status \"ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\": rpc error: code = NotFound desc = could not find container \"ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\": container with ID starting with ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4 not found: ID does not exist" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.419097 4830 scope.go:117] "RemoveContainer" containerID="f4d93300488a1d98f2b7829b938554fd6261d49065ba6bab59723ae725087360" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.419573 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4d93300488a1d98f2b7829b938554fd6261d49065ba6bab59723ae725087360"} err="failed to get container status \"f4d93300488a1d98f2b7829b938554fd6261d49065ba6bab59723ae725087360\": rpc error: code = NotFound desc = could not find container \"f4d93300488a1d98f2b7829b938554fd6261d49065ba6bab59723ae725087360\": container with ID starting with f4d93300488a1d98f2b7829b938554fd6261d49065ba6bab59723ae725087360 not found: ID does not exist" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.419630 4830 scope.go:117] "RemoveContainer" containerID="a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.424641 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09"} err="failed to get container status \"a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\": rpc error: code = NotFound desc = could not find container \"a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09\": container with ID starting with a39d2c99a3b1f6ff5d003d2163046c5127e41dac82f41744c1fbffd0cec8ff09 not found: ID does not exist" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.424689 4830 scope.go:117] "RemoveContainer" containerID="351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.425274 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358"} err="failed to get container status \"351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\": rpc error: code = NotFound desc = could not find container \"351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358\": container with ID starting with 351a35da1a2503d6c5d323c7a9a26460236f70517498b41055e6d21bbf92b358 not found: ID does not exist" Jan 31 09:12:56 crc kubenswrapper[4830]: 
I0131 09:12:56.425330 4830 scope.go:117] "RemoveContainer" containerID="3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.429854 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561"} err="failed to get container status \"3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\": rpc error: code = NotFound desc = could not find container \"3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561\": container with ID starting with 3614c1ff7ca6762c15b9b85e6187ad94029630e54254e8ccb45b5e7c24692561 not found: ID does not exist" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.429896 4830 scope.go:117] "RemoveContainer" containerID="ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.431801 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163"} err="failed to get container status \"ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\": rpc error: code = NotFound desc = could not find container \"ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163\": container with ID starting with ae12083012db5dcd6cc0b23cf64d04c952587334fdf617d36d4cdc2f657eb163 not found: ID does not exist" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.431823 4830 scope.go:117] "RemoveContainer" containerID="320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.432947 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179"} err="failed to get container status \"320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\": rpc error: code = NotFound desc = could not find container \"320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179\": container with ID starting with 320f24051b1a05475b54823ec3401ad252f0efd6c0d3c5098643f959bad15179 not found: ID does not exist" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.432971 4830 scope.go:117] "RemoveContainer" containerID="27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.433689 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6"} err="failed to get container status \"27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\": rpc error: code = NotFound desc = could not find container \"27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6\": container with ID starting with 27c8241e84418d80626b17fe37ccf994304394c3fa85c7f1a058d8d54ad7e7d6 not found: ID does not exist" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.433711 4830 scope.go:117] "RemoveContainer" containerID="0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.435331 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82"} err="failed to get container status 
\"0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\": rpc error: code = NotFound desc = could not find container \"0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82\": container with ID starting with 0fd109715e65725addfeba399eacb1abbe16132e06e9ee50b542e877ca9d8d82 not found: ID does not exist" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.435354 4830 scope.go:117] "RemoveContainer" containerID="ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.435990 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4"} err="failed to get container status \"ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\": rpc error: code = NotFound desc = could not find container \"ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4\": container with ID starting with ba6e5a840aca1952877a5d2c18af1d28779b13d54e0c653a2b02b3c3d7b748e4 not found: ID does not exist" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.827824 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-bqklj"] Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.828951 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.831777 4830 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-r4txh" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.840702 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.841021 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.846811 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-45w4k"] Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.852517 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-45w4k" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.860745 4830 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-pt47j" Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.865932 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-22grv"] Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.867037 4830 util.go:30] "No sandbox for pod can be found. 
Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.827824 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-bqklj"]
Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.828951 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj"
Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.831777 4830 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-r4txh"
Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.840702 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.841021 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.846811 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-45w4k"]
Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.852517 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-45w4k"
Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.860745 4830 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-pt47j"
Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.865932 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-22grv"]
Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.867037 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-22grv"
Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.872921 4830 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-bkxp4"
Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.883213 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj774\" (UniqueName: \"kubernetes.io/projected/c75f40f1-4c71-458a-906c-af1914c240de-kube-api-access-xj774\") pod \"cert-manager-cainjector-cf98fcc89-bqklj\" (UID: \"c75f40f1-4c71-458a-906c-af1914c240de\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj"
Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.984832 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jbwq\" (UniqueName: \"kubernetes.io/projected/eb0ab04d-4e0a-4a84-965a-2c0513d6d79a-kube-api-access-2jbwq\") pod \"cert-manager-webhook-687f57d79b-22grv\" (UID: \"eb0ab04d-4e0a-4a84-965a-2c0513d6d79a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-22grv"
Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.984949 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxccq\" (UniqueName: \"kubernetes.io/projected/25c15123-ed27-483d-8a40-7241f614a210-kube-api-access-wxccq\") pod \"cert-manager-858654f9db-45w4k\" (UID: \"25c15123-ed27-483d-8a40-7241f614a210\") " pod="cert-manager/cert-manager-858654f9db-45w4k"
Jan 31 09:12:56 crc kubenswrapper[4830]: I0131 09:12:56.985150 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj774\" (UniqueName: \"kubernetes.io/projected/c75f40f1-4c71-458a-906c-af1914c240de-kube-api-access-xj774\") pod \"cert-manager-cainjector-cf98fcc89-bqklj\" (UID: \"c75f40f1-4c71-458a-906c-af1914c240de\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj"
Jan 31 09:12:57 crc kubenswrapper[4830]: I0131 09:12:57.008972 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj774\" (UniqueName: \"kubernetes.io/projected/c75f40f1-4c71-458a-906c-af1914c240de-kube-api-access-xj774\") pod \"cert-manager-cainjector-cf98fcc89-bqklj\" (UID: \"c75f40f1-4c71-458a-906c-af1914c240de\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj"
Jan 31 09:12:57 crc kubenswrapper[4830]: I0131 09:12:57.043825 4830 generic.go:334] "Generic (PLEG): container finished" podID="e87ff23b-1ce8-4556-8998-7fc4dd84775c" containerID="04c1873105c6829fe98084be31390dd2dbc514079266433d37235eda907582e0" exitCode=0
Jan 31 09:12:57 crc kubenswrapper[4830]: I0131 09:12:57.043890 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" event={"ID":"e87ff23b-1ce8-4556-8998-7fc4dd84775c","Type":"ContainerDied","Data":"04c1873105c6829fe98084be31390dd2dbc514079266433d37235eda907582e0"}
Jan 31 09:12:57 crc kubenswrapper[4830]: I0131 09:12:57.043931 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" event={"ID":"e87ff23b-1ce8-4556-8998-7fc4dd84775c","Type":"ContainerStarted","Data":"90d3413c8160e47f237c88916f163b2ba45e65f89e6487fba75216f93de5d910"}
Jan 31 09:12:57 crc kubenswrapper[4830]: I0131 09:12:57.086699 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jbwq\" (UniqueName: \"kubernetes.io/projected/eb0ab04d-4e0a-4a84-965a-2c0513d6d79a-kube-api-access-2jbwq\") pod \"cert-manager-webhook-687f57d79b-22grv\" (UID: \"eb0ab04d-4e0a-4a84-965a-2c0513d6d79a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-22grv"
Jan 31 09:12:57 crc kubenswrapper[4830]: I0131 09:12:57.087563 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxccq\" (UniqueName: \"kubernetes.io/projected/25c15123-ed27-483d-8a40-7241f614a210-kube-api-access-wxccq\") pod \"cert-manager-858654f9db-45w4k\" (UID: \"25c15123-ed27-483d-8a40-7241f614a210\") " pod="cert-manager/cert-manager-858654f9db-45w4k"
Jan 31 09:12:57 crc kubenswrapper[4830]: I0131 09:12:57.112157 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxccq\" (UniqueName: \"kubernetes.io/projected/25c15123-ed27-483d-8a40-7241f614a210-kube-api-access-wxccq\") pod \"cert-manager-858654f9db-45w4k\" (UID: \"25c15123-ed27-483d-8a40-7241f614a210\") " pod="cert-manager/cert-manager-858654f9db-45w4k"
Jan 31 09:12:57 crc kubenswrapper[4830]: I0131 09:12:57.112320 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jbwq\" (UniqueName: \"kubernetes.io/projected/eb0ab04d-4e0a-4a84-965a-2c0513d6d79a-kube-api-access-2jbwq\") pod \"cert-manager-webhook-687f57d79b-22grv\" (UID: \"eb0ab04d-4e0a-4a84-965a-2c0513d6d79a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-22grv"
Jan 31 09:12:57 crc kubenswrapper[4830]: I0131 09:12:57.158164 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj"
Jan 31 09:12:57 crc kubenswrapper[4830]: I0131 09:12:57.177718 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-45w4k"
Jan 31 09:12:57 crc kubenswrapper[4830]: I0131 09:12:57.185127 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-22grv"
Jan 31 09:12:57 crc kubenswrapper[4830]: E0131 09:12:57.207005 4830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-bqklj_cert-manager_c75f40f1-4c71-458a-906c-af1914c240de_0(1808ad092b84e13388aa200eaa9ead5a118369e07e110891f14ab98ba464270b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 31 09:12:57 crc kubenswrapper[4830]: E0131 09:12:57.207106 4830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-bqklj_cert-manager_c75f40f1-4c71-458a-906c-af1914c240de_0(1808ad092b84e13388aa200eaa9ead5a118369e07e110891f14ab98ba464270b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj"
Jan 31 09:12:57 crc kubenswrapper[4830]: E0131 09:12:57.207132 4830 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-bqklj_cert-manager_c75f40f1-4c71-458a-906c-af1914c240de_0(1808ad092b84e13388aa200eaa9ead5a118369e07e110891f14ab98ba464270b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj"
Jan 31 09:12:57 crc kubenswrapper[4830]: E0131 09:12:57.207213 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-bqklj_cert-manager(c75f40f1-4c71-458a-906c-af1914c240de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-bqklj_cert-manager(c75f40f1-4c71-458a-906c-af1914c240de)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-bqklj_cert-manager_c75f40f1-4c71-458a-906c-af1914c240de_0(1808ad092b84e13388aa200eaa9ead5a118369e07e110891f14ab98ba464270b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj" podUID="c75f40f1-4c71-458a-906c-af1914c240de"
Jan 31 09:12:57 crc kubenswrapper[4830]: E0131 09:12:57.237413 4830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-45w4k_cert-manager_25c15123-ed27-483d-8a40-7241f614a210_0(d187fe79cc77c6b718e18820dc4a1448f02f49998a215997944dce53405080e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 31 09:12:57 crc kubenswrapper[4830]: E0131 09:12:57.237486 4830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-45w4k_cert-manager_25c15123-ed27-483d-8a40-7241f614a210_0(d187fe79cc77c6b718e18820dc4a1448f02f49998a215997944dce53405080e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-45w4k"
Jan 31 09:12:57 crc kubenswrapper[4830]: E0131 09:12:57.237510 4830 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-45w4k_cert-manager_25c15123-ed27-483d-8a40-7241f614a210_0(d187fe79cc77c6b718e18820dc4a1448f02f49998a215997944dce53405080e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-45w4k"
Jan 31 09:12:57 crc kubenswrapper[4830]: E0131 09:12:57.237590 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-45w4k_cert-manager(25c15123-ed27-483d-8a40-7241f614a210)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-45w4k_cert-manager(25c15123-ed27-483d-8a40-7241f614a210)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-45w4k_cert-manager_25c15123-ed27-483d-8a40-7241f614a210_0(d187fe79cc77c6b718e18820dc4a1448f02f49998a215997944dce53405080e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-45w4k" podUID="25c15123-ed27-483d-8a40-7241f614a210"
Jan 31 09:12:57 crc kubenswrapper[4830]: E0131 09:12:57.269213 4830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-22grv_cert-manager_eb0ab04d-4e0a-4a84-965a-2c0513d6d79a_0(1ba183ba3d0b258b1d1ed5351cf9656d57d25f85e3b76cdfbd26b1b95e7eb4c3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 31 09:12:57 crc kubenswrapper[4830]: E0131 09:12:57.269287 4830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-22grv_cert-manager_eb0ab04d-4e0a-4a84-965a-2c0513d6d79a_0(1ba183ba3d0b258b1d1ed5351cf9656d57d25f85e3b76cdfbd26b1b95e7eb4c3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-22grv"
Jan 31 09:12:57 crc kubenswrapper[4830]: E0131 09:12:57.269313 4830 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-22grv_cert-manager_eb0ab04d-4e0a-4a84-965a-2c0513d6d79a_0(1ba183ba3d0b258b1d1ed5351cf9656d57d25f85e3b76cdfbd26b1b95e7eb4c3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-22grv"
Jan 31 09:12:57 crc kubenswrapper[4830]: E0131 09:12:57.269363 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-22grv_cert-manager(eb0ab04d-4e0a-4a84-965a-2c0513d6d79a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-22grv_cert-manager(eb0ab04d-4e0a-4a84-965a-2c0513d6d79a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-22grv_cert-manager_eb0ab04d-4e0a-4a84-965a-2c0513d6d79a_0(1ba183ba3d0b258b1d1ed5351cf9656d57d25f85e3b76cdfbd26b1b95e7eb4c3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-22grv" podUID="eb0ab04d-4e0a-4a84-965a-2c0513d6d79a"
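
All three cert-manager sandboxes fail for the same root cause: CRI-O delegates pod networking to a CNI plugin, and no network configuration has been written to /etc/kubernetes/cni/net.d/ yet, because the network stack itself is still coming up (ovnkube-node-sgnqp only began starting containers at 09:12:56). Each failed attempt is reported four times as the error propagates up the call chain: log.go (raw CRI response), then kuberuntime_sandbox.go, then kuberuntime_manager.go, and finally pod_workers.go, which records "Error syncing pod, skipping" and leaves the pod to be retried later. A small diagnostic sketch in Go of the condition behind the message (the directory comes from the error text; the .conf/.conflist/.json extensions are an assumption based on what CNI loaders conventionally accept):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // Reports whether any CNI network config is present yet, mirroring
    // "no CNI configuration file in /etc/kubernetes/cni/net.d/".
    func main() {
    	dir := "/etc/kubernetes/cni/net.d"
    	var found []string
    	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
    		m, _ := filepath.Glob(filepath.Join(dir, pat))
    		found = append(found, m...)
    	}
    	if len(found) == 0 {
    		fmt.Println("no CNI config yet; sandbox creation will keep failing")
    		os.Exit(1)
    	}
    	fmt.Println("CNI config present:", found)
    }
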
Jan 31 09:12:58 crc kubenswrapper[4830]: I0131 09:12:58.092049 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" event={"ID":"e87ff23b-1ce8-4556-8998-7fc4dd84775c","Type":"ContainerStarted","Data":"0202dee534293b9abdd5329896e3b432efdb8d0a5bd309427b4af2a3297213c9"}
Jan 31 09:12:58 crc kubenswrapper[4830]: I0131 09:12:58.092550 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" event={"ID":"e87ff23b-1ce8-4556-8998-7fc4dd84775c","Type":"ContainerStarted","Data":"272d96b0021746af44de372b421fa9f4afdd50440870d0b32e559866f3090232"}
Jan 31 09:12:58 crc kubenswrapper[4830]: I0131 09:12:58.092565 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" event={"ID":"e87ff23b-1ce8-4556-8998-7fc4dd84775c","Type":"ContainerStarted","Data":"73ba3b73365f2bc186c9910fbbc50c9cb984ba743bfb902e1d075c3e164715cf"}
Jan 31 09:12:58 crc kubenswrapper[4830]: I0131 09:12:58.092578 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" event={"ID":"e87ff23b-1ce8-4556-8998-7fc4dd84775c","Type":"ContainerStarted","Data":"807dfa2047f8d4e3113aa79f4823f72fb337811462e086676d502d30ad464acd"}
Jan 31 09:12:58 crc kubenswrapper[4830]: I0131 09:12:58.092586 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" event={"ID":"e87ff23b-1ce8-4556-8998-7fc4dd84775c","Type":"ContainerStarted","Data":"331ee484631fe44674047f7ea5df8726c725635ba093e8245e8b40b70d411df3"}
Jan 31 09:12:58 crc kubenswrapper[4830]: I0131 09:12:58.092595 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" event={"ID":"e87ff23b-1ce8-4556-8998-7fc4dd84775c","Type":"ContainerStarted","Data":"2370e5939f609ce2da3d0e8f0ea38e41c28fa3c2193c4dc5eb0f8b8e865902d4"}
Jan 31 09:13:02 crc kubenswrapper[4830]: I0131 09:13:02.122415 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" event={"ID":"e87ff23b-1ce8-4556-8998-7fc4dd84775c","Type":"ContainerStarted","Data":"5757cbb4b4e85d1dc4742376b76ddc45dcc131cc770ad11c4c61223a39c97c1b"}
Jan 31 09:13:03 crc kubenswrapper[4830]: I0131 09:13:03.134975 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" event={"ID":"e87ff23b-1ce8-4556-8998-7fc4dd84775c","Type":"ContainerStarted","Data":"59a6a039b5128d999bcc97057743cb139970ade1224f165162b72b3b98445657"}
Jan 31 09:13:03 crc kubenswrapper[4830]: I0131 09:13:03.135419 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp"
Jan 31 09:13:03 crc kubenswrapper[4830]: I0131 09:13:03.135480 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp"
Jan 31 09:13:03 crc kubenswrapper[4830]: I0131 09:13:03.173501 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" podStartSLOduration=8.173474162 podStartE2EDuration="8.173474162s" podCreationTimestamp="2026-01-31 09:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:13:03.169536369 +0000 UTC m=+727.662898861" watchObservedRunningTime="2026-01-31 09:13:03.173474162 +0000 UTC m=+727.666836604"
Jan 31 09:13:03 crc kubenswrapper[4830]: I0131 09:13:03.179756 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp"
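
The pod_startup_latency_tracker entry above is worth decoding as a worked example. podCreationTimestamp is 09:12:55 and the pod was observed running at 09:13:03.173, so the end-to-end startup time is 09:13:03.173474162 minus 09:12:55, roughly 8.173 s, matching podStartE2EDuration="8.173474162s". firstStartedPulling and lastFinishedPulling are the Go zero time (0001-01-01), meaning no image pull happened: every image was already on the node, so podStartSLOduration equals the E2E duration. For pods that do pull images, the SLO figure excludes the pulling window, as the cert-manager entries later in this section show.
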
pod="cert-manager/cert-manager-webhook-687f57d79b-22grv" Jan 31 09:13:03 crc kubenswrapper[4830]: E0131 09:13:03.380329 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-22grv_cert-manager(eb0ab04d-4e0a-4a84-965a-2c0513d6d79a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-22grv_cert-manager(eb0ab04d-4e0a-4a84-965a-2c0513d6d79a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-22grv_cert-manager_eb0ab04d-4e0a-4a84-965a-2c0513d6d79a_0(a9a2f3f5cb1624d0e6211b49e1d313b1fbb41cd9fada157559917b34c42571f8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-22grv" podUID="eb0ab04d-4e0a-4a84-965a-2c0513d6d79a" Jan 31 09:13:03 crc kubenswrapper[4830]: E0131 09:13:03.402512 4830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-45w4k_cert-manager_25c15123-ed27-483d-8a40-7241f614a210_0(4518f229bf7bd9c546b56eace8821ebc8e709693b5816acf9eea7fade061c499): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 09:13:03 crc kubenswrapper[4830]: E0131 09:13:03.402771 4830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-45w4k_cert-manager_25c15123-ed27-483d-8a40-7241f614a210_0(4518f229bf7bd9c546b56eace8821ebc8e709693b5816acf9eea7fade061c499): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-45w4k" Jan 31 09:13:03 crc kubenswrapper[4830]: E0131 09:13:03.402803 4830 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-45w4k_cert-manager_25c15123-ed27-483d-8a40-7241f614a210_0(4518f229bf7bd9c546b56eace8821ebc8e709693b5816acf9eea7fade061c499): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-45w4k" Jan 31 09:13:03 crc kubenswrapper[4830]: E0131 09:13:03.402884 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-45w4k_cert-manager(25c15123-ed27-483d-8a40-7241f614a210)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-45w4k_cert-manager(25c15123-ed27-483d-8a40-7241f614a210)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-45w4k_cert-manager_25c15123-ed27-483d-8a40-7241f614a210_0(4518f229bf7bd9c546b56eace8821ebc8e709693b5816acf9eea7fade061c499): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-45w4k" podUID="25c15123-ed27-483d-8a40-7241f614a210" Jan 31 09:13:03 crc kubenswrapper[4830]: E0131 09:13:03.423964 4830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-bqklj_cert-manager_c75f40f1-4c71-458a-906c-af1914c240de_0(156ed7d9ca2d99d9586d05216a82cbec30dedc34c6546dfee324320dfb027d51): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 09:13:03 crc kubenswrapper[4830]: E0131 09:13:03.424064 4830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-bqklj_cert-manager_c75f40f1-4c71-458a-906c-af1914c240de_0(156ed7d9ca2d99d9586d05216a82cbec30dedc34c6546dfee324320dfb027d51): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj" Jan 31 09:13:03 crc kubenswrapper[4830]: E0131 09:13:03.424098 4830 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-bqklj_cert-manager_c75f40f1-4c71-458a-906c-af1914c240de_0(156ed7d9ca2d99d9586d05216a82cbec30dedc34c6546dfee324320dfb027d51): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj" Jan 31 09:13:03 crc kubenswrapper[4830]: E0131 09:13:03.424168 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-bqklj_cert-manager(c75f40f1-4c71-458a-906c-af1914c240de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-bqklj_cert-manager(c75f40f1-4c71-458a-906c-af1914c240de)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-bqklj_cert-manager_c75f40f1-4c71-458a-906c-af1914c240de_0(156ed7d9ca2d99d9586d05216a82cbec30dedc34c6546dfee324320dfb027d51): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj" podUID="c75f40f1-4c71-458a-906c-af1914c240de" Jan 31 09:13:04 crc kubenswrapper[4830]: I0131 09:13:04.142455 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:13:04 crc kubenswrapper[4830]: I0131 09:13:04.174310 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:13:07 crc kubenswrapper[4830]: I0131 09:13:07.252156 4830 scope.go:117] "RemoveContainer" containerID="688600880adb08704161ae3933906d1341bce11f0e4231769fa30f33301668d5" Jan 31 09:13:07 crc kubenswrapper[4830]: E0131 09:13:07.253163 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-cjqbn_openshift-multus(b7e133cc-19e8-4770-9146-88dac53a6531)\"" pod="openshift-multus/multus-cjqbn" podUID="b7e133cc-19e8-4770-9146-88dac53a6531" Jan 31 09:13:14 crc kubenswrapper[4830]: I0131 09:13:14.353452 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:13:14 crc kubenswrapper[4830]: I0131 09:13:14.353548 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:13:15 crc kubenswrapper[4830]: I0131 09:13:15.251118 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj" Jan 31 09:13:15 crc kubenswrapper[4830]: I0131 09:13:15.252437 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj" Jan 31 09:13:15 crc kubenswrapper[4830]: E0131 09:13:15.288669 4830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-bqklj_cert-manager_c75f40f1-4c71-458a-906c-af1914c240de_0(2c4d6dbd3665b3460f4a7f09144c72132bb5dbaa65236b06b2ded60e412a9088): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 09:13:15 crc kubenswrapper[4830]: E0131 09:13:15.288791 4830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-bqklj_cert-manager_c75f40f1-4c71-458a-906c-af1914c240de_0(2c4d6dbd3665b3460f4a7f09144c72132bb5dbaa65236b06b2ded60e412a9088): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
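
Here the causal chain closes: kube-multus is the component expected to provide the CNI configuration, and it is itself in CrashLoopBackOff, so the sandbox failures above will keep recurring until it stays up. The kubelet restarts a crash-looping container with exponential backoff, delays of 10s, 20s, 40s and so on, doubling per failure up to a five-minute cap, which is why this entry says "back-off 20s" and the next start attempt only appears at 09:13:21. A toy Go sketch of that schedule (the 10s base and 5m cap are the documented kubelet defaults; the function name is made up):

    package main

    import (
    	"fmt"
    	"time"
    )

    // crashLoopDelay returns a kubelet-style restart delay for the nth
    // consecutive failed restart: 10s, doubling per failure, capped at 5m.
    func crashLoopDelay(restarts int) time.Duration {
    	d := 10 * time.Second
    	for i := 0; i < restarts && d < 5*time.Minute; i++ {
    		d *= 2
    	}
    	if d > 5*time.Minute {
    		d = 5 * time.Minute
    	}
    	return d
    }

    func main() {
    	for n := 0; n <= 5; n++ {
    		fmt.Printf("restart %d -> back-off %s\n", n, crashLoopDelay(n))
    	}
    	// restart 1 -> back-off 20s, matching the log entry above.
    }
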
pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj" Jan 31 09:13:15 crc kubenswrapper[4830]: E0131 09:13:15.288828 4830 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-bqklj_cert-manager_c75f40f1-4c71-458a-906c-af1914c240de_0(2c4d6dbd3665b3460f4a7f09144c72132bb5dbaa65236b06b2ded60e412a9088): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj" Jan 31 09:13:15 crc kubenswrapper[4830]: E0131 09:13:15.288906 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-bqklj_cert-manager(c75f40f1-4c71-458a-906c-af1914c240de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-bqklj_cert-manager(c75f40f1-4c71-458a-906c-af1914c240de)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-bqklj_cert-manager_c75f40f1-4c71-458a-906c-af1914c240de_0(2c4d6dbd3665b3460f4a7f09144c72132bb5dbaa65236b06b2ded60e412a9088): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj" podUID="c75f40f1-4c71-458a-906c-af1914c240de" Jan 31 09:13:18 crc kubenswrapper[4830]: I0131 09:13:18.251468 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-45w4k" Jan 31 09:13:18 crc kubenswrapper[4830]: I0131 09:13:18.251616 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-22grv" Jan 31 09:13:18 crc kubenswrapper[4830]: I0131 09:13:18.252801 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-45w4k" Jan 31 09:13:18 crc kubenswrapper[4830]: I0131 09:13:18.253523 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-22grv" Jan 31 09:13:18 crc kubenswrapper[4830]: E0131 09:13:18.299827 4830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-45w4k_cert-manager_25c15123-ed27-483d-8a40-7241f614a210_0(b17a48f49cc2fd6c571f71f769879df715c449eeac76292c3cfb6f0da4b1bb7c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 09:13:18 crc kubenswrapper[4830]: E0131 09:13:18.299922 4830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-45w4k_cert-manager_25c15123-ed27-483d-8a40-7241f614a210_0(b17a48f49cc2fd6c571f71f769879df715c449eeac76292c3cfb6f0da4b1bb7c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-858654f9db-45w4k" Jan 31 09:13:18 crc kubenswrapper[4830]: E0131 09:13:18.299961 4830 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-45w4k_cert-manager_25c15123-ed27-483d-8a40-7241f614a210_0(b17a48f49cc2fd6c571f71f769879df715c449eeac76292c3cfb6f0da4b1bb7c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-45w4k" Jan 31 09:13:18 crc kubenswrapper[4830]: E0131 09:13:18.300034 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-45w4k_cert-manager(25c15123-ed27-483d-8a40-7241f614a210)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-45w4k_cert-manager(25c15123-ed27-483d-8a40-7241f614a210)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-45w4k_cert-manager_25c15123-ed27-483d-8a40-7241f614a210_0(b17a48f49cc2fd6c571f71f769879df715c449eeac76292c3cfb6f0da4b1bb7c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-45w4k" podUID="25c15123-ed27-483d-8a40-7241f614a210" Jan 31 09:13:18 crc kubenswrapper[4830]: E0131 09:13:18.312104 4830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-22grv_cert-manager_eb0ab04d-4e0a-4a84-965a-2c0513d6d79a_0(ddee02cf1669231f0dbedc4784564b8764a7dfa4a45b3dff25cafe244fb837c5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 09:13:18 crc kubenswrapper[4830]: E0131 09:13:18.312241 4830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-22grv_cert-manager_eb0ab04d-4e0a-4a84-965a-2c0513d6d79a_0(ddee02cf1669231f0dbedc4784564b8764a7dfa4a45b3dff25cafe244fb837c5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-22grv" Jan 31 09:13:18 crc kubenswrapper[4830]: E0131 09:13:18.312300 4830 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-22grv_cert-manager_eb0ab04d-4e0a-4a84-965a-2c0513d6d79a_0(ddee02cf1669231f0dbedc4784564b8764a7dfa4a45b3dff25cafe244fb837c5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-22grv" Jan 31 09:13:18 crc kubenswrapper[4830]: E0131 09:13:18.312398 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-22grv_cert-manager(eb0ab04d-4e0a-4a84-965a-2c0513d6d79a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-22grv_cert-manager(eb0ab04d-4e0a-4a84-965a-2c0513d6d79a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-22grv_cert-manager_eb0ab04d-4e0a-4a84-965a-2c0513d6d79a_0(ddee02cf1669231f0dbedc4784564b8764a7dfa4a45b3dff25cafe244fb837c5): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-22grv" podUID="eb0ab04d-4e0a-4a84-965a-2c0513d6d79a" Jan 31 09:13:21 crc kubenswrapper[4830]: I0131 09:13:21.251713 4830 scope.go:117] "RemoveContainer" containerID="688600880adb08704161ae3933906d1341bce11f0e4231769fa30f33301668d5" Jan 31 09:13:22 crc kubenswrapper[4830]: I0131 09:13:22.286570 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cjqbn_b7e133cc-19e8-4770-9146-88dac53a6531/kube-multus/2.log" Jan 31 09:13:22 crc kubenswrapper[4830]: I0131 09:13:22.287074 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cjqbn" event={"ID":"b7e133cc-19e8-4770-9146-88dac53a6531","Type":"ContainerStarted","Data":"eb82240da946adfa611b413c0481d88ea0cf872aab62ba4c4733872acb9531a2"} Jan 31 09:13:26 crc kubenswrapper[4830]: I0131 09:13:26.299040 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" Jan 31 09:13:27 crc kubenswrapper[4830]: I0131 09:13:27.250981 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj" Jan 31 09:13:27 crc kubenswrapper[4830]: I0131 09:13:27.251957 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj" Jan 31 09:13:27 crc kubenswrapper[4830]: I0131 09:13:27.682036 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-bqklj"] Jan 31 09:13:28 crc kubenswrapper[4830]: I0131 09:13:28.333682 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj" event={"ID":"c75f40f1-4c71-458a-906c-af1914c240de","Type":"ContainerStarted","Data":"d4dba21a02a1ed4fab54d6732064868c3f23f13f64987f6f495c135485e10d95"} Jan 31 09:13:30 crc kubenswrapper[4830]: I0131 09:13:30.349790 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj" event={"ID":"c75f40f1-4c71-458a-906c-af1914c240de","Type":"ContainerStarted","Data":"bc4b45dae29a36ef41a95652f3fe71fd9e0b56cd60115fc1d78ba8dd8dc7ea6f"} Jan 31 09:13:30 crc kubenswrapper[4830]: I0131 09:13:30.374438 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj" podStartSLOduration=32.248244398 podStartE2EDuration="34.374422237s" podCreationTimestamp="2026-01-31 09:12:56 +0000 UTC" firstStartedPulling="2026-01-31 09:13:27.702984373 +0000 UTC m=+752.196346815" lastFinishedPulling="2026-01-31 09:13:29.829162222 +0000 UTC m=+754.322524654" observedRunningTime="2026-01-31 09:13:30.373411357 +0000 UTC m=+754.866773799" watchObservedRunningTime="2026-01-31 09:13:30.374422237 +0000 UTC m=+754.867784679" Jan 31 09:13:31 crc kubenswrapper[4830]: I0131 09:13:31.251200 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-45w4k" Jan 31 09:13:31 crc kubenswrapper[4830]: I0131 09:13:31.251842 4830 util.go:30] "No sandbox for pod can be found. 
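
This is the turning point of the section. Once the 20s backoff expires, the kubelet removes the dead kube-multus container (09:13:21) and starts a new one (09:13:22), and this time it stays up. Multus generates its CNI configuration as it initializes, which is presumably what finally puts a config file into /etc/kubernetes/cni/net.d/: from 09:13:27 onward the pending cert-manager sandboxes are created instead of erroring, and ContainerStarted events follow.
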
Jan 31 09:13:26 crc kubenswrapper[4830]: I0131 09:13:26.299040 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp"
Jan 31 09:13:27 crc kubenswrapper[4830]: I0131 09:13:27.250981 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj"
Jan 31 09:13:27 crc kubenswrapper[4830]: I0131 09:13:27.251957 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj"
Jan 31 09:13:27 crc kubenswrapper[4830]: I0131 09:13:27.682036 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-bqklj"]
Jan 31 09:13:28 crc kubenswrapper[4830]: I0131 09:13:28.333682 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj" event={"ID":"c75f40f1-4c71-458a-906c-af1914c240de","Type":"ContainerStarted","Data":"d4dba21a02a1ed4fab54d6732064868c3f23f13f64987f6f495c135485e10d95"}
Jan 31 09:13:30 crc kubenswrapper[4830]: I0131 09:13:30.349790 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj" event={"ID":"c75f40f1-4c71-458a-906c-af1914c240de","Type":"ContainerStarted","Data":"bc4b45dae29a36ef41a95652f3fe71fd9e0b56cd60115fc1d78ba8dd8dc7ea6f"}
Jan 31 09:13:30 crc kubenswrapper[4830]: I0131 09:13:30.374438 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqklj" podStartSLOduration=32.248244398 podStartE2EDuration="34.374422237s" podCreationTimestamp="2026-01-31 09:12:56 +0000 UTC" firstStartedPulling="2026-01-31 09:13:27.702984373 +0000 UTC m=+752.196346815" lastFinishedPulling="2026-01-31 09:13:29.829162222 +0000 UTC m=+754.322524654" observedRunningTime="2026-01-31 09:13:30.373411357 +0000 UTC m=+754.866773799" watchObservedRunningTime="2026-01-31 09:13:30.374422237 +0000 UTC m=+754.867784679"
Jan 31 09:13:31 crc kubenswrapper[4830]: I0131 09:13:31.251200 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-45w4k"
Jan 31 09:13:31 crc kubenswrapper[4830]: I0131 09:13:31.251842 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-45w4k"
Jan 31 09:13:31 crc kubenswrapper[4830]: I0131 09:13:31.664369 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-45w4k"]
Jan 31 09:13:32 crc kubenswrapper[4830]: I0131 09:13:32.367248 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-45w4k" event={"ID":"25c15123-ed27-483d-8a40-7241f614a210","Type":"ContainerStarted","Data":"cf49861705aaac3ae2bc4040da90b9340212c4b2ea4db6be3b8583442b6a6bb7"}
Jan 31 09:13:33 crc kubenswrapper[4830]: I0131 09:13:33.251126 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-22grv"
Jan 31 09:13:33 crc kubenswrapper[4830]: I0131 09:13:33.251674 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-22grv"
Jan 31 09:13:33 crc kubenswrapper[4830]: I0131 09:13:33.501059 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-22grv"]
Jan 31 09:13:34 crc kubenswrapper[4830]: I0131 09:13:34.380783 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-45w4k" event={"ID":"25c15123-ed27-483d-8a40-7241f614a210","Type":"ContainerStarted","Data":"dd6ebf8dc87099efb2f578026531f77547a2348c13f3d2208022a7313a2b7afc"}
Jan 31 09:13:34 crc kubenswrapper[4830]: I0131 09:13:34.385255 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-22grv" event={"ID":"eb0ab04d-4e0a-4a84-965a-2c0513d6d79a","Type":"ContainerStarted","Data":"0c0d9475b6fc6e96c9b5e9a3b0e3c663d985126e7761189af7d3b021f5208cc2"}
Jan 31 09:13:34 crc kubenswrapper[4830]: I0131 09:13:34.407668 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-45w4k" podStartSLOduration=36.853465594 podStartE2EDuration="38.407439002s" podCreationTimestamp="2026-01-31 09:12:56 +0000 UTC" firstStartedPulling="2026-01-31 09:13:31.673202931 +0000 UTC m=+756.166565373" lastFinishedPulling="2026-01-31 09:13:33.227176339 +0000 UTC m=+757.720538781" observedRunningTime="2026-01-31 09:13:34.399902425 +0000 UTC m=+758.893264887" watchObservedRunningTime="2026-01-31 09:13:34.407439002 +0000 UTC m=+758.900801444"
Jan 31 09:13:36 crc kubenswrapper[4830]: I0131 09:13:36.403168 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-22grv" event={"ID":"eb0ab04d-4e0a-4a84-965a-2c0513d6d79a","Type":"ContainerStarted","Data":"8a54447fbc2df77483fc47ed3cd361cd6819cb3ac4710d4892449b8b1ea4d522"}
Jan 31 09:13:36 crc kubenswrapper[4830]: I0131 09:13:36.404186 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-22grv"
Jan 31 09:13:40 crc kubenswrapper[4830]: I0131 09:13:40.682384 4830 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 31 09:13:42 crc kubenswrapper[4830]: I0131 09:13:42.192498 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-22grv"
Jan 31 09:13:42 crc kubenswrapper[4830]: I0131 09:13:42.209536 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-22grv" podStartSLOduration=44.110110605 podStartE2EDuration="46.209513214s" podCreationTimestamp="2026-01-31 09:12:56 +0000 UTC" firstStartedPulling="2026-01-31 09:13:33.505227623 +0000 UTC m=+757.998590065" lastFinishedPulling="2026-01-31 09:13:35.604630232 +0000 UTC m=+760.097992674" observedRunningTime="2026-01-31 09:13:36.424533912 +0000 UTC m=+760.917896354" watchObservedRunningTime="2026-01-31 09:13:42.209513214 +0000 UTC m=+766.702875676"
\"kubernetes.io/empty-dir/f9a42034-ff52-4b09-8460-11f8451e6dae-utilities\") pod \"redhat-marketplace-7rpg5\" (UID: \"f9a42034-ff52-4b09-8460-11f8451e6dae\") " pod="openshift-marketplace/redhat-marketplace-7rpg5" Jan 31 09:13:42 crc kubenswrapper[4830]: I0131 09:13:42.811061 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9a42034-ff52-4b09-8460-11f8451e6dae-catalog-content\") pod \"redhat-marketplace-7rpg5\" (UID: \"f9a42034-ff52-4b09-8460-11f8451e6dae\") " pod="openshift-marketplace/redhat-marketplace-7rpg5" Jan 31 09:13:42 crc kubenswrapper[4830]: I0131 09:13:42.838121 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xntf5\" (UniqueName: \"kubernetes.io/projected/f9a42034-ff52-4b09-8460-11f8451e6dae-kube-api-access-xntf5\") pod \"redhat-marketplace-7rpg5\" (UID: \"f9a42034-ff52-4b09-8460-11f8451e6dae\") " pod="openshift-marketplace/redhat-marketplace-7rpg5" Jan 31 09:13:43 crc kubenswrapper[4830]: I0131 09:13:43.008077 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7rpg5" Jan 31 09:13:43 crc kubenswrapper[4830]: I0131 09:13:43.483333 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7rpg5"] Jan 31 09:13:44 crc kubenswrapper[4830]: I0131 09:13:44.352880 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:13:44 crc kubenswrapper[4830]: I0131 09:13:44.353044 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:13:44 crc kubenswrapper[4830]: I0131 09:13:44.353139 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 09:13:44 crc kubenswrapper[4830]: I0131 09:13:44.354418 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"28b103ac2ba54a2d7fb62b9e350f386540aa590898443607b7a7ceffbe4db67d"} pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 09:13:44 crc kubenswrapper[4830]: I0131 09:13:44.354570 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" containerID="cri-o://28b103ac2ba54a2d7fb62b9e350f386540aa590898443607b7a7ceffbe4db67d" gracePeriod=600 Jan 31 09:13:44 crc kubenswrapper[4830]: I0131 09:13:44.471252 4830 generic.go:334] "Generic (PLEG): container finished" podID="f9a42034-ff52-4b09-8460-11f8451e6dae" containerID="92b2c6f7d96ac9ada88ba74543379d9e917ef2abd05ed356b39de309cc317b25" exitCode=0 Jan 31 09:13:44 crc kubenswrapper[4830]: I0131 09:13:44.471324 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-7rpg5" event={"ID":"f9a42034-ff52-4b09-8460-11f8451e6dae","Type":"ContainerDied","Data":"92b2c6f7d96ac9ada88ba74543379d9e917ef2abd05ed356b39de309cc317b25"} Jan 31 09:13:44 crc kubenswrapper[4830]: I0131 09:13:44.471364 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7rpg5" event={"ID":"f9a42034-ff52-4b09-8460-11f8451e6dae","Type":"ContainerStarted","Data":"64f83b45cb40d022c4670eb72f406c639bdb7c8f755c2b4a20ffa244cf737268"} Jan 31 09:13:45 crc kubenswrapper[4830]: I0131 09:13:45.482158 4830 generic.go:334] "Generic (PLEG): container finished" podID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerID="28b103ac2ba54a2d7fb62b9e350f386540aa590898443607b7a7ceffbe4db67d" exitCode=0 Jan 31 09:13:45 crc kubenswrapper[4830]: I0131 09:13:45.482218 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerDied","Data":"28b103ac2ba54a2d7fb62b9e350f386540aa590898443607b7a7ceffbe4db67d"} Jan 31 09:13:45 crc kubenswrapper[4830]: I0131 09:13:45.483367 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerStarted","Data":"b9a249a59033511b4c694877132f9e35c14cbd330f48a89cd21a667a4732ff74"} Jan 31 09:13:45 crc kubenswrapper[4830]: I0131 09:13:45.483398 4830 scope.go:117] "RemoveContainer" containerID="40de0b135d2e6436aca04cec9e087aebbf22156339d1945255baa4aa59e53756" Jan 31 09:13:45 crc kubenswrapper[4830]: I0131 09:13:45.488379 4830 generic.go:334] "Generic (PLEG): container finished" podID="f9a42034-ff52-4b09-8460-11f8451e6dae" containerID="1d29424b4875fc1faf57f1a610919afdec0520433a5d2c7b340037c6cd72daf8" exitCode=0 Jan 31 09:13:45 crc kubenswrapper[4830]: I0131 09:13:45.488491 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7rpg5" event={"ID":"f9a42034-ff52-4b09-8460-11f8451e6dae","Type":"ContainerDied","Data":"1d29424b4875fc1faf57f1a610919afdec0520433a5d2c7b340037c6cd72daf8"} Jan 31 09:13:46 crc kubenswrapper[4830]: I0131 09:13:46.501962 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7rpg5" event={"ID":"f9a42034-ff52-4b09-8460-11f8451e6dae","Type":"ContainerStarted","Data":"b33fa47d65878eb1c94f9fbae8d2033c15ea7a59412f9ba96b67496533ceaa94"} Jan 31 09:13:53 crc kubenswrapper[4830]: I0131 09:13:53.008513 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7rpg5" Jan 31 09:13:53 crc kubenswrapper[4830]: I0131 09:13:53.009227 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7rpg5" Jan 31 09:13:53 crc kubenswrapper[4830]: I0131 09:13:53.053940 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7rpg5" Jan 31 09:13:53 crc kubenswrapper[4830]: I0131 09:13:53.075451 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7rpg5" podStartSLOduration=9.637306803 podStartE2EDuration="11.075436308s" podCreationTimestamp="2026-01-31 09:13:42 +0000 UTC" firstStartedPulling="2026-01-31 09:13:44.473099679 +0000 UTC m=+768.966462121" lastFinishedPulling="2026-01-31 09:13:45.911229184 +0000 UTC 
m=+770.404591626" observedRunningTime="2026-01-31 09:13:46.536369068 +0000 UTC m=+771.029731530" watchObservedRunningTime="2026-01-31 09:13:53.075436308 +0000 UTC m=+777.568798750" Jan 31 09:13:53 crc kubenswrapper[4830]: I0131 09:13:53.629074 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7rpg5" Jan 31 09:13:53 crc kubenswrapper[4830]: I0131 09:13:53.686667 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7rpg5"] Jan 31 09:13:55 crc kubenswrapper[4830]: I0131 09:13:55.593051 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7rpg5" podUID="f9a42034-ff52-4b09-8460-11f8451e6dae" containerName="registry-server" containerID="cri-o://b33fa47d65878eb1c94f9fbae8d2033c15ea7a59412f9ba96b67496533ceaa94" gracePeriod=2 Jan 31 09:13:56 crc kubenswrapper[4830]: I0131 09:13:56.517343 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7rpg5" Jan 31 09:13:56 crc kubenswrapper[4830]: I0131 09:13:56.533936 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9a42034-ff52-4b09-8460-11f8451e6dae-catalog-content\") pod \"f9a42034-ff52-4b09-8460-11f8451e6dae\" (UID: \"f9a42034-ff52-4b09-8460-11f8451e6dae\") " Jan 31 09:13:56 crc kubenswrapper[4830]: I0131 09:13:56.534120 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xntf5\" (UniqueName: \"kubernetes.io/projected/f9a42034-ff52-4b09-8460-11f8451e6dae-kube-api-access-xntf5\") pod \"f9a42034-ff52-4b09-8460-11f8451e6dae\" (UID: \"f9a42034-ff52-4b09-8460-11f8451e6dae\") " Jan 31 09:13:56 crc kubenswrapper[4830]: I0131 09:13:56.534175 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9a42034-ff52-4b09-8460-11f8451e6dae-utilities\") pod \"f9a42034-ff52-4b09-8460-11f8451e6dae\" (UID: \"f9a42034-ff52-4b09-8460-11f8451e6dae\") " Jan 31 09:13:56 crc kubenswrapper[4830]: I0131 09:13:56.536285 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9a42034-ff52-4b09-8460-11f8451e6dae-utilities" (OuterVolumeSpecName: "utilities") pod "f9a42034-ff52-4b09-8460-11f8451e6dae" (UID: "f9a42034-ff52-4b09-8460-11f8451e6dae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:13:56 crc kubenswrapper[4830]: I0131 09:13:56.552793 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a42034-ff52-4b09-8460-11f8451e6dae-kube-api-access-xntf5" (OuterVolumeSpecName: "kube-api-access-xntf5") pod "f9a42034-ff52-4b09-8460-11f8451e6dae" (UID: "f9a42034-ff52-4b09-8460-11f8451e6dae"). InnerVolumeSpecName "kube-api-access-xntf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:13:56 crc kubenswrapper[4830]: I0131 09:13:56.563889 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9a42034-ff52-4b09-8460-11f8451e6dae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9a42034-ff52-4b09-8460-11f8451e6dae" (UID: "f9a42034-ff52-4b09-8460-11f8451e6dae"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:13:56 crc kubenswrapper[4830]: I0131 09:13:56.601415 4830 generic.go:334] "Generic (PLEG): container finished" podID="f9a42034-ff52-4b09-8460-11f8451e6dae" containerID="b33fa47d65878eb1c94f9fbae8d2033c15ea7a59412f9ba96b67496533ceaa94" exitCode=0 Jan 31 09:13:56 crc kubenswrapper[4830]: I0131 09:13:56.601484 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7rpg5" event={"ID":"f9a42034-ff52-4b09-8460-11f8451e6dae","Type":"ContainerDied","Data":"b33fa47d65878eb1c94f9fbae8d2033c15ea7a59412f9ba96b67496533ceaa94"} Jan 31 09:13:56 crc kubenswrapper[4830]: I0131 09:13:56.601531 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7rpg5" event={"ID":"f9a42034-ff52-4b09-8460-11f8451e6dae","Type":"ContainerDied","Data":"64f83b45cb40d022c4670eb72f406c639bdb7c8f755c2b4a20ffa244cf737268"} Jan 31 09:13:56 crc kubenswrapper[4830]: I0131 09:13:56.601558 4830 scope.go:117] "RemoveContainer" containerID="b33fa47d65878eb1c94f9fbae8d2033c15ea7a59412f9ba96b67496533ceaa94" Jan 31 09:13:56 crc kubenswrapper[4830]: I0131 09:13:56.601628 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7rpg5" Jan 31 09:13:56 crc kubenswrapper[4830]: I0131 09:13:56.627156 4830 scope.go:117] "RemoveContainer" containerID="1d29424b4875fc1faf57f1a610919afdec0520433a5d2c7b340037c6cd72daf8" Jan 31 09:13:56 crc kubenswrapper[4830]: I0131 09:13:56.637686 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xntf5\" (UniqueName: \"kubernetes.io/projected/f9a42034-ff52-4b09-8460-11f8451e6dae-kube-api-access-xntf5\") on node \"crc\" DevicePath \"\"" Jan 31 09:13:56 crc kubenswrapper[4830]: I0131 09:13:56.638019 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9a42034-ff52-4b09-8460-11f8451e6dae-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:13:56 crc kubenswrapper[4830]: I0131 09:13:56.638109 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9a42034-ff52-4b09-8460-11f8451e6dae-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:13:56 crc kubenswrapper[4830]: I0131 09:13:56.642118 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7rpg5"] Jan 31 09:13:56 crc kubenswrapper[4830]: I0131 09:13:56.661060 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7rpg5"] Jan 31 09:13:56 crc kubenswrapper[4830]: I0131 09:13:56.669026 4830 scope.go:117] "RemoveContainer" containerID="92b2c6f7d96ac9ada88ba74543379d9e917ef2abd05ed356b39de309cc317b25" Jan 31 09:13:56 crc kubenswrapper[4830]: I0131 09:13:56.688769 4830 scope.go:117] "RemoveContainer" containerID="b33fa47d65878eb1c94f9fbae8d2033c15ea7a59412f9ba96b67496533ceaa94" Jan 31 09:13:56 crc kubenswrapper[4830]: E0131 09:13:56.689358 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b33fa47d65878eb1c94f9fbae8d2033c15ea7a59412f9ba96b67496533ceaa94\": container with ID starting with b33fa47d65878eb1c94f9fbae8d2033c15ea7a59412f9ba96b67496533ceaa94 not found: ID does not exist" containerID="b33fa47d65878eb1c94f9fbae8d2033c15ea7a59412f9ba96b67496533ceaa94" Jan 31 09:13:56 crc kubenswrapper[4830]: I0131 09:13:56.689422 4830 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b33fa47d65878eb1c94f9fbae8d2033c15ea7a59412f9ba96b67496533ceaa94"} err="failed to get container status \"b33fa47d65878eb1c94f9fbae8d2033c15ea7a59412f9ba96b67496533ceaa94\": rpc error: code = NotFound desc = could not find container \"b33fa47d65878eb1c94f9fbae8d2033c15ea7a59412f9ba96b67496533ceaa94\": container with ID starting with b33fa47d65878eb1c94f9fbae8d2033c15ea7a59412f9ba96b67496533ceaa94 not found: ID does not exist" Jan 31 09:13:56 crc kubenswrapper[4830]: I0131 09:13:56.689465 4830 scope.go:117] "RemoveContainer" containerID="1d29424b4875fc1faf57f1a610919afdec0520433a5d2c7b340037c6cd72daf8" Jan 31 09:13:56 crc kubenswrapper[4830]: E0131 09:13:56.689833 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d29424b4875fc1faf57f1a610919afdec0520433a5d2c7b340037c6cd72daf8\": container with ID starting with 1d29424b4875fc1faf57f1a610919afdec0520433a5d2c7b340037c6cd72daf8 not found: ID does not exist" containerID="1d29424b4875fc1faf57f1a610919afdec0520433a5d2c7b340037c6cd72daf8" Jan 31 09:13:56 crc kubenswrapper[4830]: I0131 09:13:56.689922 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d29424b4875fc1faf57f1a610919afdec0520433a5d2c7b340037c6cd72daf8"} err="failed to get container status \"1d29424b4875fc1faf57f1a610919afdec0520433a5d2c7b340037c6cd72daf8\": rpc error: code = NotFound desc = could not find container \"1d29424b4875fc1faf57f1a610919afdec0520433a5d2c7b340037c6cd72daf8\": container with ID starting with 1d29424b4875fc1faf57f1a610919afdec0520433a5d2c7b340037c6cd72daf8 not found: ID does not exist" Jan 31 09:13:56 crc kubenswrapper[4830]: I0131 09:13:56.690253 4830 scope.go:117] "RemoveContainer" containerID="92b2c6f7d96ac9ada88ba74543379d9e917ef2abd05ed356b39de309cc317b25" Jan 31 09:13:56 crc kubenswrapper[4830]: E0131 09:13:56.690614 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92b2c6f7d96ac9ada88ba74543379d9e917ef2abd05ed356b39de309cc317b25\": container with ID starting with 92b2c6f7d96ac9ada88ba74543379d9e917ef2abd05ed356b39de309cc317b25 not found: ID does not exist" containerID="92b2c6f7d96ac9ada88ba74543379d9e917ef2abd05ed356b39de309cc317b25" Jan 31 09:13:56 crc kubenswrapper[4830]: I0131 09:13:56.691104 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b2c6f7d96ac9ada88ba74543379d9e917ef2abd05ed356b39de309cc317b25"} err="failed to get container status \"92b2c6f7d96ac9ada88ba74543379d9e917ef2abd05ed356b39de309cc317b25\": rpc error: code = NotFound desc = could not find container \"92b2c6f7d96ac9ada88ba74543379d9e917ef2abd05ed356b39de309cc317b25\": container with ID starting with 92b2c6f7d96ac9ada88ba74543379d9e917ef2abd05ed356b39de309cc317b25 not found: ID does not exist" Jan 31 09:13:58 crc kubenswrapper[4830]: I0131 09:13:58.259536 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9a42034-ff52-4b09-8460-11f8451e6dae" path="/var/lib/kubelet/pods/f9a42034-ff52-4b09-8460-11f8451e6dae/volumes" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.105483 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7"] Jan 31 09:14:09 crc kubenswrapper[4830]: E0131 09:14:09.106782 4830 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f9a42034-ff52-4b09-8460-11f8451e6dae" containerName="extract-content" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.106805 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a42034-ff52-4b09-8460-11f8451e6dae" containerName="extract-content" Jan 31 09:14:09 crc kubenswrapper[4830]: E0131 09:14:09.106816 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a42034-ff52-4b09-8460-11f8451e6dae" containerName="registry-server" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.106825 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a42034-ff52-4b09-8460-11f8451e6dae" containerName="registry-server" Jan 31 09:14:09 crc kubenswrapper[4830]: E0131 09:14:09.106849 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a42034-ff52-4b09-8460-11f8451e6dae" containerName="extract-utilities" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.106858 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a42034-ff52-4b09-8460-11f8451e6dae" containerName="extract-utilities" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.107061 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9a42034-ff52-4b09-8460-11f8451e6dae" containerName="registry-server" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.108430 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.111796 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.112680 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7"] Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.250371 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcm8m\" (UniqueName: \"kubernetes.io/projected/e9a3487d-9bd3-40fe-9096-22c0a7afb0ec-kube-api-access-rcm8m\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7\" (UID: \"e9a3487d-9bd3-40fe-9096-22c0a7afb0ec\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.250479 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e9a3487d-9bd3-40fe-9096-22c0a7afb0ec-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7\" (UID: \"e9a3487d-9bd3-40fe-9096-22c0a7afb0ec\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.250510 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e9a3487d-9bd3-40fe-9096-22c0a7afb0ec-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7\" (UID: \"e9a3487d-9bd3-40fe-9096-22c0a7afb0ec\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.306670 4830 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc"] Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.308198 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.325323 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc"] Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.351891 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcm8m\" (UniqueName: \"kubernetes.io/projected/e9a3487d-9bd3-40fe-9096-22c0a7afb0ec-kube-api-access-rcm8m\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7\" (UID: \"e9a3487d-9bd3-40fe-9096-22c0a7afb0ec\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.351973 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e9a3487d-9bd3-40fe-9096-22c0a7afb0ec-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7\" (UID: \"e9a3487d-9bd3-40fe-9096-22c0a7afb0ec\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.352008 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e9a3487d-9bd3-40fe-9096-22c0a7afb0ec-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7\" (UID: \"e9a3487d-9bd3-40fe-9096-22c0a7afb0ec\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.352663 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e9a3487d-9bd3-40fe-9096-22c0a7afb0ec-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7\" (UID: \"e9a3487d-9bd3-40fe-9096-22c0a7afb0ec\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.352857 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e9a3487d-9bd3-40fe-9096-22c0a7afb0ec-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7\" (UID: \"e9a3487d-9bd3-40fe-9096-22c0a7afb0ec\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.375744 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcm8m\" (UniqueName: \"kubernetes.io/projected/e9a3487d-9bd3-40fe-9096-22c0a7afb0ec-kube-api-access-rcm8m\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7\" (UID: \"e9a3487d-9bd3-40fe-9096-22c0a7afb0ec\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.453445 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2aa663f0-7298-40eb-a298-3173bffe5362-util\") pod 
\"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc\" (UID: \"2aa663f0-7298-40eb-a298-3173bffe5362\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.453992 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2aa663f0-7298-40eb-a298-3173bffe5362-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc\" (UID: \"2aa663f0-7298-40eb-a298-3173bffe5362\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.454044 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w72bb\" (UniqueName: \"kubernetes.io/projected/2aa663f0-7298-40eb-a298-3173bffe5362-kube-api-access-w72bb\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc\" (UID: \"2aa663f0-7298-40eb-a298-3173bffe5362\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.490414 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.555534 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2aa663f0-7298-40eb-a298-3173bffe5362-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc\" (UID: \"2aa663f0-7298-40eb-a298-3173bffe5362\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.555599 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w72bb\" (UniqueName: \"kubernetes.io/projected/2aa663f0-7298-40eb-a298-3173bffe5362-kube-api-access-w72bb\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc\" (UID: \"2aa663f0-7298-40eb-a298-3173bffe5362\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.555698 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2aa663f0-7298-40eb-a298-3173bffe5362-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc\" (UID: \"2aa663f0-7298-40eb-a298-3173bffe5362\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.556189 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2aa663f0-7298-40eb-a298-3173bffe5362-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc\" (UID: \"2aa663f0-7298-40eb-a298-3173bffe5362\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.556226 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2aa663f0-7298-40eb-a298-3173bffe5362-util\") pod 
\"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc\" (UID: \"2aa663f0-7298-40eb-a298-3173bffe5362\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.581669 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w72bb\" (UniqueName: \"kubernetes.io/projected/2aa663f0-7298-40eb-a298-3173bffe5362-kube-api-access-w72bb\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc\" (UID: \"2aa663f0-7298-40eb-a298-3173bffe5362\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.631136 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc" Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.964614 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc"] Jan 31 09:14:09 crc kubenswrapper[4830]: I0131 09:14:09.998333 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7"] Jan 31 09:14:10 crc kubenswrapper[4830]: W0131 09:14:10.005773 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9a3487d_9bd3_40fe_9096_22c0a7afb0ec.slice/crio-01b74431e665a9d2ef41b0b1344e9082ae216477e3ed1e1873ca686c29a57bbd WatchSource:0}: Error finding container 01b74431e665a9d2ef41b0b1344e9082ae216477e3ed1e1873ca686c29a57bbd: Status 404 returned error can't find the container with id 01b74431e665a9d2ef41b0b1344e9082ae216477e3ed1e1873ca686c29a57bbd Jan 31 09:14:10 crc kubenswrapper[4830]: I0131 09:14:10.734528 4830 generic.go:334] "Generic (PLEG): container finished" podID="2aa663f0-7298-40eb-a298-3173bffe5362" containerID="f0f3f82e6d6081c1303b67070055cf189bc35c9399a474cdddb9b628ad378aef" exitCode=0 Jan 31 09:14:10 crc kubenswrapper[4830]: I0131 09:14:10.734631 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc" event={"ID":"2aa663f0-7298-40eb-a298-3173bffe5362","Type":"ContainerDied","Data":"f0f3f82e6d6081c1303b67070055cf189bc35c9399a474cdddb9b628ad378aef"} Jan 31 09:14:10 crc kubenswrapper[4830]: I0131 09:14:10.734665 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc" event={"ID":"2aa663f0-7298-40eb-a298-3173bffe5362","Type":"ContainerStarted","Data":"5324fc4b7c0c3434dd99f958241c9fafa900af3d38027fdb3b5037498e44376d"} Jan 31 09:14:10 crc kubenswrapper[4830]: I0131 09:14:10.736965 4830 generic.go:334] "Generic (PLEG): container finished" podID="e9a3487d-9bd3-40fe-9096-22c0a7afb0ec" containerID="6458f198c36c3be33e56f5f9b2465886ea4ff4d583d2aa9932d02f17503492e7" exitCode=0 Jan 31 09:14:10 crc kubenswrapper[4830]: I0131 09:14:10.737058 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7" event={"ID":"e9a3487d-9bd3-40fe-9096-22c0a7afb0ec","Type":"ContainerDied","Data":"6458f198c36c3be33e56f5f9b2465886ea4ff4d583d2aa9932d02f17503492e7"} Jan 31 09:14:10 crc kubenswrapper[4830]: I0131 09:14:10.737115 4830 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7" event={"ID":"e9a3487d-9bd3-40fe-9096-22c0a7afb0ec","Type":"ContainerStarted","Data":"01b74431e665a9d2ef41b0b1344e9082ae216477e3ed1e1873ca686c29a57bbd"} Jan 31 09:14:12 crc kubenswrapper[4830]: I0131 09:14:12.752959 4830 generic.go:334] "Generic (PLEG): container finished" podID="e9a3487d-9bd3-40fe-9096-22c0a7afb0ec" containerID="5f7d5c9b6f134d310fdbffe5db37f8dff909d93d60737440c374d09c86fd4a0d" exitCode=0 Jan 31 09:14:12 crc kubenswrapper[4830]: I0131 09:14:12.753044 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7" event={"ID":"e9a3487d-9bd3-40fe-9096-22c0a7afb0ec","Type":"ContainerDied","Data":"5f7d5c9b6f134d310fdbffe5db37f8dff909d93d60737440c374d09c86fd4a0d"} Jan 31 09:14:12 crc kubenswrapper[4830]: I0131 09:14:12.756348 4830 generic.go:334] "Generic (PLEG): container finished" podID="2aa663f0-7298-40eb-a298-3173bffe5362" containerID="150c6dd468ae3426ce88f3f61f0fa209d387660ea42a70b6b93a5b4b0741a0d4" exitCode=0 Jan 31 09:14:12 crc kubenswrapper[4830]: I0131 09:14:12.756382 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc" event={"ID":"2aa663f0-7298-40eb-a298-3173bffe5362","Type":"ContainerDied","Data":"150c6dd468ae3426ce88f3f61f0fa209d387660ea42a70b6b93a5b4b0741a0d4"} Jan 31 09:14:12 crc kubenswrapper[4830]: I0131 09:14:12.858131 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hxcvk"] Jan 31 09:14:12 crc kubenswrapper[4830]: I0131 09:14:12.859974 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hxcvk" Jan 31 09:14:12 crc kubenswrapper[4830]: I0131 09:14:12.875765 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hxcvk"] Jan 31 09:14:13 crc kubenswrapper[4830]: I0131 09:14:13.024739 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w625v\" (UniqueName: \"kubernetes.io/projected/57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d-kube-api-access-w625v\") pod \"redhat-operators-hxcvk\" (UID: \"57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d\") " pod="openshift-marketplace/redhat-operators-hxcvk" Jan 31 09:14:13 crc kubenswrapper[4830]: I0131 09:14:13.024830 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d-utilities\") pod \"redhat-operators-hxcvk\" (UID: \"57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d\") " pod="openshift-marketplace/redhat-operators-hxcvk" Jan 31 09:14:13 crc kubenswrapper[4830]: I0131 09:14:13.025087 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d-catalog-content\") pod \"redhat-operators-hxcvk\" (UID: \"57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d\") " pod="openshift-marketplace/redhat-operators-hxcvk" Jan 31 09:14:13 crc kubenswrapper[4830]: I0131 09:14:13.126974 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d-utilities\") pod \"redhat-operators-hxcvk\" (UID: \"57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d\") " pod="openshift-marketplace/redhat-operators-hxcvk" Jan 31 09:14:13 crc kubenswrapper[4830]: I0131 09:14:13.127120 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d-catalog-content\") pod \"redhat-operators-hxcvk\" (UID: \"57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d\") " pod="openshift-marketplace/redhat-operators-hxcvk" Jan 31 09:14:13 crc kubenswrapper[4830]: I0131 09:14:13.127172 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w625v\" (UniqueName: \"kubernetes.io/projected/57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d-kube-api-access-w625v\") pod \"redhat-operators-hxcvk\" (UID: \"57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d\") " pod="openshift-marketplace/redhat-operators-hxcvk" Jan 31 09:14:13 crc kubenswrapper[4830]: I0131 09:14:13.127622 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d-utilities\") pod \"redhat-operators-hxcvk\" (UID: \"57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d\") " pod="openshift-marketplace/redhat-operators-hxcvk" Jan 31 09:14:13 crc kubenswrapper[4830]: I0131 09:14:13.127649 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d-catalog-content\") pod \"redhat-operators-hxcvk\" (UID: \"57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d\") " pod="openshift-marketplace/redhat-operators-hxcvk" Jan 31 09:14:13 crc kubenswrapper[4830]: I0131 09:14:13.148921 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-w625v\" (UniqueName: \"kubernetes.io/projected/57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d-kube-api-access-w625v\") pod \"redhat-operators-hxcvk\" (UID: \"57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d\") " pod="openshift-marketplace/redhat-operators-hxcvk" Jan 31 09:14:13 crc kubenswrapper[4830]: I0131 09:14:13.193506 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hxcvk" Jan 31 09:14:13 crc kubenswrapper[4830]: I0131 09:14:13.440650 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hxcvk"] Jan 31 09:14:13 crc kubenswrapper[4830]: I0131 09:14:13.764958 4830 generic.go:334] "Generic (PLEG): container finished" podID="57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d" containerID="d3cfabc3b61ef8f33f78b1f06a448a2d3f6f77ceff7e50f75b9a01a5d51cf8b8" exitCode=0 Jan 31 09:14:13 crc kubenswrapper[4830]: I0131 09:14:13.765030 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxcvk" event={"ID":"57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d","Type":"ContainerDied","Data":"d3cfabc3b61ef8f33f78b1f06a448a2d3f6f77ceff7e50f75b9a01a5d51cf8b8"} Jan 31 09:14:13 crc kubenswrapper[4830]: I0131 09:14:13.765062 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxcvk" event={"ID":"57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d","Type":"ContainerStarted","Data":"6c5cce48eddbe972a4ae98ba7c20ae3920ae3d751e5c72f9d82506a432847b5f"} Jan 31 09:14:13 crc kubenswrapper[4830]: I0131 09:14:13.771155 4830 generic.go:334] "Generic (PLEG): container finished" podID="2aa663f0-7298-40eb-a298-3173bffe5362" containerID="83266c0efe36f7b27673673bf32e82bbc103fd102d5711be3cb90d1e3c9b00f1" exitCode=0 Jan 31 09:14:13 crc kubenswrapper[4830]: I0131 09:14:13.771249 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc" event={"ID":"2aa663f0-7298-40eb-a298-3173bffe5362","Type":"ContainerDied","Data":"83266c0efe36f7b27673673bf32e82bbc103fd102d5711be3cb90d1e3c9b00f1"} Jan 31 09:14:13 crc kubenswrapper[4830]: I0131 09:14:13.774270 4830 generic.go:334] "Generic (PLEG): container finished" podID="e9a3487d-9bd3-40fe-9096-22c0a7afb0ec" containerID="501df8ed685246ed6a3950396e29bfdc41fb1c7f35278bed14e2fda2494e232c" exitCode=0 Jan 31 09:14:13 crc kubenswrapper[4830]: I0131 09:14:13.774330 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7" event={"ID":"e9a3487d-9bd3-40fe-9096-22c0a7afb0ec","Type":"ContainerDied","Data":"501df8ed685246ed6a3950396e29bfdc41fb1c7f35278bed14e2fda2494e232c"} Jan 31 09:14:14 crc kubenswrapper[4830]: I0131 09:14:14.786037 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxcvk" event={"ID":"57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d","Type":"ContainerStarted","Data":"1ffb8e674e4ee036c9affb7098294a3402ad657094f18ddd339ab0fe06a32537"} Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.122101 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7" Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.180539 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc" Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.263537 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e9a3487d-9bd3-40fe-9096-22c0a7afb0ec-bundle\") pod \"e9a3487d-9bd3-40fe-9096-22c0a7afb0ec\" (UID: \"e9a3487d-9bd3-40fe-9096-22c0a7afb0ec\") " Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.264518 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcm8m\" (UniqueName: \"kubernetes.io/projected/e9a3487d-9bd3-40fe-9096-22c0a7afb0ec-kube-api-access-rcm8m\") pod \"e9a3487d-9bd3-40fe-9096-22c0a7afb0ec\" (UID: \"e9a3487d-9bd3-40fe-9096-22c0a7afb0ec\") " Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.264566 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e9a3487d-9bd3-40fe-9096-22c0a7afb0ec-util\") pod \"e9a3487d-9bd3-40fe-9096-22c0a7afb0ec\" (UID: \"e9a3487d-9bd3-40fe-9096-22c0a7afb0ec\") " Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.264462 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9a3487d-9bd3-40fe-9096-22c0a7afb0ec-bundle" (OuterVolumeSpecName: "bundle") pod "e9a3487d-9bd3-40fe-9096-22c0a7afb0ec" (UID: "e9a3487d-9bd3-40fe-9096-22c0a7afb0ec"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.266296 4830 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e9a3487d-9bd3-40fe-9096-22c0a7afb0ec-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.271414 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9a3487d-9bd3-40fe-9096-22c0a7afb0ec-kube-api-access-rcm8m" (OuterVolumeSpecName: "kube-api-access-rcm8m") pod "e9a3487d-9bd3-40fe-9096-22c0a7afb0ec" (UID: "e9a3487d-9bd3-40fe-9096-22c0a7afb0ec"). InnerVolumeSpecName "kube-api-access-rcm8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.279658 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9a3487d-9bd3-40fe-9096-22c0a7afb0ec-util" (OuterVolumeSpecName: "util") pod "e9a3487d-9bd3-40fe-9096-22c0a7afb0ec" (UID: "e9a3487d-9bd3-40fe-9096-22c0a7afb0ec"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.367658 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2aa663f0-7298-40eb-a298-3173bffe5362-bundle\") pod \"2aa663f0-7298-40eb-a298-3173bffe5362\" (UID: \"2aa663f0-7298-40eb-a298-3173bffe5362\") " Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.367849 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w72bb\" (UniqueName: \"kubernetes.io/projected/2aa663f0-7298-40eb-a298-3173bffe5362-kube-api-access-w72bb\") pod \"2aa663f0-7298-40eb-a298-3173bffe5362\" (UID: \"2aa663f0-7298-40eb-a298-3173bffe5362\") " Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.367920 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2aa663f0-7298-40eb-a298-3173bffe5362-util\") pod \"2aa663f0-7298-40eb-a298-3173bffe5362\" (UID: \"2aa663f0-7298-40eb-a298-3173bffe5362\") " Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.368321 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcm8m\" (UniqueName: \"kubernetes.io/projected/e9a3487d-9bd3-40fe-9096-22c0a7afb0ec-kube-api-access-rcm8m\") on node \"crc\" DevicePath \"\"" Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.368341 4830 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e9a3487d-9bd3-40fe-9096-22c0a7afb0ec-util\") on node \"crc\" DevicePath \"\"" Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.368674 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aa663f0-7298-40eb-a298-3173bffe5362-bundle" (OuterVolumeSpecName: "bundle") pod "2aa663f0-7298-40eb-a298-3173bffe5362" (UID: "2aa663f0-7298-40eb-a298-3173bffe5362"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.373865 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aa663f0-7298-40eb-a298-3173bffe5362-kube-api-access-w72bb" (OuterVolumeSpecName: "kube-api-access-w72bb") pod "2aa663f0-7298-40eb-a298-3173bffe5362" (UID: "2aa663f0-7298-40eb-a298-3173bffe5362"). InnerVolumeSpecName "kube-api-access-w72bb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.470330 4830 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2aa663f0-7298-40eb-a298-3173bffe5362-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.470392 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w72bb\" (UniqueName: \"kubernetes.io/projected/2aa663f0-7298-40eb-a298-3173bffe5362-kube-api-access-w72bb\") on node \"crc\" DevicePath \"\"" Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.799343 4830 generic.go:334] "Generic (PLEG): container finished" podID="57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d" containerID="1ffb8e674e4ee036c9affb7098294a3402ad657094f18ddd339ab0fe06a32537" exitCode=0 Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.799437 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxcvk" event={"ID":"57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d","Type":"ContainerDied","Data":"1ffb8e674e4ee036c9affb7098294a3402ad657094f18ddd339ab0fe06a32537"} Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.806505 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc" event={"ID":"2aa663f0-7298-40eb-a298-3173bffe5362","Type":"ContainerDied","Data":"5324fc4b7c0c3434dd99f958241c9fafa900af3d38027fdb3b5037498e44376d"} Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.806627 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc" Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.806661 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5324fc4b7c0c3434dd99f958241c9fafa900af3d38027fdb3b5037498e44376d" Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.807483 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aa663f0-7298-40eb-a298-3173bffe5362-util" (OuterVolumeSpecName: "util") pod "2aa663f0-7298-40eb-a298-3173bffe5362" (UID: "2aa663f0-7298-40eb-a298-3173bffe5362"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.811476 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7" event={"ID":"e9a3487d-9bd3-40fe-9096-22c0a7afb0ec","Type":"ContainerDied","Data":"01b74431e665a9d2ef41b0b1344e9082ae216477e3ed1e1873ca686c29a57bbd"} Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.811529 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01b74431e665a9d2ef41b0b1344e9082ae216477e3ed1e1873ca686c29a57bbd" Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.811529 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7" Jan 31 09:14:15 crc kubenswrapper[4830]: I0131 09:14:15.876241 4830 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2aa663f0-7298-40eb-a298-3173bffe5362-util\") on node \"crc\" DevicePath \"\"" Jan 31 09:14:16 crc kubenswrapper[4830]: I0131 09:14:16.864265 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxcvk" event={"ID":"57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d","Type":"ContainerStarted","Data":"fb7e9811070339627e73a4d38b6beaaf1e3bba3298bbb6a2b6cea153cb3d54f1"} Jan 31 09:14:16 crc kubenswrapper[4830]: I0131 09:14:16.900629 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hxcvk" podStartSLOduration=2.464521756 podStartE2EDuration="4.900604655s" podCreationTimestamp="2026-01-31 09:14:12 +0000 UTC" firstStartedPulling="2026-01-31 09:14:13.767956656 +0000 UTC m=+798.261319098" lastFinishedPulling="2026-01-31 09:14:16.204039555 +0000 UTC m=+800.697401997" observedRunningTime="2026-01-31 09:14:16.900064289 +0000 UTC m=+801.393426731" watchObservedRunningTime="2026-01-31 09:14:16.900604655 +0000 UTC m=+801.393967097" Jan 31 09:14:19 crc kubenswrapper[4830]: I0131 09:14:19.295077 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-qdl6z"] Jan 31 09:14:19 crc kubenswrapper[4830]: E0131 09:14:19.296045 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9a3487d-9bd3-40fe-9096-22c0a7afb0ec" containerName="pull" Jan 31 09:14:19 crc kubenswrapper[4830]: I0131 09:14:19.296062 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9a3487d-9bd3-40fe-9096-22c0a7afb0ec" containerName="pull" Jan 31 09:14:19 crc kubenswrapper[4830]: E0131 09:14:19.296080 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9a3487d-9bd3-40fe-9096-22c0a7afb0ec" containerName="extract" Jan 31 09:14:19 crc kubenswrapper[4830]: I0131 09:14:19.296089 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9a3487d-9bd3-40fe-9096-22c0a7afb0ec" containerName="extract" Jan 31 09:14:19 crc kubenswrapper[4830]: E0131 09:14:19.296100 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa663f0-7298-40eb-a298-3173bffe5362" containerName="util" Jan 31 09:14:19 crc kubenswrapper[4830]: I0131 09:14:19.296109 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa663f0-7298-40eb-a298-3173bffe5362" containerName="util" Jan 31 09:14:19 crc kubenswrapper[4830]: E0131 09:14:19.296125 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9a3487d-9bd3-40fe-9096-22c0a7afb0ec" containerName="util" Jan 31 09:14:19 crc kubenswrapper[4830]: I0131 09:14:19.296131 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9a3487d-9bd3-40fe-9096-22c0a7afb0ec" containerName="util" Jan 31 09:14:19 crc kubenswrapper[4830]: E0131 09:14:19.296140 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa663f0-7298-40eb-a298-3173bffe5362" containerName="extract" Jan 31 09:14:19 crc kubenswrapper[4830]: I0131 09:14:19.296146 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa663f0-7298-40eb-a298-3173bffe5362" containerName="extract" Jan 31 09:14:19 crc kubenswrapper[4830]: E0131 09:14:19.296157 4830 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2aa663f0-7298-40eb-a298-3173bffe5362" containerName="pull" Jan 31 09:14:19 crc kubenswrapper[4830]: I0131 09:14:19.296162 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa663f0-7298-40eb-a298-3173bffe5362" containerName="pull" Jan 31 09:14:19 crc kubenswrapper[4830]: I0131 09:14:19.296299 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9a3487d-9bd3-40fe-9096-22c0a7afb0ec" containerName="extract" Jan 31 09:14:19 crc kubenswrapper[4830]: I0131 09:14:19.296317 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aa663f0-7298-40eb-a298-3173bffe5362" containerName="extract" Jan 31 09:14:19 crc kubenswrapper[4830]: I0131 09:14:19.296834 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-qdl6z" Jan 31 09:14:19 crc kubenswrapper[4830]: I0131 09:14:19.299400 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Jan 31 09:14:19 crc kubenswrapper[4830]: I0131 09:14:19.299649 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Jan 31 09:14:19 crc kubenswrapper[4830]: I0131 09:14:19.299792 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-lc2wx" Jan 31 09:14:19 crc kubenswrapper[4830]: I0131 09:14:19.316043 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-qdl6z"] Jan 31 09:14:19 crc kubenswrapper[4830]: I0131 09:14:19.435707 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkxb9\" (UniqueName: \"kubernetes.io/projected/e293840d-c6e3-4d1d-a859-c656d68171fe-kube-api-access-pkxb9\") pod \"cluster-logging-operator-79cf69ddc8-qdl6z\" (UID: \"e293840d-c6e3-4d1d-a859-c656d68171fe\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-qdl6z" Jan 31 09:14:19 crc kubenswrapper[4830]: I0131 09:14:19.537703 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkxb9\" (UniqueName: \"kubernetes.io/projected/e293840d-c6e3-4d1d-a859-c656d68171fe-kube-api-access-pkxb9\") pod \"cluster-logging-operator-79cf69ddc8-qdl6z\" (UID: \"e293840d-c6e3-4d1d-a859-c656d68171fe\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-qdl6z" Jan 31 09:14:19 crc kubenswrapper[4830]: I0131 09:14:19.578770 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkxb9\" (UniqueName: \"kubernetes.io/projected/e293840d-c6e3-4d1d-a859-c656d68171fe-kube-api-access-pkxb9\") pod \"cluster-logging-operator-79cf69ddc8-qdl6z\" (UID: \"e293840d-c6e3-4d1d-a859-c656d68171fe\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-qdl6z" Jan 31 09:14:19 crc kubenswrapper[4830]: I0131 09:14:19.616194 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-qdl6z" Jan 31 09:14:20 crc kubenswrapper[4830]: I0131 09:14:20.074829 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-qdl6z"] Jan 31 09:14:20 crc kubenswrapper[4830]: W0131 09:14:20.087492 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode293840d_c6e3_4d1d_a859_c656d68171fe.slice/crio-9ca6c6d9cbc97b421950951f0b27e0f7665544db5419acb571a433905cab13e4 WatchSource:0}: Error finding container 9ca6c6d9cbc97b421950951f0b27e0f7665544db5419acb571a433905cab13e4: Status 404 returned error can't find the container with id 9ca6c6d9cbc97b421950951f0b27e0f7665544db5419acb571a433905cab13e4 Jan 31 09:14:20 crc kubenswrapper[4830]: I0131 09:14:20.894581 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-qdl6z" event={"ID":"e293840d-c6e3-4d1d-a859-c656d68171fe","Type":"ContainerStarted","Data":"9ca6c6d9cbc97b421950951f0b27e0f7665544db5419acb571a433905cab13e4"} Jan 31 09:14:23 crc kubenswrapper[4830]: I0131 09:14:23.199067 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hxcvk" Jan 31 09:14:23 crc kubenswrapper[4830]: I0131 09:14:23.199569 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hxcvk" Jan 31 09:14:24 crc kubenswrapper[4830]: I0131 09:14:24.254553 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hxcvk" podUID="57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d" containerName="registry-server" probeResult="failure" output=< Jan 31 09:14:24 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 09:14:24 crc kubenswrapper[4830]: > Jan 31 09:14:27 crc kubenswrapper[4830]: I0131 09:14:27.956453 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-qdl6z" event={"ID":"e293840d-c6e3-4d1d-a859-c656d68171fe","Type":"ContainerStarted","Data":"bd490902d714501c9f8bf1dc742138eebb69fb8885c7e82ae5e86562372c592b"} Jan 31 09:14:27 crc kubenswrapper[4830]: I0131 09:14:27.978248 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-qdl6z" podStartSLOduration=2.239403662 podStartE2EDuration="8.978223771s" podCreationTimestamp="2026-01-31 09:14:19 +0000 UTC" firstStartedPulling="2026-01-31 09:14:20.100545021 +0000 UTC m=+804.593907463" lastFinishedPulling="2026-01-31 09:14:26.83936513 +0000 UTC m=+811.332727572" observedRunningTime="2026-01-31 09:14:27.974886565 +0000 UTC m=+812.468249017" watchObservedRunningTime="2026-01-31 09:14:27.978223771 +0000 UTC m=+812.471586213" Jan 31 09:14:30 crc kubenswrapper[4830]: I0131 09:14:30.359961 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp"] Jan 31 09:14:30 crc kubenswrapper[4830]: I0131 09:14:30.362057 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" Jan 31 09:14:30 crc kubenswrapper[4830]: I0131 09:14:30.369274 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Jan 31 09:14:30 crc kubenswrapper[4830]: I0131 09:14:30.369429 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Jan 31 09:14:30 crc kubenswrapper[4830]: I0131 09:14:30.369658 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-phwrm" Jan 31 09:14:30 crc kubenswrapper[4830]: I0131 09:14:30.369748 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Jan 31 09:14:30 crc kubenswrapper[4830]: I0131 09:14:30.369919 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Jan 31 09:14:30 crc kubenswrapper[4830]: I0131 09:14:30.370061 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Jan 31 09:14:30 crc kubenswrapper[4830]: I0131 09:14:30.386879 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp"] Jan 31 09:14:30 crc kubenswrapper[4830]: I0131 09:14:30.515461 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce3329e2-9eca-4a04-bf1d-0578e12beaa5-webhook-cert\") pod \"loki-operator-controller-manager-688c9bff97-t8jpp\" (UID: \"ce3329e2-9eca-4a04-bf1d-0578e12beaa5\") " pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" Jan 31 09:14:30 crc kubenswrapper[4830]: I0131 09:14:30.515551 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ce3329e2-9eca-4a04-bf1d-0578e12beaa5-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-688c9bff97-t8jpp\" (UID: \"ce3329e2-9eca-4a04-bf1d-0578e12beaa5\") " pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" Jan 31 09:14:30 crc kubenswrapper[4830]: I0131 09:14:30.515598 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/ce3329e2-9eca-4a04-bf1d-0578e12beaa5-manager-config\") pod \"loki-operator-controller-manager-688c9bff97-t8jpp\" (UID: \"ce3329e2-9eca-4a04-bf1d-0578e12beaa5\") " pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" Jan 31 09:14:30 crc kubenswrapper[4830]: I0131 09:14:30.515671 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75bkv\" (UniqueName: \"kubernetes.io/projected/ce3329e2-9eca-4a04-bf1d-0578e12beaa5-kube-api-access-75bkv\") pod \"loki-operator-controller-manager-688c9bff97-t8jpp\" (UID: \"ce3329e2-9eca-4a04-bf1d-0578e12beaa5\") " pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" Jan 31 09:14:30 crc kubenswrapper[4830]: I0131 09:14:30.515710 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/ce3329e2-9eca-4a04-bf1d-0578e12beaa5-apiservice-cert\") pod \"loki-operator-controller-manager-688c9bff97-t8jpp\" (UID: \"ce3329e2-9eca-4a04-bf1d-0578e12beaa5\") " pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" Jan 31 09:14:30 crc kubenswrapper[4830]: I0131 09:14:30.617419 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75bkv\" (UniqueName: \"kubernetes.io/projected/ce3329e2-9eca-4a04-bf1d-0578e12beaa5-kube-api-access-75bkv\") pod \"loki-operator-controller-manager-688c9bff97-t8jpp\" (UID: \"ce3329e2-9eca-4a04-bf1d-0578e12beaa5\") " pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" Jan 31 09:14:30 crc kubenswrapper[4830]: I0131 09:14:30.617505 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce3329e2-9eca-4a04-bf1d-0578e12beaa5-apiservice-cert\") pod \"loki-operator-controller-manager-688c9bff97-t8jpp\" (UID: \"ce3329e2-9eca-4a04-bf1d-0578e12beaa5\") " pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" Jan 31 09:14:30 crc kubenswrapper[4830]: I0131 09:14:30.617568 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce3329e2-9eca-4a04-bf1d-0578e12beaa5-webhook-cert\") pod \"loki-operator-controller-manager-688c9bff97-t8jpp\" (UID: \"ce3329e2-9eca-4a04-bf1d-0578e12beaa5\") " pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" Jan 31 09:14:30 crc kubenswrapper[4830]: I0131 09:14:30.617614 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ce3329e2-9eca-4a04-bf1d-0578e12beaa5-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-688c9bff97-t8jpp\" (UID: \"ce3329e2-9eca-4a04-bf1d-0578e12beaa5\") " pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" Jan 31 09:14:30 crc kubenswrapper[4830]: I0131 09:14:30.617649 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/ce3329e2-9eca-4a04-bf1d-0578e12beaa5-manager-config\") pod \"loki-operator-controller-manager-688c9bff97-t8jpp\" (UID: \"ce3329e2-9eca-4a04-bf1d-0578e12beaa5\") " pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" Jan 31 09:14:30 crc kubenswrapper[4830]: I0131 09:14:30.618677 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/ce3329e2-9eca-4a04-bf1d-0578e12beaa5-manager-config\") pod \"loki-operator-controller-manager-688c9bff97-t8jpp\" (UID: \"ce3329e2-9eca-4a04-bf1d-0578e12beaa5\") " pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" Jan 31 09:14:30 crc kubenswrapper[4830]: I0131 09:14:30.628870 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce3329e2-9eca-4a04-bf1d-0578e12beaa5-apiservice-cert\") pod \"loki-operator-controller-manager-688c9bff97-t8jpp\" (UID: \"ce3329e2-9eca-4a04-bf1d-0578e12beaa5\") " pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" Jan 31 09:14:30 crc kubenswrapper[4830]: I0131 09:14:30.629855 4830 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ce3329e2-9eca-4a04-bf1d-0578e12beaa5-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-688c9bff97-t8jpp\" (UID: \"ce3329e2-9eca-4a04-bf1d-0578e12beaa5\") " pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" Jan 31 09:14:30 crc kubenswrapper[4830]: I0131 09:14:30.630537 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce3329e2-9eca-4a04-bf1d-0578e12beaa5-webhook-cert\") pod \"loki-operator-controller-manager-688c9bff97-t8jpp\" (UID: \"ce3329e2-9eca-4a04-bf1d-0578e12beaa5\") " pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" Jan 31 09:14:30 crc kubenswrapper[4830]: I0131 09:14:30.639194 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75bkv\" (UniqueName: \"kubernetes.io/projected/ce3329e2-9eca-4a04-bf1d-0578e12beaa5-kube-api-access-75bkv\") pod \"loki-operator-controller-manager-688c9bff97-t8jpp\" (UID: \"ce3329e2-9eca-4a04-bf1d-0578e12beaa5\") " pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" Jan 31 09:14:30 crc kubenswrapper[4830]: I0131 09:14:30.686635 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" Jan 31 09:14:31 crc kubenswrapper[4830]: I0131 09:14:31.015567 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp"] Jan 31 09:14:31 crc kubenswrapper[4830]: W0131 09:14:31.026772 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce3329e2_9eca_4a04_bf1d_0578e12beaa5.slice/crio-06d7818e830440ebd6549f8e37ddde91d48c1ae6ae0dfa9e2489fe0f7d83a918 WatchSource:0}: Error finding container 06d7818e830440ebd6549f8e37ddde91d48c1ae6ae0dfa9e2489fe0f7d83a918: Status 404 returned error can't find the container with id 06d7818e830440ebd6549f8e37ddde91d48c1ae6ae0dfa9e2489fe0f7d83a918 Jan 31 09:14:31 crc kubenswrapper[4830]: I0131 09:14:31.987962 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" event={"ID":"ce3329e2-9eca-4a04-bf1d-0578e12beaa5","Type":"ContainerStarted","Data":"06d7818e830440ebd6549f8e37ddde91d48c1ae6ae0dfa9e2489fe0f7d83a918"} Jan 31 09:14:33 crc kubenswrapper[4830]: I0131 09:14:33.292377 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hxcvk" Jan 31 09:14:33 crc kubenswrapper[4830]: I0131 09:14:33.348185 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hxcvk" Jan 31 09:14:35 crc kubenswrapper[4830]: I0131 09:14:35.013141 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" event={"ID":"ce3329e2-9eca-4a04-bf1d-0578e12beaa5","Type":"ContainerStarted","Data":"094038c5117902e3dfa535713a374ec621d40c8bc0b99cd163b60a4a2eeca820"} Jan 31 09:14:36 crc kubenswrapper[4830]: I0131 09:14:36.441987 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hxcvk"] Jan 31 09:14:36 crc kubenswrapper[4830]: I0131 09:14:36.442715 4830 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/redhat-operators-hxcvk" podUID="57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d" containerName="registry-server" containerID="cri-o://fb7e9811070339627e73a4d38b6beaaf1e3bba3298bbb6a2b6cea153cb3d54f1" gracePeriod=2 Jan 31 09:14:36 crc kubenswrapper[4830]: I0131 09:14:36.842274 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hxcvk" Jan 31 09:14:36 crc kubenswrapper[4830]: I0131 09:14:36.926364 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d-utilities\") pod \"57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d\" (UID: \"57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d\") " Jan 31 09:14:36 crc kubenswrapper[4830]: I0131 09:14:36.926523 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w625v\" (UniqueName: \"kubernetes.io/projected/57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d-kube-api-access-w625v\") pod \"57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d\" (UID: \"57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d\") " Jan 31 09:14:36 crc kubenswrapper[4830]: I0131 09:14:36.926548 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d-catalog-content\") pod \"57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d\" (UID: \"57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d\") " Jan 31 09:14:36 crc kubenswrapper[4830]: I0131 09:14:36.928085 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d-utilities" (OuterVolumeSpecName: "utilities") pod "57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d" (UID: "57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:14:36 crc kubenswrapper[4830]: I0131 09:14:36.934626 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d-kube-api-access-w625v" (OuterVolumeSpecName: "kube-api-access-w625v") pod "57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d" (UID: "57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d"). InnerVolumeSpecName "kube-api-access-w625v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:14:37 crc kubenswrapper[4830]: I0131 09:14:37.028201 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w625v\" (UniqueName: \"kubernetes.io/projected/57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d-kube-api-access-w625v\") on node \"crc\" DevicePath \"\"" Jan 31 09:14:37 crc kubenswrapper[4830]: I0131 09:14:37.028630 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:14:37 crc kubenswrapper[4830]: I0131 09:14:37.038056 4830 generic.go:334] "Generic (PLEG): container finished" podID="57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d" containerID="fb7e9811070339627e73a4d38b6beaaf1e3bba3298bbb6a2b6cea153cb3d54f1" exitCode=0 Jan 31 09:14:37 crc kubenswrapper[4830]: I0131 09:14:37.038121 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxcvk" event={"ID":"57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d","Type":"ContainerDied","Data":"fb7e9811070339627e73a4d38b6beaaf1e3bba3298bbb6a2b6cea153cb3d54f1"} Jan 31 09:14:37 crc kubenswrapper[4830]: I0131 09:14:37.038306 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hxcvk" Jan 31 09:14:37 crc kubenswrapper[4830]: I0131 09:14:37.038351 4830 scope.go:117] "RemoveContainer" containerID="fb7e9811070339627e73a4d38b6beaaf1e3bba3298bbb6a2b6cea153cb3d54f1" Jan 31 09:14:37 crc kubenswrapper[4830]: I0131 09:14:37.038326 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxcvk" event={"ID":"57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d","Type":"ContainerDied","Data":"6c5cce48eddbe972a4ae98ba7c20ae3920ae3d751e5c72f9d82506a432847b5f"} Jan 31 09:14:37 crc kubenswrapper[4830]: I0131 09:14:37.059672 4830 scope.go:117] "RemoveContainer" containerID="1ffb8e674e4ee036c9affb7098294a3402ad657094f18ddd339ab0fe06a32537" Jan 31 09:14:37 crc kubenswrapper[4830]: I0131 09:14:37.059645 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d" (UID: "57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:14:37 crc kubenswrapper[4830]: I0131 09:14:37.090302 4830 scope.go:117] "RemoveContainer" containerID="d3cfabc3b61ef8f33f78b1f06a448a2d3f6f77ceff7e50f75b9a01a5d51cf8b8" Jan 31 09:14:37 crc kubenswrapper[4830]: I0131 09:14:37.112106 4830 scope.go:117] "RemoveContainer" containerID="fb7e9811070339627e73a4d38b6beaaf1e3bba3298bbb6a2b6cea153cb3d54f1" Jan 31 09:14:37 crc kubenswrapper[4830]: E0131 09:14:37.112856 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb7e9811070339627e73a4d38b6beaaf1e3bba3298bbb6a2b6cea153cb3d54f1\": container with ID starting with fb7e9811070339627e73a4d38b6beaaf1e3bba3298bbb6a2b6cea153cb3d54f1 not found: ID does not exist" containerID="fb7e9811070339627e73a4d38b6beaaf1e3bba3298bbb6a2b6cea153cb3d54f1" Jan 31 09:14:37 crc kubenswrapper[4830]: I0131 09:14:37.112928 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb7e9811070339627e73a4d38b6beaaf1e3bba3298bbb6a2b6cea153cb3d54f1"} err="failed to get container status \"fb7e9811070339627e73a4d38b6beaaf1e3bba3298bbb6a2b6cea153cb3d54f1\": rpc error: code = NotFound desc = could not find container \"fb7e9811070339627e73a4d38b6beaaf1e3bba3298bbb6a2b6cea153cb3d54f1\": container with ID starting with fb7e9811070339627e73a4d38b6beaaf1e3bba3298bbb6a2b6cea153cb3d54f1 not found: ID does not exist" Jan 31 09:14:37 crc kubenswrapper[4830]: I0131 09:14:37.112970 4830 scope.go:117] "RemoveContainer" containerID="1ffb8e674e4ee036c9affb7098294a3402ad657094f18ddd339ab0fe06a32537" Jan 31 09:14:37 crc kubenswrapper[4830]: E0131 09:14:37.113598 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ffb8e674e4ee036c9affb7098294a3402ad657094f18ddd339ab0fe06a32537\": container with ID starting with 1ffb8e674e4ee036c9affb7098294a3402ad657094f18ddd339ab0fe06a32537 not found: ID does not exist" containerID="1ffb8e674e4ee036c9affb7098294a3402ad657094f18ddd339ab0fe06a32537" Jan 31 09:14:37 crc kubenswrapper[4830]: I0131 09:14:37.113637 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ffb8e674e4ee036c9affb7098294a3402ad657094f18ddd339ab0fe06a32537"} err="failed to get container status \"1ffb8e674e4ee036c9affb7098294a3402ad657094f18ddd339ab0fe06a32537\": rpc error: code = NotFound desc = could not find container \"1ffb8e674e4ee036c9affb7098294a3402ad657094f18ddd339ab0fe06a32537\": container with ID starting with 1ffb8e674e4ee036c9affb7098294a3402ad657094f18ddd339ab0fe06a32537 not found: ID does not exist" Jan 31 09:14:37 crc kubenswrapper[4830]: I0131 09:14:37.113667 4830 scope.go:117] "RemoveContainer" containerID="d3cfabc3b61ef8f33f78b1f06a448a2d3f6f77ceff7e50f75b9a01a5d51cf8b8" Jan 31 09:14:37 crc kubenswrapper[4830]: E0131 09:14:37.114313 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3cfabc3b61ef8f33f78b1f06a448a2d3f6f77ceff7e50f75b9a01a5d51cf8b8\": container with ID starting with d3cfabc3b61ef8f33f78b1f06a448a2d3f6f77ceff7e50f75b9a01a5d51cf8b8 not found: ID does not exist" containerID="d3cfabc3b61ef8f33f78b1f06a448a2d3f6f77ceff7e50f75b9a01a5d51cf8b8" Jan 31 09:14:37 crc kubenswrapper[4830]: I0131 09:14:37.114359 4830 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d3cfabc3b61ef8f33f78b1f06a448a2d3f6f77ceff7e50f75b9a01a5d51cf8b8"} err="failed to get container status \"d3cfabc3b61ef8f33f78b1f06a448a2d3f6f77ceff7e50f75b9a01a5d51cf8b8\": rpc error: code = NotFound desc = could not find container \"d3cfabc3b61ef8f33f78b1f06a448a2d3f6f77ceff7e50f75b9a01a5d51cf8b8\": container with ID starting with d3cfabc3b61ef8f33f78b1f06a448a2d3f6f77ceff7e50f75b9a01a5d51cf8b8 not found: ID does not exist" Jan 31 09:14:37 crc kubenswrapper[4830]: I0131 09:14:37.130774 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:14:37 crc kubenswrapper[4830]: I0131 09:14:37.386599 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hxcvk"] Jan 31 09:14:37 crc kubenswrapper[4830]: I0131 09:14:37.402004 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hxcvk"] Jan 31 09:14:38 crc kubenswrapper[4830]: I0131 09:14:38.262765 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d" path="/var/lib/kubelet/pods/57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d/volumes" Jan 31 09:14:42 crc kubenswrapper[4830]: I0131 09:14:42.090667 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" event={"ID":"ce3329e2-9eca-4a04-bf1d-0578e12beaa5","Type":"ContainerStarted","Data":"3d641c9a21c89a6b510725bb30270c2fd9b24b6ebccd0da6c7ea8f86812e41df"} Jan 31 09:14:42 crc kubenswrapper[4830]: I0131 09:14:42.091242 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" Jan 31 09:14:42 crc kubenswrapper[4830]: I0131 09:14:42.093882 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" Jan 31 09:14:42 crc kubenswrapper[4830]: I0131 09:14:42.119382 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" podStartSLOduration=1.707283036 podStartE2EDuration="12.119351744s" podCreationTimestamp="2026-01-31 09:14:30 +0000 UTC" firstStartedPulling="2026-01-31 09:14:31.028852709 +0000 UTC m=+815.522215151" lastFinishedPulling="2026-01-31 09:14:41.440921417 +0000 UTC m=+825.934283859" observedRunningTime="2026-01-31 09:14:42.111387725 +0000 UTC m=+826.604750167" watchObservedRunningTime="2026-01-31 09:14:42.119351744 +0000 UTC m=+826.612714186" Jan 31 09:14:46 crc kubenswrapper[4830]: I0131 09:14:46.438780 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Jan 31 09:14:46 crc kubenswrapper[4830]: E0131 09:14:46.439665 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d" containerName="extract-content" Jan 31 09:14:46 crc kubenswrapper[4830]: I0131 09:14:46.439688 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d" containerName="extract-content" Jan 31 09:14:46 crc kubenswrapper[4830]: E0131 09:14:46.439703 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d" containerName="registry-server" Jan 31 09:14:46 crc kubenswrapper[4830]: 
I0131 09:14:46.439711 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d" containerName="registry-server" Jan 31 09:14:46 crc kubenswrapper[4830]: E0131 09:14:46.439750 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d" containerName="extract-utilities" Jan 31 09:14:46 crc kubenswrapper[4830]: I0131 09:14:46.439759 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d" containerName="extract-utilities" Jan 31 09:14:46 crc kubenswrapper[4830]: I0131 09:14:46.439904 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="57f44b36-ee6c-4fd9-8ec5-1fdea9d8594d" containerName="registry-server" Jan 31 09:14:46 crc kubenswrapper[4830]: I0131 09:14:46.440484 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Jan 31 09:14:46 crc kubenswrapper[4830]: I0131 09:14:46.443469 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Jan 31 09:14:46 crc kubenswrapper[4830]: I0131 09:14:46.443966 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Jan 31 09:14:46 crc kubenswrapper[4830]: I0131 09:14:46.445888 4830 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-59gvl" Jan 31 09:14:46 crc kubenswrapper[4830]: I0131 09:14:46.449583 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 31 09:14:46 crc kubenswrapper[4830]: I0131 09:14:46.601525 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e3f54fd8-5cde-4ed6-8ed3-43f6a619984b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3f54fd8-5cde-4ed6-8ed3-43f6a619984b\") pod \"minio\" (UID: \"9914cc68-bf73-4f01-843a-90870531071b\") " pod="minio-dev/minio" Jan 31 09:14:46 crc kubenswrapper[4830]: I0131 09:14:46.601770 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzbp4\" (UniqueName: \"kubernetes.io/projected/9914cc68-bf73-4f01-843a-90870531071b-kube-api-access-zzbp4\") pod \"minio\" (UID: \"9914cc68-bf73-4f01-843a-90870531071b\") " pod="minio-dev/minio" Jan 31 09:14:46 crc kubenswrapper[4830]: I0131 09:14:46.703851 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzbp4\" (UniqueName: \"kubernetes.io/projected/9914cc68-bf73-4f01-843a-90870531071b-kube-api-access-zzbp4\") pod \"minio\" (UID: \"9914cc68-bf73-4f01-843a-90870531071b\") " pod="minio-dev/minio" Jan 31 09:14:46 crc kubenswrapper[4830]: I0131 09:14:46.703976 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e3f54fd8-5cde-4ed6-8ed3-43f6a619984b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3f54fd8-5cde-4ed6-8ed3-43f6a619984b\") pod \"minio\" (UID: \"9914cc68-bf73-4f01-843a-90870531071b\") " pod="minio-dev/minio" Jan 31 09:14:46 crc kubenswrapper[4830]: I0131 09:14:46.707902 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 31 09:14:46 crc kubenswrapper[4830]: I0131 09:14:46.707968 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e3f54fd8-5cde-4ed6-8ed3-43f6a619984b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3f54fd8-5cde-4ed6-8ed3-43f6a619984b\") pod \"minio\" (UID: \"9914cc68-bf73-4f01-843a-90870531071b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1bb47fc183f6db92701d913c269ba04deca47ef019d46e9187d764852fe625f3/globalmount\"" pod="minio-dev/minio"
Jan 31 09:14:46 crc kubenswrapper[4830]: I0131 09:14:46.731903 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzbp4\" (UniqueName: \"kubernetes.io/projected/9914cc68-bf73-4f01-843a-90870531071b-kube-api-access-zzbp4\") pod \"minio\" (UID: \"9914cc68-bf73-4f01-843a-90870531071b\") " pod="minio-dev/minio"
Jan 31 09:14:46 crc kubenswrapper[4830]: I0131 09:14:46.745831 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e3f54fd8-5cde-4ed6-8ed3-43f6a619984b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3f54fd8-5cde-4ed6-8ed3-43f6a619984b\") pod \"minio\" (UID: \"9914cc68-bf73-4f01-843a-90870531071b\") " pod="minio-dev/minio"
Jan 31 09:14:46 crc kubenswrapper[4830]: I0131 09:14:46.804320 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Jan 31 09:14:47 crc kubenswrapper[4830]: I0131 09:14:47.294364 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Jan 31 09:14:48 crc kubenswrapper[4830]: I0131 09:14:48.139304 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"9914cc68-bf73-4f01-843a-90870531071b","Type":"ContainerStarted","Data":"a3254d602faf555f5b4501eb3e98320454e244d827209a03a1011e9b386f8503"}
Jan 31 09:14:51 crc kubenswrapper[4830]: I0131 09:14:51.162230 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"9914cc68-bf73-4f01-843a-90870531071b","Type":"ContainerStarted","Data":"968c4a9a0a1620d3aa2a977b887222750f2ce62dbdb673c2eb78a4e3204c7b82"}
Jan 31 09:14:51 crc kubenswrapper[4830]: I0131 09:14:51.204109 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=3.88563194 podStartE2EDuration="7.204084618s" podCreationTimestamp="2026-01-31 09:14:44 +0000 UTC" firstStartedPulling="2026-01-31 09:14:47.27937431 +0000 UTC m=+831.772736752" lastFinishedPulling="2026-01-31 09:14:50.597826938 +0000 UTC m=+835.091189430" observedRunningTime="2026-01-31 09:14:51.195866982 +0000 UTC m=+835.689229424" watchObservedRunningTime="2026-01-31 09:14:51.204084618 +0000 UTC m=+835.697447060"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.325103 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc"]
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.326828 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.333977 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.334166 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.334659 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-6gh5t"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.334932 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.336178 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.340056 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc"]
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.423891 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/e5b91203-480c-424e-877a-5f2f437d1ada-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-vm6jc\" (UID: \"e5b91203-480c-424e-877a-5f2f437d1ada\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.424438 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5b91203-480c-424e-877a-5f2f437d1ada-config\") pod \"logging-loki-distributor-5f678c8dd6-vm6jc\" (UID: \"e5b91203-480c-424e-877a-5f2f437d1ada\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.424498 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/e5b91203-480c-424e-877a-5f2f437d1ada-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-vm6jc\" (UID: \"e5b91203-480c-424e-877a-5f2f437d1ada\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.424526 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5b91203-480c-424e-877a-5f2f437d1ada-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-vm6jc\" (UID: \"e5b91203-480c-424e-877a-5f2f437d1ada\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.424631 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4krm\" (UniqueName: \"kubernetes.io/projected/e5b91203-480c-424e-877a-5f2f437d1ada-kube-api-access-x4krm\") pod \"logging-loki-distributor-5f678c8dd6-vm6jc\" (UID: \"e5b91203-480c-424e-877a-5f2f437d1ada\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.470232 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-f89hf"]
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.474990 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-f89hf"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.484038 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.484312 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.484536 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.512349 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-f89hf"]
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.532204 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4krm\" (UniqueName: \"kubernetes.io/projected/e5b91203-480c-424e-877a-5f2f437d1ada-kube-api-access-x4krm\") pod \"logging-loki-distributor-5f678c8dd6-vm6jc\" (UID: \"e5b91203-480c-424e-877a-5f2f437d1ada\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.532604 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/e5b91203-480c-424e-877a-5f2f437d1ada-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-vm6jc\" (UID: \"e5b91203-480c-424e-877a-5f2f437d1ada\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.533162 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5b91203-480c-424e-877a-5f2f437d1ada-config\") pod \"logging-loki-distributor-5f678c8dd6-vm6jc\" (UID: \"e5b91203-480c-424e-877a-5f2f437d1ada\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.533239 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/e5b91203-480c-424e-877a-5f2f437d1ada-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-vm6jc\" (UID: \"e5b91203-480c-424e-877a-5f2f437d1ada\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.533276 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5b91203-480c-424e-877a-5f2f437d1ada-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-vm6jc\" (UID: \"e5b91203-480c-424e-877a-5f2f437d1ada\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.553004 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5b91203-480c-424e-877a-5f2f437d1ada-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-vm6jc\" (UID: \"e5b91203-480c-424e-877a-5f2f437d1ada\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.560753 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/e5b91203-480c-424e-877a-5f2f437d1ada-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-vm6jc\" (UID: \"e5b91203-480c-424e-877a-5f2f437d1ada\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.561986 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5b91203-480c-424e-877a-5f2f437d1ada-config\") pod \"logging-loki-distributor-5f678c8dd6-vm6jc\" (UID: \"e5b91203-480c-424e-877a-5f2f437d1ada\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.565207 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/e5b91203-480c-424e-877a-5f2f437d1ada-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-vm6jc\" (UID: \"e5b91203-480c-424e-877a-5f2f437d1ada\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.568364 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4krm\" (UniqueName: \"kubernetes.io/projected/e5b91203-480c-424e-877a-5f2f437d1ada-kube-api-access-x4krm\") pod \"logging-loki-distributor-5f678c8dd6-vm6jc\" (UID: \"e5b91203-480c-424e-877a-5f2f437d1ada\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.582566 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn"]
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.583959 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.616101 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.618354 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.632541 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn"]
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.634829 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8aa52b7a-444c-4f07-9c3a-c2223e966e34-logging-loki-s3\") pod \"logging-loki-querier-76788598db-f89hf\" (UID: \"8aa52b7a-444c-4f07-9c3a-c2223e966e34\") " pod="openshift-logging/logging-loki-querier-76788598db-f89hf"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.634889 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/8aa52b7a-444c-4f07-9c3a-c2223e966e34-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-f89hf\" (UID: \"8aa52b7a-444c-4f07-9c3a-c2223e966e34\") " pod="openshift-logging/logging-loki-querier-76788598db-f89hf"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.634952 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/8aa52b7a-444c-4f07-9c3a-c2223e966e34-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-f89hf\" (UID: \"8aa52b7a-444c-4f07-9c3a-c2223e966e34\") " pod="openshift-logging/logging-loki-querier-76788598db-f89hf"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.634974 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8aa52b7a-444c-4f07-9c3a-c2223e966e34-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-f89hf\" (UID: \"8aa52b7a-444c-4f07-9c3a-c2223e966e34\") " pod="openshift-logging/logging-loki-querier-76788598db-f89hf"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.635008 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8aa52b7a-444c-4f07-9c3a-c2223e966e34-config\") pod \"logging-loki-querier-76788598db-f89hf\" (UID: \"8aa52b7a-444c-4f07-9c3a-c2223e966e34\") " pod="openshift-logging/logging-loki-querier-76788598db-f89hf"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.635039 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhvlv\" (UniqueName: \"kubernetes.io/projected/8aa52b7a-444c-4f07-9c3a-c2223e966e34-kube-api-access-mhvlv\") pod \"logging-loki-querier-76788598db-f89hf\" (UID: \"8aa52b7a-444c-4f07-9c3a-c2223e966e34\") " pod="openshift-logging/logging-loki-querier-76788598db-f89hf"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.667418 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.737251 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/6a2f00bb-9954-46d0-901b-3d9a82939850-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-8k7rn\" (UID: \"6a2f00bb-9954-46d0-901b-3d9a82939850\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.737310 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/6a2f00bb-9954-46d0-901b-3d9a82939850-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-8k7rn\" (UID: \"6a2f00bb-9954-46d0-901b-3d9a82939850\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.737353 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8aa52b7a-444c-4f07-9c3a-c2223e966e34-logging-loki-s3\") pod \"logging-loki-querier-76788598db-f89hf\" (UID: \"8aa52b7a-444c-4f07-9c3a-c2223e966e34\") " pod="openshift-logging/logging-loki-querier-76788598db-f89hf"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.737375 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/8aa52b7a-444c-4f07-9c3a-c2223e966e34-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-f89hf\" (UID: \"8aa52b7a-444c-4f07-9c3a-c2223e966e34\") " pod="openshift-logging/logging-loki-querier-76788598db-f89hf"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.737425 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5vtv\" (UniqueName: \"kubernetes.io/projected/6a2f00bb-9954-46d0-901b-3d9a82939850-kube-api-access-r5vtv\") pod \"logging-loki-query-frontend-69d9546745-8k7rn\" (UID: \"6a2f00bb-9954-46d0-901b-3d9a82939850\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.737459 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a2f00bb-9954-46d0-901b-3d9a82939850-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-8k7rn\" (UID: \"6a2f00bb-9954-46d0-901b-3d9a82939850\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.737476 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a2f00bb-9954-46d0-901b-3d9a82939850-config\") pod \"logging-loki-query-frontend-69d9546745-8k7rn\" (UID: \"6a2f00bb-9954-46d0-901b-3d9a82939850\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.737508 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/8aa52b7a-444c-4f07-9c3a-c2223e966e34-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-f89hf\" (UID: \"8aa52b7a-444c-4f07-9c3a-c2223e966e34\") " pod="openshift-logging/logging-loki-querier-76788598db-f89hf"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.737528 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8aa52b7a-444c-4f07-9c3a-c2223e966e34-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-f89hf\" (UID: \"8aa52b7a-444c-4f07-9c3a-c2223e966e34\") " pod="openshift-logging/logging-loki-querier-76788598db-f89hf"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.737557 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8aa52b7a-444c-4f07-9c3a-c2223e966e34-config\") pod \"logging-loki-querier-76788598db-f89hf\" (UID: \"8aa52b7a-444c-4f07-9c3a-c2223e966e34\") " pod="openshift-logging/logging-loki-querier-76788598db-f89hf"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.737578 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhvlv\" (UniqueName: \"kubernetes.io/projected/8aa52b7a-444c-4f07-9c3a-c2223e966e34-kube-api-access-mhvlv\") pod \"logging-loki-querier-76788598db-f89hf\" (UID: \"8aa52b7a-444c-4f07-9c3a-c2223e966e34\") " pod="openshift-logging/logging-loki-querier-76788598db-f89hf"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.742317 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8aa52b7a-444c-4f07-9c3a-c2223e966e34-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-f89hf\" (UID: \"8aa52b7a-444c-4f07-9c3a-c2223e966e34\") " pod="openshift-logging/logging-loki-querier-76788598db-f89hf"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.747461 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8aa52b7a-444c-4f07-9c3a-c2223e966e34-config\") pod \"logging-loki-querier-76788598db-f89hf\" (UID: \"8aa52b7a-444c-4f07-9c3a-c2223e966e34\") " pod="openshift-logging/logging-loki-querier-76788598db-f89hf"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.752653 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8aa52b7a-444c-4f07-9c3a-c2223e966e34-logging-loki-s3\") pod \"logging-loki-querier-76788598db-f89hf\" (UID: \"8aa52b7a-444c-4f07-9c3a-c2223e966e34\") " pod="openshift-logging/logging-loki-querier-76788598db-f89hf"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.753061 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/8aa52b7a-444c-4f07-9c3a-c2223e966e34-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-f89hf\" (UID: \"8aa52b7a-444c-4f07-9c3a-c2223e966e34\") " pod="openshift-logging/logging-loki-querier-76788598db-f89hf"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.755568 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/8aa52b7a-444c-4f07-9c3a-c2223e966e34-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-f89hf\" (UID: \"8aa52b7a-444c-4f07-9c3a-c2223e966e34\") " pod="openshift-logging/logging-loki-querier-76788598db-f89hf"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.780832 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhvlv\" (UniqueName: \"kubernetes.io/projected/8aa52b7a-444c-4f07-9c3a-c2223e966e34-kube-api-access-mhvlv\") pod \"logging-loki-querier-76788598db-f89hf\" (UID: \"8aa52b7a-444c-4f07-9c3a-c2223e966e34\") " pod="openshift-logging/logging-loki-querier-76788598db-f89hf"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.803994 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-f89hf"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.841855 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5vtv\" (UniqueName: \"kubernetes.io/projected/6a2f00bb-9954-46d0-901b-3d9a82939850-kube-api-access-r5vtv\") pod \"logging-loki-query-frontend-69d9546745-8k7rn\" (UID: \"6a2f00bb-9954-46d0-901b-3d9a82939850\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.841942 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a2f00bb-9954-46d0-901b-3d9a82939850-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-8k7rn\" (UID: \"6a2f00bb-9954-46d0-901b-3d9a82939850\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.841980 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a2f00bb-9954-46d0-901b-3d9a82939850-config\") pod \"logging-loki-query-frontend-69d9546745-8k7rn\" (UID: \"6a2f00bb-9954-46d0-901b-3d9a82939850\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.842058 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/6a2f00bb-9954-46d0-901b-3d9a82939850-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-8k7rn\" (UID: \"6a2f00bb-9954-46d0-901b-3d9a82939850\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.842105 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/6a2f00bb-9954-46d0-901b-3d9a82939850-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-8k7rn\" (UID: \"6a2f00bb-9954-46d0-901b-3d9a82939850\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.843337 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a2f00bb-9954-46d0-901b-3d9a82939850-config\") pod \"logging-loki-query-frontend-69d9546745-8k7rn\" (UID: \"6a2f00bb-9954-46d0-901b-3d9a82939850\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.843905 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a2f00bb-9954-46d0-901b-3d9a82939850-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-8k7rn\" (UID: \"6a2f00bb-9954-46d0-901b-3d9a82939850\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.845517 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/6a2f00bb-9954-46d0-901b-3d9a82939850-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-8k7rn\" (UID: \"6a2f00bb-9954-46d0-901b-3d9a82939850\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.857646 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/6a2f00bb-9954-46d0-901b-3d9a82939850-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-8k7rn\" (UID: \"6a2f00bb-9954-46d0-901b-3d9a82939850\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.876772 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-74c87577db-hwvhd"]
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.878916 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.888674 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5vtv\" (UniqueName: \"kubernetes.io/projected/6a2f00bb-9954-46d0-901b-3d9a82939850-kube-api-access-r5vtv\") pod \"logging-loki-query-frontend-69d9546745-8k7rn\" (UID: \"6a2f00bb-9954-46d0-901b-3d9a82939850\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.889122 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.901098 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.901604 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-9kd6b"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.901844 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.902005 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.910062 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.931914 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.959543 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-74c87577db-hwvhd"]
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.978799 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-74c87577db-fjtpt"]
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.980099 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt"
Jan 31 09:14:55 crc kubenswrapper[4830]: I0131 09:14:55.989878 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-74c87577db-fjtpt"]
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.048816 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb8lg\" (UniqueName: \"kubernetes.io/projected/fd432483-7467-4c9d-a13e-8ee908a8ed2b-kube-api-access-vb8lg\") pod \"logging-loki-gateway-74c87577db-hwvhd\" (UID: \"fd432483-7467-4c9d-a13e-8ee908a8ed2b\") " pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.049960 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd432483-7467-4c9d-a13e-8ee908a8ed2b-logging-loki-ca-bundle\") pod \"logging-loki-gateway-74c87577db-hwvhd\" (UID: \"fd432483-7467-4c9d-a13e-8ee908a8ed2b\") " pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.050008 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/fd432483-7467-4c9d-a13e-8ee908a8ed2b-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-74c87577db-hwvhd\" (UID: \"fd432483-7467-4c9d-a13e-8ee908a8ed2b\") " pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.050050 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/fd432483-7467-4c9d-a13e-8ee908a8ed2b-lokistack-gateway\") pod \"logging-loki-gateway-74c87577db-hwvhd\" (UID: \"fd432483-7467-4c9d-a13e-8ee908a8ed2b\") " pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.050118 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/fd432483-7467-4c9d-a13e-8ee908a8ed2b-tls-secret\") pod \"logging-loki-gateway-74c87577db-hwvhd\" (UID: \"fd432483-7467-4c9d-a13e-8ee908a8ed2b\") " pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.050146 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/fd432483-7467-4c9d-a13e-8ee908a8ed2b-tenants\") pod \"logging-loki-gateway-74c87577db-hwvhd\" (UID: \"fd432483-7467-4c9d-a13e-8ee908a8ed2b\") " pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.050166 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/fd432483-7467-4c9d-a13e-8ee908a8ed2b-rbac\") pod \"logging-loki-gateway-74c87577db-hwvhd\" (UID: \"fd432483-7467-4c9d-a13e-8ee908a8ed2b\") " pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.050187 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd432483-7467-4c9d-a13e-8ee908a8ed2b-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-74c87577db-hwvhd\" (UID: \"fd432483-7467-4c9d-a13e-8ee908a8ed2b\") " pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.152193 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd432483-7467-4c9d-a13e-8ee908a8ed2b-logging-loki-ca-bundle\") pod \"logging-loki-gateway-74c87577db-hwvhd\" (UID: \"fd432483-7467-4c9d-a13e-8ee908a8ed2b\") " pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.152261 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/fd432483-7467-4c9d-a13e-8ee908a8ed2b-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-74c87577db-hwvhd\" (UID: \"fd432483-7467-4c9d-a13e-8ee908a8ed2b\") " pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.152296 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/867e058e-8774-4ff8-af99-a8f35ac530ce-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-74c87577db-fjtpt\" (UID: \"867e058e-8774-4ff8-af99-a8f35ac530ce\") " pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.152325 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs2gh\" (UniqueName: \"kubernetes.io/projected/867e058e-8774-4ff8-af99-a8f35ac530ce-kube-api-access-rs2gh\") pod \"logging-loki-gateway-74c87577db-fjtpt\" (UID: \"867e058e-8774-4ff8-af99-a8f35ac530ce\") " pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.152558 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/fd432483-7467-4c9d-a13e-8ee908a8ed2b-lokistack-gateway\") pod \"logging-loki-gateway-74c87577db-hwvhd\" (UID: \"fd432483-7467-4c9d-a13e-8ee908a8ed2b\") " pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.152593 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/867e058e-8774-4ff8-af99-a8f35ac530ce-lokistack-gateway\") pod \"logging-loki-gateway-74c87577db-fjtpt\" (UID: \"867e058e-8774-4ff8-af99-a8f35ac530ce\") " pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.152615 4830 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/867e058e-8774-4ff8-af99-a8f35ac530ce-tenants\") pod \"logging-loki-gateway-74c87577db-fjtpt\" (UID: \"867e058e-8774-4ff8-af99-a8f35ac530ce\") " pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.152636 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/867e058e-8774-4ff8-af99-a8f35ac530ce-tls-secret\") pod \"logging-loki-gateway-74c87577db-fjtpt\" (UID: \"867e058e-8774-4ff8-af99-a8f35ac530ce\") " pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.152666 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/fd432483-7467-4c9d-a13e-8ee908a8ed2b-tenants\") pod \"logging-loki-gateway-74c87577db-hwvhd\" (UID: \"fd432483-7467-4c9d-a13e-8ee908a8ed2b\") " pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.152682 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/fd432483-7467-4c9d-a13e-8ee908a8ed2b-tls-secret\") pod \"logging-loki-gateway-74c87577db-hwvhd\" (UID: \"fd432483-7467-4c9d-a13e-8ee908a8ed2b\") " pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.152706 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/fd432483-7467-4c9d-a13e-8ee908a8ed2b-rbac\") pod \"logging-loki-gateway-74c87577db-hwvhd\" (UID: \"fd432483-7467-4c9d-a13e-8ee908a8ed2b\") " pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.152744 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd432483-7467-4c9d-a13e-8ee908a8ed2b-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-74c87577db-hwvhd\" (UID: \"fd432483-7467-4c9d-a13e-8ee908a8ed2b\") " pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.152779 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/867e058e-8774-4ff8-af99-a8f35ac530ce-rbac\") pod \"logging-loki-gateway-74c87577db-fjtpt\" (UID: \"867e058e-8774-4ff8-af99-a8f35ac530ce\") " pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.152800 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/867e058e-8774-4ff8-af99-a8f35ac530ce-logging-loki-ca-bundle\") pod \"logging-loki-gateway-74c87577db-fjtpt\" (UID: \"867e058e-8774-4ff8-af99-a8f35ac530ce\") " pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.152838 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb8lg\" (UniqueName: \"kubernetes.io/projected/fd432483-7467-4c9d-a13e-8ee908a8ed2b-kube-api-access-vb8lg\") pod 
\"logging-loki-gateway-74c87577db-hwvhd\" (UID: \"fd432483-7467-4c9d-a13e-8ee908a8ed2b\") " pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.152857 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/867e058e-8774-4ff8-af99-a8f35ac530ce-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-74c87577db-fjtpt\" (UID: \"867e058e-8774-4ff8-af99-a8f35ac530ce\") " pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" Jan 31 09:14:56 crc kubenswrapper[4830]: E0131 09:14:56.154177 4830 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Jan 31 09:14:56 crc kubenswrapper[4830]: E0131 09:14:56.154275 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd432483-7467-4c9d-a13e-8ee908a8ed2b-tls-secret podName:fd432483-7467-4c9d-a13e-8ee908a8ed2b nodeName:}" failed. No retries permitted until 2026-01-31 09:14:56.654248952 +0000 UTC m=+841.147611394 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/fd432483-7467-4c9d-a13e-8ee908a8ed2b-tls-secret") pod "logging-loki-gateway-74c87577db-hwvhd" (UID: "fd432483-7467-4c9d-a13e-8ee908a8ed2b") : secret "logging-loki-gateway-http" not found Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.155361 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/fd432483-7467-4c9d-a13e-8ee908a8ed2b-rbac\") pod \"logging-loki-gateway-74c87577db-hwvhd\" (UID: \"fd432483-7467-4c9d-a13e-8ee908a8ed2b\") " pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.156146 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd432483-7467-4c9d-a13e-8ee908a8ed2b-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-74c87577db-hwvhd\" (UID: \"fd432483-7467-4c9d-a13e-8ee908a8ed2b\") " pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.156610 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/fd432483-7467-4c9d-a13e-8ee908a8ed2b-lokistack-gateway\") pod \"logging-loki-gateway-74c87577db-hwvhd\" (UID: \"fd432483-7467-4c9d-a13e-8ee908a8ed2b\") " pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.157511 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd432483-7467-4c9d-a13e-8ee908a8ed2b-logging-loki-ca-bundle\") pod \"logging-loki-gateway-74c87577db-hwvhd\" (UID: \"fd432483-7467-4c9d-a13e-8ee908a8ed2b\") " pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.158708 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/fd432483-7467-4c9d-a13e-8ee908a8ed2b-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-74c87577db-hwvhd\" (UID: \"fd432483-7467-4c9d-a13e-8ee908a8ed2b\") " 
pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.160044 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/fd432483-7467-4c9d-a13e-8ee908a8ed2b-tenants\") pod \"logging-loki-gateway-74c87577db-hwvhd\" (UID: \"fd432483-7467-4c9d-a13e-8ee908a8ed2b\") " pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.179092 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc"] Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.179582 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb8lg\" (UniqueName: \"kubernetes.io/projected/fd432483-7467-4c9d-a13e-8ee908a8ed2b-kube-api-access-vb8lg\") pod \"logging-loki-gateway-74c87577db-hwvhd\" (UID: \"fd432483-7467-4c9d-a13e-8ee908a8ed2b\") " pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.204582 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc" event={"ID":"e5b91203-480c-424e-877a-5f2f437d1ada","Type":"ContainerStarted","Data":"adcdf4f3ac9ababc436860539124bf067432b9981c65c7a8afe9a3f82949b9b7"} Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.254841 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/867e058e-8774-4ff8-af99-a8f35ac530ce-lokistack-gateway\") pod \"logging-loki-gateway-74c87577db-fjtpt\" (UID: \"867e058e-8774-4ff8-af99-a8f35ac530ce\") " pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.254893 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/867e058e-8774-4ff8-af99-a8f35ac530ce-tenants\") pod \"logging-loki-gateway-74c87577db-fjtpt\" (UID: \"867e058e-8774-4ff8-af99-a8f35ac530ce\") " pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.254916 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/867e058e-8774-4ff8-af99-a8f35ac530ce-tls-secret\") pod \"logging-loki-gateway-74c87577db-fjtpt\" (UID: \"867e058e-8774-4ff8-af99-a8f35ac530ce\") " pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.254972 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/867e058e-8774-4ff8-af99-a8f35ac530ce-rbac\") pod \"logging-loki-gateway-74c87577db-fjtpt\" (UID: \"867e058e-8774-4ff8-af99-a8f35ac530ce\") " pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.254990 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/867e058e-8774-4ff8-af99-a8f35ac530ce-logging-loki-ca-bundle\") pod \"logging-loki-gateway-74c87577db-fjtpt\" (UID: \"867e058e-8774-4ff8-af99-a8f35ac530ce\") " pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.255029 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/867e058e-8774-4ff8-af99-a8f35ac530ce-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-74c87577db-fjtpt\" (UID: \"867e058e-8774-4ff8-af99-a8f35ac530ce\") " pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.255085 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/867e058e-8774-4ff8-af99-a8f35ac530ce-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-74c87577db-fjtpt\" (UID: \"867e058e-8774-4ff8-af99-a8f35ac530ce\") " pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.255102 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs2gh\" (UniqueName: \"kubernetes.io/projected/867e058e-8774-4ff8-af99-a8f35ac530ce-kube-api-access-rs2gh\") pod \"logging-loki-gateway-74c87577db-fjtpt\" (UID: \"867e058e-8774-4ff8-af99-a8f35ac530ce\") " pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.256290 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/867e058e-8774-4ff8-af99-a8f35ac530ce-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-74c87577db-fjtpt\" (UID: \"867e058e-8774-4ff8-af99-a8f35ac530ce\") " pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.256429 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/867e058e-8774-4ff8-af99-a8f35ac530ce-rbac\") pod \"logging-loki-gateway-74c87577db-fjtpt\" (UID: \"867e058e-8774-4ff8-af99-a8f35ac530ce\") " pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.257069 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/867e058e-8774-4ff8-af99-a8f35ac530ce-logging-loki-ca-bundle\") pod \"logging-loki-gateway-74c87577db-fjtpt\" (UID: \"867e058e-8774-4ff8-af99-a8f35ac530ce\") " pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" Jan 31 09:14:56 crc kubenswrapper[4830]: E0131 09:14:56.257219 4830 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Jan 31 09:14:56 crc kubenswrapper[4830]: E0131 09:14:56.257282 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/867e058e-8774-4ff8-af99-a8f35ac530ce-tls-secret podName:867e058e-8774-4ff8-af99-a8f35ac530ce nodeName:}" failed. No retries permitted until 2026-01-31 09:14:56.757258407 +0000 UTC m=+841.250621049 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/867e058e-8774-4ff8-af99-a8f35ac530ce-tls-secret") pod "logging-loki-gateway-74c87577db-fjtpt" (UID: "867e058e-8774-4ff8-af99-a8f35ac530ce") : secret "logging-loki-gateway-http" not found Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.258456 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/867e058e-8774-4ff8-af99-a8f35ac530ce-lokistack-gateway\") pod \"logging-loki-gateway-74c87577db-fjtpt\" (UID: \"867e058e-8774-4ff8-af99-a8f35ac530ce\") " pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.272488 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/867e058e-8774-4ff8-af99-a8f35ac530ce-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-74c87577db-fjtpt\" (UID: \"867e058e-8774-4ff8-af99-a8f35ac530ce\") " pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.274923 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/867e058e-8774-4ff8-af99-a8f35ac530ce-tenants\") pod \"logging-loki-gateway-74c87577db-fjtpt\" (UID: \"867e058e-8774-4ff8-af99-a8f35ac530ce\") " pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.279216 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs2gh\" (UniqueName: \"kubernetes.io/projected/867e058e-8774-4ff8-af99-a8f35ac530ce-kube-api-access-rs2gh\") pod \"logging-loki-gateway-74c87577db-fjtpt\" (UID: \"867e058e-8774-4ff8-af99-a8f35ac530ce\") " pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.478172 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.479296 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 31 09:14:56 crc kubenswrapper[4830]: W0131 09:14:56.481439 4830 reflector.go:561] object-"openshift-logging"/"logging-loki-ingester-http": failed to list *v1.Secret: secrets "logging-loki-ingester-http" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-logging": no relationship found between node 'crc' and this object Jan 31 09:14:56 crc kubenswrapper[4830]: E0131 09:14:56.481511 4830 reflector.go:158] "Unhandled Error" err="object-\"openshift-logging\"/\"logging-loki-ingester-http\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"logging-loki-ingester-http\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-logging\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 09:14:56 crc kubenswrapper[4830]: W0131 09:14:56.482086 4830 reflector.go:561] object-"openshift-logging"/"logging-loki-ingester-grpc": failed to list *v1.Secret: secrets "logging-loki-ingester-grpc" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-logging": no relationship found between node 'crc' and this object Jan 31 09:14:56 crc kubenswrapper[4830]: E0131 09:14:56.482114 4830 reflector.go:158] "Unhandled Error" err="object-\"openshift-logging\"/\"logging-loki-ingester-grpc\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"logging-loki-ingester-grpc\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-logging\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.507912 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.528613 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-f89hf"] Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.553418 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.558613 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.559817 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2xjw\" (UniqueName: \"kubernetes.io/projected/07a77a4a-344b-45bb-8488-a536a94185b1-kube-api-access-k2xjw\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") " pod="openshift-logging/logging-loki-ingester-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.559874 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/07a77a4a-344b-45bb-8488-a536a94185b1-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") " pod="openshift-logging/logging-loki-ingester-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.559936 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c5f6bae2-e82e-4217-8267-f61aa94745a3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c5f6bae2-e82e-4217-8267-f61aa94745a3\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") " pod="openshift-logging/logging-loki-ingester-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.559982 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07a77a4a-344b-45bb-8488-a536a94185b1-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") " pod="openshift-logging/logging-loki-ingester-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.560032 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07a77a4a-344b-45bb-8488-a536a94185b1-config\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") " pod="openshift-logging/logging-loki-ingester-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.560056 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/07a77a4a-344b-45bb-8488-a536a94185b1-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") " pod="openshift-logging/logging-loki-ingester-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.560136 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-694ead87-2f6e-4ed8-a619-e0484cea1d73\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-694ead87-2f6e-4ed8-a619-e0484cea1d73\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") " pod="openshift-logging/logging-loki-ingester-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.560169 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/07a77a4a-344b-45bb-8488-a536a94185b1-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") " pod="openshift-logging/logging-loki-ingester-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.562210 4830 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.562442 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.576796 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn"] Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.589532 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.665375 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2xjw\" (UniqueName: \"kubernetes.io/projected/07a77a4a-344b-45bb-8488-a536a94185b1-kube-api-access-k2xjw\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") " pod="openshift-logging/logging-loki-ingester-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.665432 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/07a77a4a-344b-45bb-8488-a536a94185b1-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") " pod="openshift-logging/logging-loki-ingester-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.665475 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70d5f51c-1a87-45fb-8822-7aa0997fceb1-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"70d5f51c-1a87-45fb-8822-7aa0997fceb1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.665501 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c5f6bae2-e82e-4217-8267-f61aa94745a3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c5f6bae2-e82e-4217-8267-f61aa94745a3\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") " pod="openshift-logging/logging-loki-ingester-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.665530 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07a77a4a-344b-45bb-8488-a536a94185b1-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") " pod="openshift-logging/logging-loki-ingester-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.665557 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhsh7\" (UniqueName: \"kubernetes.io/projected/70d5f51c-1a87-45fb-8822-7aa0997fceb1-kube-api-access-fhsh7\") pod \"logging-loki-compactor-0\" (UID: \"70d5f51c-1a87-45fb-8822-7aa0997fceb1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.665579 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d5f51c-1a87-45fb-8822-7aa0997fceb1-config\") pod \"logging-loki-compactor-0\" (UID: \"70d5f51c-1a87-45fb-8822-7aa0997fceb1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 
09:14:56.665600 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07a77a4a-344b-45bb-8488-a536a94185b1-config\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") " pod="openshift-logging/logging-loki-ingester-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.665625 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8dd1cdd5-4eda-4aaa-a480-8f76a92b6f89\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8dd1cdd5-4eda-4aaa-a480-8f76a92b6f89\") pod \"logging-loki-compactor-0\" (UID: \"70d5f51c-1a87-45fb-8822-7aa0997fceb1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.665644 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/07a77a4a-344b-45bb-8488-a536a94185b1-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") " pod="openshift-logging/logging-loki-ingester-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.665671 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/70d5f51c-1a87-45fb-8822-7aa0997fceb1-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"70d5f51c-1a87-45fb-8822-7aa0997fceb1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.665693 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/70d5f51c-1a87-45fb-8822-7aa0997fceb1-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"70d5f51c-1a87-45fb-8822-7aa0997fceb1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.665751 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-694ead87-2f6e-4ed8-a619-e0484cea1d73\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-694ead87-2f6e-4ed8-a619-e0484cea1d73\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") " pod="openshift-logging/logging-loki-ingester-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.665773 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/07a77a4a-344b-45bb-8488-a536a94185b1-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") " pod="openshift-logging/logging-loki-ingester-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.665790 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/70d5f51c-1a87-45fb-8822-7aa0997fceb1-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"70d5f51c-1a87-45fb-8822-7aa0997fceb1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.665814 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/fd432483-7467-4c9d-a13e-8ee908a8ed2b-tls-secret\") pod 
\"logging-loki-gateway-74c87577db-hwvhd\" (UID: \"fd432483-7467-4c9d-a13e-8ee908a8ed2b\") " pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.667668 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07a77a4a-344b-45bb-8488-a536a94185b1-config\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") " pod="openshift-logging/logging-loki-ingester-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.668257 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07a77a4a-344b-45bb-8488-a536a94185b1-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") " pod="openshift-logging/logging-loki-ingester-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.671518 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/fd432483-7467-4c9d-a13e-8ee908a8ed2b-tls-secret\") pod \"logging-loki-gateway-74c87577db-hwvhd\" (UID: \"fd432483-7467-4c9d-a13e-8ee908a8ed2b\") " pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.675569 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.675627 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c5f6bae2-e82e-4217-8267-f61aa94745a3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c5f6bae2-e82e-4217-8267-f61aa94745a3\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b9201d746c80d4db6ce81ed8f97154db3163ccea92192615ea3dc124691eaa63/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.675640 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/07a77a4a-344b-45bb-8488-a536a94185b1-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") " pod="openshift-logging/logging-loki-ingester-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.681259 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.681318 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-694ead87-2f6e-4ed8-a619-e0484cea1d73\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-694ead87-2f6e-4ed8-a619-e0484cea1d73\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b474151029f900bbdce23d6ca8263380f56d26d15ff95bf8bc60cceb71658659/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.701718 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2xjw\" (UniqueName: \"kubernetes.io/projected/07a77a4a-344b-45bb-8488-a536a94185b1-kube-api-access-k2xjw\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.702950 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.715489 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.725704 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.726066 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.734164 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.753901 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-694ead87-2f6e-4ed8-a619-e0484cea1d73\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-694ead87-2f6e-4ed8-a619-e0484cea1d73\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.756644 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c5f6bae2-e82e-4217-8267-f61aa94745a3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c5f6bae2-e82e-4217-8267-f61aa94745a3\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.768225 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5n5d\" (UniqueName: \"kubernetes.io/projected/efadb8be-37d4-4e2b-9df2-3d1301ae81a8-kube-api-access-f5n5d\") pod \"logging-loki-index-gateway-0\" (UID: \"efadb8be-37d4-4e2b-9df2-3d1301ae81a8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.768312 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/70d5f51c-1a87-45fb-8822-7aa0997fceb1-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"70d5f51c-1a87-45fb-8822-7aa0997fceb1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.768344 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efadb8be-37d4-4e2b-9df2-3d1301ae81a8-config\") pod \"logging-loki-index-gateway-0\" (UID: \"efadb8be-37d4-4e2b-9df2-3d1301ae81a8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.768375 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/867e058e-8774-4ff8-af99-a8f35ac530ce-tls-secret\") pod \"logging-loki-gateway-74c87577db-fjtpt\" (UID: \"867e058e-8774-4ff8-af99-a8f35ac530ce\") " pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.768488 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70d5f51c-1a87-45fb-8822-7aa0997fceb1-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"70d5f51c-1a87-45fb-8822-7aa0997fceb1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.768535 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhsh7\" (UniqueName: \"kubernetes.io/projected/70d5f51c-1a87-45fb-8822-7aa0997fceb1-kube-api-access-fhsh7\") pod \"logging-loki-compactor-0\" (UID: \"70d5f51c-1a87-45fb-8822-7aa0997fceb1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.768567 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/efadb8be-37d4-4e2b-9df2-3d1301ae81a8-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"efadb8be-37d4-4e2b-9df2-3d1301ae81a8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.768618 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8dd1cdd5-4eda-4aaa-a480-8f76a92b6f89\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8dd1cdd5-4eda-4aaa-a480-8f76a92b6f89\") pod \"logging-loki-compactor-0\" (UID: \"70d5f51c-1a87-45fb-8822-7aa0997fceb1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.768661 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/70d5f51c-1a87-45fb-8822-7aa0997fceb1-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"70d5f51c-1a87-45fb-8822-7aa0997fceb1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.768695 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/70d5f51c-1a87-45fb-8822-7aa0997fceb1-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"70d5f51c-1a87-45fb-8822-7aa0997fceb1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.768754 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-40749a28-0b78-4877-a064-a246ccd730da\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40749a28-0b78-4877-a064-a246ccd730da\") pod \"logging-loki-index-gateway-0\" (UID: \"efadb8be-37d4-4e2b-9df2-3d1301ae81a8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.768990 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efadb8be-37d4-4e2b-9df2-3d1301ae81a8-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"efadb8be-37d4-4e2b-9df2-3d1301ae81a8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.769043 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/efadb8be-37d4-4e2b-9df2-3d1301ae81a8-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"efadb8be-37d4-4e2b-9df2-3d1301ae81a8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.769071 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/efadb8be-37d4-4e2b-9df2-3d1301ae81a8-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"efadb8be-37d4-4e2b-9df2-3d1301ae81a8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.769103 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d5f51c-1a87-45fb-8822-7aa0997fceb1-config\") pod \"logging-loki-compactor-0\" (UID: \"70d5f51c-1a87-45fb-8822-7aa0997fceb1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.771290 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d5f51c-1a87-45fb-8822-7aa0997fceb1-config\") pod \"logging-loki-compactor-0\" (UID: \"70d5f51c-1a87-45fb-8822-7aa0997fceb1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.774496 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70d5f51c-1a87-45fb-8822-7aa0997fceb1-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"70d5f51c-1a87-45fb-8822-7aa0997fceb1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.777027 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/867e058e-8774-4ff8-af99-a8f35ac530ce-tls-secret\") pod \"logging-loki-gateway-74c87577db-fjtpt\" (UID: \"867e058e-8774-4ff8-af99-a8f35ac530ce\") " pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.779229 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/70d5f51c-1a87-45fb-8822-7aa0997fceb1-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"70d5f51c-1a87-45fb-8822-7aa0997fceb1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.782586 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/70d5f51c-1a87-45fb-8822-7aa0997fceb1-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"70d5f51c-1a87-45fb-8822-7aa0997fceb1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.784921 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/70d5f51c-1a87-45fb-8822-7aa0997fceb1-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"70d5f51c-1a87-45fb-8822-7aa0997fceb1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.794433 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.794493 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8dd1cdd5-4eda-4aaa-a480-8f76a92b6f89\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8dd1cdd5-4eda-4aaa-a480-8f76a92b6f89\") pod \"logging-loki-compactor-0\" (UID: \"70d5f51c-1a87-45fb-8822-7aa0997fceb1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8cd9720013b6198c46d59e577f02a067cd27a77b5e6ba68bce6b802ff9e8e2d9/globalmount\"" pod="openshift-logging/logging-loki-compactor-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.798052 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhsh7\" (UniqueName: \"kubernetes.io/projected/70d5f51c-1a87-45fb-8822-7aa0997fceb1-kube-api-access-fhsh7\") pod \"logging-loki-compactor-0\" (UID: \"70d5f51c-1a87-45fb-8822-7aa0997fceb1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.835181 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-9kd6b"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.846066 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8dd1cdd5-4eda-4aaa-a480-8f76a92b6f89\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8dd1cdd5-4eda-4aaa-a480-8f76a92b6f89\") pod \"logging-loki-compactor-0\" (UID: \"70d5f51c-1a87-45fb-8822-7aa0997fceb1\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.846057 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.871003 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-40749a28-0b78-4877-a064-a246ccd730da\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40749a28-0b78-4877-a064-a246ccd730da\") pod \"logging-loki-index-gateway-0\" (UID: \"efadb8be-37d4-4e2b-9df2-3d1301ae81a8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.871118 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efadb8be-37d4-4e2b-9df2-3d1301ae81a8-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"efadb8be-37d4-4e2b-9df2-3d1301ae81a8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.871156 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/efadb8be-37d4-4e2b-9df2-3d1301ae81a8-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"efadb8be-37d4-4e2b-9df2-3d1301ae81a8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.871183 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/efadb8be-37d4-4e2b-9df2-3d1301ae81a8-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"efadb8be-37d4-4e2b-9df2-3d1301ae81a8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.871208 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5n5d\" (UniqueName: \"kubernetes.io/projected/efadb8be-37d4-4e2b-9df2-3d1301ae81a8-kube-api-access-f5n5d\") pod \"logging-loki-index-gateway-0\" (UID: \"efadb8be-37d4-4e2b-9df2-3d1301ae81a8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.871235 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efadb8be-37d4-4e2b-9df2-3d1301ae81a8-config\") pod \"logging-loki-index-gateway-0\" (UID: \"efadb8be-37d4-4e2b-9df2-3d1301ae81a8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.871306 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/efadb8be-37d4-4e2b-9df2-3d1301ae81a8-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"efadb8be-37d4-4e2b-9df2-3d1301ae81a8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.872510 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efadb8be-37d4-4e2b-9df2-3d1301ae81a8-config\") pod \"logging-loki-index-gateway-0\" (UID: \"efadb8be-37d4-4e2b-9df2-3d1301ae81a8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.873880 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.873928 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-40749a28-0b78-4877-a064-a246ccd730da\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40749a28-0b78-4877-a064-a246ccd730da\") pod \"logging-loki-index-gateway-0\" (UID: \"efadb8be-37d4-4e2b-9df2-3d1301ae81a8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e05c48c2c4dc39baaac454961ff3f9918d3988515f4780911778a043a5e8c07c/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.873987 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efadb8be-37d4-4e2b-9df2-3d1301ae81a8-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"efadb8be-37d4-4e2b-9df2-3d1301ae81a8\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.876042 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/efadb8be-37d4-4e2b-9df2-3d1301ae81a8-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"efadb8be-37d4-4e2b-9df2-3d1301ae81a8\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.876699 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/efadb8be-37d4-4e2b-9df2-3d1301ae81a8-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"efadb8be-37d4-4e2b-9df2-3d1301ae81a8\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.876981 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/efadb8be-37d4-4e2b-9df2-3d1301ae81a8-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"efadb8be-37d4-4e2b-9df2-3d1301ae81a8\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.894191 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5n5d\" (UniqueName: \"kubernetes.io/projected/efadb8be-37d4-4e2b-9df2-3d1301ae81a8-kube-api-access-f5n5d\") pod \"logging-loki-index-gateway-0\" (UID: \"efadb8be-37d4-4e2b-9df2-3d1301ae81a8\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.905680 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-40749a28-0b78-4877-a064-a246ccd730da\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40749a28-0b78-4877-a064-a246ccd730da\") pod \"logging-loki-index-gateway-0\" (UID: \"efadb8be-37d4-4e2b-9df2-3d1301ae81a8\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.909073 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" Jan 31 09:14:56 crc kubenswrapper[4830]: I0131 09:14:56.968241 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 31 09:14:57 crc kubenswrapper[4830]: I0131 09:14:57.065704 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Jan 31 09:14:57 crc kubenswrapper[4830]: I0131 09:14:57.207015 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-74c87577db-fjtpt"] Jan 31 09:14:57 crc kubenswrapper[4830]: I0131 09:14:57.218593 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-f89hf" event={"ID":"8aa52b7a-444c-4f07-9c3a-c2223e966e34","Type":"ContainerStarted","Data":"6355ded444c4a5a7c7205b06852a52987ced5431318b2c6c2218ab6fdbebfc6e"} Jan 31 09:14:57 crc kubenswrapper[4830]: I0131 09:14:57.220284 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn" event={"ID":"6a2f00bb-9954-46d0-901b-3d9a82939850","Type":"ContainerStarted","Data":"4ee150ee3d24c15cb9b8d3cf0cb7fa34dd4ad7ef0ce209931f59f28bd7a2b30e"} Jan 31 09:14:57 crc kubenswrapper[4830]: I0131 09:14:57.269270 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 31 09:14:57 crc kubenswrapper[4830]: I0131 09:14:57.291977 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-74c87577db-hwvhd"] Jan 31 09:14:57 crc kubenswrapper[4830]: W0131 09:14:57.312754 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd432483_7467_4c9d_a13e_8ee908a8ed2b.slice/crio-c980823e6c007f3332b349c3a875b97ba6687ddf249b744b8049b41ac816a791 WatchSource:0}: Error finding container c980823e6c007f3332b349c3a875b97ba6687ddf249b744b8049b41ac816a791: Status 404 returned error can't find the container with id c980823e6c007f3332b349c3a875b97ba6687ddf249b744b8049b41ac816a791 Jan 31 09:14:57 crc kubenswrapper[4830]: I0131 09:14:57.383579 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 31 09:14:57 crc kubenswrapper[4830]: W0131 09:14:57.386147 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefadb8be_37d4_4e2b_9df2_3d1301ae81a8.slice/crio-2172d4667d938475667757b214a15854df817edc0b91056a6229957aa9d6c83e WatchSource:0}: Error finding container 2172d4667d938475667757b214a15854df817edc0b91056a6229957aa9d6c83e: Status 404 returned error can't find the container with id 2172d4667d938475667757b214a15854df817edc0b91056a6229957aa9d6c83e Jan 31 09:14:57 crc kubenswrapper[4830]: I0131 09:14:57.415898 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Jan 31 09:14:57 crc kubenswrapper[4830]: I0131 09:14:57.423902 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/07a77a4a-344b-45bb-8488-a536a94185b1-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") " pod="openshift-logging/logging-loki-ingester-0" Jan 31 09:14:57 crc kubenswrapper[4830]: E0131 09:14:57.668048 4830 secret.go:188] Couldn't get secret openshift-logging/logging-loki-ingester-http: failed to sync secret cache: timed out waiting for the condition Jan 31 09:14:57 crc kubenswrapper[4830]: 
Jan 31 09:14:57 crc kubenswrapper[4830]: I0131 09:14:57.926833 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http"
Jan 31 09:14:58 crc kubenswrapper[4830]: I0131 09:14:58.195442 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/07a77a4a-344b-45bb-8488-a536a94185b1-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 31 09:14:58 crc kubenswrapper[4830]: I0131 09:14:58.204999 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/07a77a4a-344b-45bb-8488-a536a94185b1-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"07a77a4a-344b-45bb-8488-a536a94185b1\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 31 09:14:58 crc kubenswrapper[4830]: I0131 09:14:58.230406 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"efadb8be-37d4-4e2b-9df2-3d1301ae81a8","Type":"ContainerStarted","Data":"2172d4667d938475667757b214a15854df817edc0b91056a6229957aa9d6c83e"}
Jan 31 09:14:58 crc kubenswrapper[4830]: I0131 09:14:58.231707 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" event={"ID":"867e058e-8774-4ff8-af99-a8f35ac530ce","Type":"ContainerStarted","Data":"e2f12eb902a60f4bdac3c56ceeeb8a03843afd105cf65ad5650dd0d5d152a0e4"}
Jan 31 09:14:58 crc kubenswrapper[4830]: I0131 09:14:58.246480 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"70d5f51c-1a87-45fb-8822-7aa0997fceb1","Type":"ContainerStarted","Data":"cb4627bd103e98a1ab25be5ce874b59a468368d248c4e85cbd9daddb5f7f466b"}
Jan 31 09:14:58 crc kubenswrapper[4830]: I0131 09:14:58.249155 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" event={"ID":"fd432483-7467-4c9d-a13e-8ee908a8ed2b","Type":"ContainerStarted","Data":"c980823e6c007f3332b349c3a875b97ba6687ddf249b744b8049b41ac816a791"}
Jan 31 09:14:58 crc kubenswrapper[4830]: I0131 09:14:58.299550 4830 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 31 09:14:58 crc kubenswrapper[4830]: I0131 09:14:58.812380 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 31 09:14:59 crc kubenswrapper[4830]: I0131 09:14:59.278127 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"07a77a4a-344b-45bb-8488-a536a94185b1","Type":"ContainerStarted","Data":"2d688ca4e939b735103177433c76d9336879958b707c64d49d018dc71789a766"} Jan 31 09:15:00 crc kubenswrapper[4830]: I0131 09:15:00.157592 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497515-qx8rx"] Jan 31 09:15:00 crc kubenswrapper[4830]: I0131 09:15:00.159077 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-qx8rx" Jan 31 09:15:00 crc kubenswrapper[4830]: I0131 09:15:00.164139 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 09:15:00 crc kubenswrapper[4830]: I0131 09:15:00.164293 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 09:15:00 crc kubenswrapper[4830]: I0131 09:15:00.164746 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497515-qx8rx"] Jan 31 09:15:00 crc kubenswrapper[4830]: I0131 09:15:00.253390 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3e9b9ae0-0c92-4992-b4cb-44bb51f84c45-secret-volume\") pod \"collect-profiles-29497515-qx8rx\" (UID: \"3e9b9ae0-0c92-4992-b4cb-44bb51f84c45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-qx8rx" Jan 31 09:15:00 crc kubenswrapper[4830]: I0131 09:15:00.253454 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e9b9ae0-0c92-4992-b4cb-44bb51f84c45-config-volume\") pod \"collect-profiles-29497515-qx8rx\" (UID: \"3e9b9ae0-0c92-4992-b4cb-44bb51f84c45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-qx8rx" Jan 31 09:15:00 crc kubenswrapper[4830]: I0131 09:15:00.253486 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nbj7\" (UniqueName: \"kubernetes.io/projected/3e9b9ae0-0c92-4992-b4cb-44bb51f84c45-kube-api-access-5nbj7\") pod \"collect-profiles-29497515-qx8rx\" (UID: \"3e9b9ae0-0c92-4992-b4cb-44bb51f84c45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-qx8rx" Jan 31 09:15:00 crc kubenswrapper[4830]: I0131 09:15:00.354374 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3e9b9ae0-0c92-4992-b4cb-44bb51f84c45-secret-volume\") pod \"collect-profiles-29497515-qx8rx\" (UID: \"3e9b9ae0-0c92-4992-b4cb-44bb51f84c45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-qx8rx" Jan 31 09:15:00 crc kubenswrapper[4830]: I0131 09:15:00.354442 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/3e9b9ae0-0c92-4992-b4cb-44bb51f84c45-config-volume\") pod \"collect-profiles-29497515-qx8rx\" (UID: \"3e9b9ae0-0c92-4992-b4cb-44bb51f84c45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-qx8rx" Jan 31 09:15:00 crc kubenswrapper[4830]: I0131 09:15:00.354467 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nbj7\" (UniqueName: \"kubernetes.io/projected/3e9b9ae0-0c92-4992-b4cb-44bb51f84c45-kube-api-access-5nbj7\") pod \"collect-profiles-29497515-qx8rx\" (UID: \"3e9b9ae0-0c92-4992-b4cb-44bb51f84c45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-qx8rx" Jan 31 09:15:00 crc kubenswrapper[4830]: I0131 09:15:00.356504 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e9b9ae0-0c92-4992-b4cb-44bb51f84c45-config-volume\") pod \"collect-profiles-29497515-qx8rx\" (UID: \"3e9b9ae0-0c92-4992-b4cb-44bb51f84c45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-qx8rx" Jan 31 09:15:00 crc kubenswrapper[4830]: I0131 09:15:00.380795 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3e9b9ae0-0c92-4992-b4cb-44bb51f84c45-secret-volume\") pod \"collect-profiles-29497515-qx8rx\" (UID: \"3e9b9ae0-0c92-4992-b4cb-44bb51f84c45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-qx8rx" Jan 31 09:15:00 crc kubenswrapper[4830]: I0131 09:15:00.384640 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nbj7\" (UniqueName: \"kubernetes.io/projected/3e9b9ae0-0c92-4992-b4cb-44bb51f84c45-kube-api-access-5nbj7\") pod \"collect-profiles-29497515-qx8rx\" (UID: \"3e9b9ae0-0c92-4992-b4cb-44bb51f84c45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-qx8rx" Jan 31 09:15:00 crc kubenswrapper[4830]: I0131 09:15:00.489664 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-qx8rx" Jan 31 09:15:01 crc kubenswrapper[4830]: I0131 09:15:01.082003 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497515-qx8rx"] Jan 31 09:15:01 crc kubenswrapper[4830]: W0131 09:15:01.120181 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e9b9ae0_0c92_4992_b4cb_44bb51f84c45.slice/crio-e8fc11247d3809f97ecab794cba99ece315b60700ffd2c610cbca8d9440946a1 WatchSource:0}: Error finding container e8fc11247d3809f97ecab794cba99ece315b60700ffd2c610cbca8d9440946a1: Status 404 returned error can't find the container with id e8fc11247d3809f97ecab794cba99ece315b60700ffd2c610cbca8d9440946a1 Jan 31 09:15:01 crc kubenswrapper[4830]: I0131 09:15:01.303364 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"efadb8be-37d4-4e2b-9df2-3d1301ae81a8","Type":"ContainerStarted","Data":"b3d7985851e67d510bc4cde5b01a5ef748aaafb5ba12e14f2290ae6c17336b3c"} Jan 31 09:15:01 crc kubenswrapper[4830]: I0131 09:15:01.304926 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Jan 31 09:15:01 crc kubenswrapper[4830]: I0131 09:15:01.313708 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-f89hf" event={"ID":"8aa52b7a-444c-4f07-9c3a-c2223e966e34","Type":"ContainerStarted","Data":"8c1e8de624ff535daf3fe498ca38b3594ac8e94283312a512bea1f1ba9886f6e"} Jan 31 09:15:01 crc kubenswrapper[4830]: I0131 09:15:01.315489 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76788598db-f89hf" Jan 31 09:15:01 crc kubenswrapper[4830]: I0131 09:15:01.316421 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-qx8rx" event={"ID":"3e9b9ae0-0c92-4992-b4cb-44bb51f84c45","Type":"ContainerStarted","Data":"e8fc11247d3809f97ecab794cba99ece315b60700ffd2c610cbca8d9440946a1"} Jan 31 09:15:01 crc kubenswrapper[4830]: I0131 09:15:01.317747 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"70d5f51c-1a87-45fb-8822-7aa0997fceb1","Type":"ContainerStarted","Data":"85feaa17a4f81acd6abf3c65a259953c67fb92dfd4d49ac4bdbd8f251b05a733"} Jan 31 09:15:01 crc kubenswrapper[4830]: I0131 09:15:01.319751 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Jan 31 09:15:01 crc kubenswrapper[4830]: I0131 09:15:01.332181 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=2.792479187 podStartE2EDuration="6.332155319s" podCreationTimestamp="2026-01-31 09:14:55 +0000 UTC" firstStartedPulling="2026-01-31 09:14:57.388272192 +0000 UTC m=+841.881634634" lastFinishedPulling="2026-01-31 09:15:00.927948324 +0000 UTC m=+845.421310766" observedRunningTime="2026-01-31 09:15:01.327985709 +0000 UTC m=+845.821348161" watchObservedRunningTime="2026-01-31 09:15:01.332155319 +0000 UTC m=+845.825517761" Jan 31 09:15:01 crc kubenswrapper[4830]: I0131 09:15:01.388075 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" 
podStartSLOduration=2.802636814 podStartE2EDuration="6.388053985s" podCreationTimestamp="2026-01-31 09:14:55 +0000 UTC" firstStartedPulling="2026-01-31 09:14:57.280295414 +0000 UTC m=+841.773657856" lastFinishedPulling="2026-01-31 09:15:00.865712585 +0000 UTC m=+845.359075027" observedRunningTime="2026-01-31 09:15:01.385040939 +0000 UTC m=+845.878403381" watchObservedRunningTime="2026-01-31 09:15:01.388053985 +0000 UTC m=+845.881416427"
Jan 31 09:15:01 crc kubenswrapper[4830]: I0131 09:15:01.389785 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76788598db-f89hf" podStartSLOduration=2.011279879 podStartE2EDuration="6.389774725s" podCreationTimestamp="2026-01-31 09:14:55 +0000 UTC" firstStartedPulling="2026-01-31 09:14:56.552871016 +0000 UTC m=+841.046233458" lastFinishedPulling="2026-01-31 09:15:00.931365862 +0000 UTC m=+845.424728304" observedRunningTime="2026-01-31 09:15:01.352870494 +0000 UTC m=+845.846232936" watchObservedRunningTime="2026-01-31 09:15:01.389774725 +0000 UTC m=+845.883137177"
Jan 31 09:15:02 crc kubenswrapper[4830]: I0131 09:15:02.349630 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn" event={"ID":"6a2f00bb-9954-46d0-901b-3d9a82939850","Type":"ContainerStarted","Data":"a4d5d47030f020d46d16a60e04c3c961bb1772b580b668862da22bb1ca62214b"}
Jan 31 09:15:02 crc kubenswrapper[4830]: I0131 09:15:02.350122 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn"
Jan 31 09:15:02 crc kubenswrapper[4830]: I0131 09:15:02.353324 4830 generic.go:334] "Generic (PLEG): container finished" podID="3e9b9ae0-0c92-4992-b4cb-44bb51f84c45" containerID="ecaf8ff9f3fbe10cd06a1d1a00ef6ef6c4ccadc02179c3b8c2fd96256ecaeb80" exitCode=0
Jan 31 09:15:02 crc kubenswrapper[4830]: I0131 09:15:02.353444 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-qx8rx" event={"ID":"3e9b9ae0-0c92-4992-b4cb-44bb51f84c45","Type":"ContainerDied","Data":"ecaf8ff9f3fbe10cd06a1d1a00ef6ef6c4ccadc02179c3b8c2fd96256ecaeb80"}
Jan 31 09:15:02 crc kubenswrapper[4830]: I0131 09:15:02.355314 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" event={"ID":"867e058e-8774-4ff8-af99-a8f35ac530ce","Type":"ContainerStarted","Data":"34747d4042ae43f6baf5e1b7adccd67769d705bfb249253e1d0ecd5a4834f78d"}
Jan 31 09:15:02 crc kubenswrapper[4830]: I0131 09:15:02.356925 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"07a77a4a-344b-45bb-8488-a536a94185b1","Type":"ContainerStarted","Data":"c5af1d14f0328c067ede449219e989b108665ba20896fee79804f8d6f4d6dcd5"}
Jan 31 09:15:02 crc kubenswrapper[4830]: I0131 09:15:02.358599 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" event={"ID":"fd432483-7467-4c9d-a13e-8ee908a8ed2b","Type":"ContainerStarted","Data":"a37251f41393d369e912002e3108f07333e4174fc15602c7d38e4f1af878cbdd"}
Jan 31 09:15:02 crc kubenswrapper[4830]: I0131 09:15:02.360466 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc" event={"ID":"e5b91203-480c-424e-877a-5f2f437d1ada","Type":"ContainerStarted","Data":"ba0362ca5607278fa5ecbbe12c5a3f0858fe0e9ac8360e71d48764c6ce18f8b4"}
Jan 31 09:15:02 crc kubenswrapper[4830]: I0131 09:15:02.360987 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc"
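The pod_startup_latency_tracker entries encode a small calculation worth making explicit: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is consistent with that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). For the compactor: 6.388053985s minus 3.585417171s of pulling gives 2.802636814s, matching the logged value. The relationship is inferred from the numbers here, not from kubelet documentation. A check in Go using the compactor's timestamps:

```go
// Sketch: reproducing the compactor's startup-duration figures from the
// timestamps in the entry above.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-31 09:14:55 +0000 UTC")
	firstPull := mustParse("2026-01-31 09:14:57.280295414 +0000 UTC")
	lastPull := mustParse("2026-01-31 09:15:00.865712585 +0000 UTC")
	observed := mustParse("2026-01-31 09:15:01.388053985 +0000 UTC")

	e2e := observed.Sub(created)         // 6.388053985s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 2.802636814s = podStartSLOduration
	fmt.Println(e2e, slo)
}
```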
Jan 31 09:15:02 crc kubenswrapper[4830]: I0131 09:15:02.375547 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn" podStartSLOduration=3.013737668 podStartE2EDuration="7.375511001s" podCreationTimestamp="2026-01-31 09:14:55 +0000 UTC" firstStartedPulling="2026-01-31 09:14:56.594439593 +0000 UTC m=+841.087802035" lastFinishedPulling="2026-01-31 09:15:00.956212926 +0000 UTC m=+845.449575368" observedRunningTime="2026-01-31 09:15:02.370549069 +0000 UTC m=+846.863911551" watchObservedRunningTime="2026-01-31 09:15:02.375511001 +0000 UTC m=+846.868873443"
Jan 31 09:15:02 crc kubenswrapper[4830]: I0131 09:15:02.473135 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc" podStartSLOduration=2.690341164 podStartE2EDuration="7.473117135s" podCreationTimestamp="2026-01-31 09:14:55 +0000 UTC" firstStartedPulling="2026-01-31 09:14:56.193024418 +0000 UTC m=+840.686386860" lastFinishedPulling="2026-01-31 09:15:00.975800389 +0000 UTC m=+845.469162831" observedRunningTime="2026-01-31 09:15:02.436938285 +0000 UTC m=+846.930300727" watchObservedRunningTime="2026-01-31 09:15:02.473117135 +0000 UTC m=+846.966479577"
Jan 31 09:15:02 crc kubenswrapper[4830]: I0131 09:15:02.473893 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=5.363432826 podStartE2EDuration="7.473884387s" podCreationTimestamp="2026-01-31 09:14:55 +0000 UTC" firstStartedPulling="2026-01-31 09:14:58.851587612 +0000 UTC m=+843.344950044" lastFinishedPulling="2026-01-31 09:15:00.962039163 +0000 UTC m=+845.455401605" observedRunningTime="2026-01-31 09:15:02.469211183 +0000 UTC m=+846.962573625" watchObservedRunningTime="2026-01-31 09:15:02.473884387 +0000 UTC m=+846.967246839"
Jan 31 09:15:03 crc kubenswrapper[4830]: I0131 09:15:03.380335 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0"
Jan 31 09:15:03 crc kubenswrapper[4830]: I0131 09:15:03.764589 4830 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-qx8rx" Jan 31 09:15:03 crc kubenswrapper[4830]: I0131 09:15:03.929174 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e9b9ae0-0c92-4992-b4cb-44bb51f84c45-config-volume\") pod \"3e9b9ae0-0c92-4992-b4cb-44bb51f84c45\" (UID: \"3e9b9ae0-0c92-4992-b4cb-44bb51f84c45\") " Jan 31 09:15:03 crc kubenswrapper[4830]: I0131 09:15:03.929239 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3e9b9ae0-0c92-4992-b4cb-44bb51f84c45-secret-volume\") pod \"3e9b9ae0-0c92-4992-b4cb-44bb51f84c45\" (UID: \"3e9b9ae0-0c92-4992-b4cb-44bb51f84c45\") " Jan 31 09:15:03 crc kubenswrapper[4830]: I0131 09:15:03.929349 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nbj7\" (UniqueName: \"kubernetes.io/projected/3e9b9ae0-0c92-4992-b4cb-44bb51f84c45-kube-api-access-5nbj7\") pod \"3e9b9ae0-0c92-4992-b4cb-44bb51f84c45\" (UID: \"3e9b9ae0-0c92-4992-b4cb-44bb51f84c45\") " Jan 31 09:15:03 crc kubenswrapper[4830]: I0131 09:15:03.930375 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e9b9ae0-0c92-4992-b4cb-44bb51f84c45-config-volume" (OuterVolumeSpecName: "config-volume") pod "3e9b9ae0-0c92-4992-b4cb-44bb51f84c45" (UID: "3e9b9ae0-0c92-4992-b4cb-44bb51f84c45"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:15:03 crc kubenswrapper[4830]: I0131 09:15:03.939314 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e9b9ae0-0c92-4992-b4cb-44bb51f84c45-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3e9b9ae0-0c92-4992-b4cb-44bb51f84c45" (UID: "3e9b9ae0-0c92-4992-b4cb-44bb51f84c45"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:15:03 crc kubenswrapper[4830]: I0131 09:15:03.939879 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e9b9ae0-0c92-4992-b4cb-44bb51f84c45-kube-api-access-5nbj7" (OuterVolumeSpecName: "kube-api-access-5nbj7") pod "3e9b9ae0-0c92-4992-b4cb-44bb51f84c45" (UID: "3e9b9ae0-0c92-4992-b4cb-44bb51f84c45"). InnerVolumeSpecName "kube-api-access-5nbj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:15:04 crc kubenswrapper[4830]: I0131 09:15:04.031957 4830 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e9b9ae0-0c92-4992-b4cb-44bb51f84c45-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 09:15:04 crc kubenswrapper[4830]: I0131 09:15:04.032262 4830 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3e9b9ae0-0c92-4992-b4cb-44bb51f84c45-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 09:15:04 crc kubenswrapper[4830]: I0131 09:15:04.032324 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nbj7\" (UniqueName: \"kubernetes.io/projected/3e9b9ae0-0c92-4992-b4cb-44bb51f84c45-kube-api-access-5nbj7\") on node \"crc\" DevicePath \"\"" Jan 31 09:15:04 crc kubenswrapper[4830]: I0131 09:15:04.411348 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" event={"ID":"fd432483-7467-4c9d-a13e-8ee908a8ed2b","Type":"ContainerStarted","Data":"b3284923c8cdc2a89a516dbf3aad34c562624a30128423f12d1f5097e776e98f"} Jan 31 09:15:04 crc kubenswrapper[4830]: I0131 09:15:04.412394 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" Jan 31 09:15:04 crc kubenswrapper[4830]: I0131 09:15:04.412431 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" Jan 31 09:15:04 crc kubenswrapper[4830]: I0131 09:15:04.412988 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-qx8rx" event={"ID":"3e9b9ae0-0c92-4992-b4cb-44bb51f84c45","Type":"ContainerDied","Data":"e8fc11247d3809f97ecab794cba99ece315b60700ffd2c610cbca8d9440946a1"} Jan 31 09:15:04 crc kubenswrapper[4830]: I0131 09:15:04.413037 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8fc11247d3809f97ecab794cba99ece315b60700ffd2c610cbca8d9440946a1" Jan 31 09:15:04 crc kubenswrapper[4830]: I0131 09:15:04.413157 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-qx8rx" Jan 31 09:15:04 crc kubenswrapper[4830]: I0131 09:15:04.418819 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" event={"ID":"867e058e-8774-4ff8-af99-a8f35ac530ce","Type":"ContainerStarted","Data":"e85584953254080065809c686489aaf4af49644b7b4277c488b9a911f8031951"} Jan 31 09:15:04 crc kubenswrapper[4830]: I0131 09:15:04.421050 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" Jan 31 09:15:04 crc kubenswrapper[4830]: I0131 09:15:04.422045 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" Jan 31 09:15:04 crc kubenswrapper[4830]: I0131 09:15:04.436334 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" Jan 31 09:15:04 crc kubenswrapper[4830]: I0131 09:15:04.437407 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" Jan 31 09:15:04 crc kubenswrapper[4830]: I0131 09:15:04.443463 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" podStartSLOduration=2.987601592 podStartE2EDuration="9.443442445s" podCreationTimestamp="2026-01-31 09:14:55 +0000 UTC" firstStartedPulling="2026-01-31 09:14:57.317356481 +0000 UTC m=+841.810718923" lastFinishedPulling="2026-01-31 09:15:03.773197334 +0000 UTC m=+848.266559776" observedRunningTime="2026-01-31 09:15:04.44154323 +0000 UTC m=+848.934905682" watchObservedRunningTime="2026-01-31 09:15:04.443442445 +0000 UTC m=+848.936804887" Jan 31 09:15:04 crc kubenswrapper[4830]: I0131 09:15:04.448988 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" Jan 31 09:15:04 crc kubenswrapper[4830]: I0131 09:15:04.452389 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" Jan 31 09:15:04 crc kubenswrapper[4830]: I0131 09:15:04.467228 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" podStartSLOduration=2.922736273 podStartE2EDuration="9.467201158s" podCreationTimestamp="2026-01-31 09:14:55 +0000 UTC" firstStartedPulling="2026-01-31 09:14:57.223046466 +0000 UTC m=+841.716408918" lastFinishedPulling="2026-01-31 09:15:03.767511361 +0000 UTC m=+848.260873803" observedRunningTime="2026-01-31 09:15:04.46449247 +0000 UTC m=+848.957854912" watchObservedRunningTime="2026-01-31 09:15:04.467201158 +0000 UTC m=+848.960563600" Jan 31 09:15:16 crc kubenswrapper[4830]: I0131 09:15:16.976636 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Jan 31 09:15:17 crc kubenswrapper[4830]: I0131 09:15:17.075120 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Jan 31 09:15:18 crc kubenswrapper[4830]: I0131 09:15:18.307652 4830 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not 
Jan 31 09:15:18 crc kubenswrapper[4830]: I0131 09:15:18.309096 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="07a77a4a-344b-45bb-8488-a536a94185b1" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 31 09:15:25 crc kubenswrapper[4830]: I0131 09:15:25.676155 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc"
Jan 31 09:15:25 crc kubenswrapper[4830]: I0131 09:15:25.813760 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76788598db-f89hf"
Jan 31 09:15:25 crc kubenswrapper[4830]: I0131 09:15:25.944034 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn"
Jan 31 09:15:28 crc kubenswrapper[4830]: I0131 09:15:28.308394 4830 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens
Jan 31 09:15:28 crc kubenswrapper[4830]: I0131 09:15:28.308465 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="07a77a4a-344b-45bb-8488-a536a94185b1" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 31 09:15:38 crc kubenswrapper[4830]: I0131 09:15:38.304371 4830 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready
Jan 31 09:15:38 crc kubenswrapper[4830]: I0131 09:15:38.305193 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="07a77a4a-344b-45bb-8488-a536a94185b1" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 31 09:15:44 crc kubenswrapper[4830]: I0131 09:15:44.353968 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 09:15:44 crc kubenswrapper[4830]: I0131 09:15:44.354666 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 09:15:48 crc kubenswrapper[4830]: I0131 09:15:48.304493 4830 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready
Jan 31 09:15:48 crc kubenswrapper[4830]: I0131 09:15:48.304978 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="07a77a4a-344b-45bb-8488-a536a94185b1"
containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 09:15:58 crc kubenswrapper[4830]: I0131 09:15:58.311416 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Jan 31 09:16:13 crc kubenswrapper[4830]: I0131 09:16:13.986922 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-84rwf"] Jan 31 09:16:13 crc kubenswrapper[4830]: E0131 09:16:13.988768 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e9b9ae0-0c92-4992-b4cb-44bb51f84c45" containerName="collect-profiles" Jan 31 09:16:13 crc kubenswrapper[4830]: I0131 09:16:13.988791 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e9b9ae0-0c92-4992-b4cb-44bb51f84c45" containerName="collect-profiles" Jan 31 09:16:13 crc kubenswrapper[4830]: I0131 09:16:13.989417 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e9b9ae0-0c92-4992-b4cb-44bb51f84c45" containerName="collect-profiles" Jan 31 09:16:13 crc kubenswrapper[4830]: I0131 09:16:13.993821 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-84rwf" Jan 31 09:16:14 crc kubenswrapper[4830]: I0131 09:16:14.025215 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-84rwf"] Jan 31 09:16:14 crc kubenswrapper[4830]: I0131 09:16:14.075419 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lw2v\" (UniqueName: \"kubernetes.io/projected/f6798858-8042-4711-94f4-e021fb446569-kube-api-access-6lw2v\") pod \"certified-operators-84rwf\" (UID: \"f6798858-8042-4711-94f4-e021fb446569\") " pod="openshift-marketplace/certified-operators-84rwf" Jan 31 09:16:14 crc kubenswrapper[4830]: I0131 09:16:14.075478 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6798858-8042-4711-94f4-e021fb446569-utilities\") pod \"certified-operators-84rwf\" (UID: \"f6798858-8042-4711-94f4-e021fb446569\") " pod="openshift-marketplace/certified-operators-84rwf" Jan 31 09:16:14 crc kubenswrapper[4830]: I0131 09:16:14.075718 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6798858-8042-4711-94f4-e021fb446569-catalog-content\") pod \"certified-operators-84rwf\" (UID: \"f6798858-8042-4711-94f4-e021fb446569\") " pod="openshift-marketplace/certified-operators-84rwf" Jan 31 09:16:14 crc kubenswrapper[4830]: I0131 09:16:14.178219 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6798858-8042-4711-94f4-e021fb446569-catalog-content\") pod \"certified-operators-84rwf\" (UID: \"f6798858-8042-4711-94f4-e021fb446569\") " pod="openshift-marketplace/certified-operators-84rwf" Jan 31 09:16:14 crc kubenswrapper[4830]: I0131 09:16:14.178342 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lw2v\" (UniqueName: \"kubernetes.io/projected/f6798858-8042-4711-94f4-e021fb446569-kube-api-access-6lw2v\") pod \"certified-operators-84rwf\" (UID: \"f6798858-8042-4711-94f4-e021fb446569\") " pod="openshift-marketplace/certified-operators-84rwf" Jan 31 09:16:14 crc kubenswrapper[4830]: I0131 09:16:14.178372 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6798858-8042-4711-94f4-e021fb446569-utilities\") pod \"certified-operators-84rwf\" (UID: \"f6798858-8042-4711-94f4-e021fb446569\") " pod="openshift-marketplace/certified-operators-84rwf" Jan 31 09:16:14 crc kubenswrapper[4830]: I0131 09:16:14.179122 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6798858-8042-4711-94f4-e021fb446569-catalog-content\") pod \"certified-operators-84rwf\" (UID: \"f6798858-8042-4711-94f4-e021fb446569\") " pod="openshift-marketplace/certified-operators-84rwf" Jan 31 09:16:14 crc kubenswrapper[4830]: I0131 09:16:14.179182 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6798858-8042-4711-94f4-e021fb446569-utilities\") pod \"certified-operators-84rwf\" (UID: \"f6798858-8042-4711-94f4-e021fb446569\") " pod="openshift-marketplace/certified-operators-84rwf" Jan 31 09:16:14 crc kubenswrapper[4830]: I0131 09:16:14.206347 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lw2v\" (UniqueName: \"kubernetes.io/projected/f6798858-8042-4711-94f4-e021fb446569-kube-api-access-6lw2v\") pod \"certified-operators-84rwf\" (UID: \"f6798858-8042-4711-94f4-e021fb446569\") " pod="openshift-marketplace/certified-operators-84rwf" Jan 31 09:16:14 crc kubenswrapper[4830]: I0131 09:16:14.353164 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:16:14 crc kubenswrapper[4830]: I0131 09:16:14.353236 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:16:14 crc kubenswrapper[4830]: I0131 09:16:14.373737 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-84rwf" Jan 31 09:16:14 crc kubenswrapper[4830]: I0131 09:16:14.752839 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-84rwf"] Jan 31 09:16:15 crc kubenswrapper[4830]: I0131 09:16:15.006976 4830 generic.go:334] "Generic (PLEG): container finished" podID="f6798858-8042-4711-94f4-e021fb446569" containerID="073ac8f99f72016026606f7d426eccc383bbe7303980d9b32e328dfd8b143d29" exitCode=0 Jan 31 09:16:15 crc kubenswrapper[4830]: I0131 09:16:15.007044 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84rwf" event={"ID":"f6798858-8042-4711-94f4-e021fb446569","Type":"ContainerDied","Data":"073ac8f99f72016026606f7d426eccc383bbe7303980d9b32e328dfd8b143d29"} Jan 31 09:16:15 crc kubenswrapper[4830]: I0131 09:16:15.007114 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84rwf" event={"ID":"f6798858-8042-4711-94f4-e021fb446569","Type":"ContainerStarted","Data":"8af9679ff2f9b8babeac2954a66dddd8ab8c7871cea75ec5da361514ba51b13e"} Jan 31 09:16:15 crc kubenswrapper[4830]: I0131 09:16:15.838941 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-dmnbq"] Jan 31 09:16:15 crc kubenswrapper[4830]: I0131 09:16:15.840587 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-dmnbq" Jan 31 09:16:15 crc kubenswrapper[4830]: I0131 09:16:15.844113 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-p96nq" Jan 31 09:16:15 crc kubenswrapper[4830]: I0131 09:16:15.844348 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Jan 31 09:16:15 crc kubenswrapper[4830]: I0131 09:16:15.844458 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Jan 31 09:16:15 crc kubenswrapper[4830]: I0131 09:16:15.846789 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Jan 31 09:16:15 crc kubenswrapper[4830]: I0131 09:16:15.848399 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Jan 31 09:16:15 crc kubenswrapper[4830]: I0131 09:16:15.854892 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Jan 31 09:16:15 crc kubenswrapper[4830]: I0131 09:16:15.860064 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-dmnbq"] Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.017320 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84rwf" event={"ID":"f6798858-8042-4711-94f4-e021fb446569","Type":"ContainerStarted","Data":"78b745af7afe2fd97986e1fafcffe38b5bd60b399ad2f8a83e2b87f497338159"} Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.028099 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/06a91787-cf41-4609-a0ae-dad37bf94a4f-metrics\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.028159 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"datadir\" (UniqueName: \"kubernetes.io/host-path/06a91787-cf41-4609-a0ae-dad37bf94a4f-datadir\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.028193 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/06a91787-cf41-4609-a0ae-dad37bf94a4f-config-openshift-service-cacrt\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.028219 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06a91787-cf41-4609-a0ae-dad37bf94a4f-trusted-ca\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.028257 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/06a91787-cf41-4609-a0ae-dad37bf94a4f-sa-token\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.028515 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06a91787-cf41-4609-a0ae-dad37bf94a4f-config\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.028556 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/06a91787-cf41-4609-a0ae-dad37bf94a4f-tmp\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.028703 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/06a91787-cf41-4609-a0ae-dad37bf94a4f-collector-token\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.028801 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/06a91787-cf41-4609-a0ae-dad37bf94a4f-collector-syslog-receiver\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.028913 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjr4v\" (UniqueName: \"kubernetes.io/projected/06a91787-cf41-4609-a0ae-dad37bf94a4f-kube-api-access-wjr4v\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.028995 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: 
\"kubernetes.io/configmap/06a91787-cf41-4609-a0ae-dad37bf94a4f-entrypoint\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.046346 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-dmnbq"] Jan 31 09:16:16 crc kubenswrapper[4830]: E0131 09:16:16.047096 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-wjr4v metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-dmnbq" podUID="06a91787-cf41-4609-a0ae-dad37bf94a4f" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.133831 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06a91787-cf41-4609-a0ae-dad37bf94a4f-config\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.133890 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/06a91787-cf41-4609-a0ae-dad37bf94a4f-tmp\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.133918 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/06a91787-cf41-4609-a0ae-dad37bf94a4f-collector-token\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.133942 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/06a91787-cf41-4609-a0ae-dad37bf94a4f-collector-syslog-receiver\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.133987 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjr4v\" (UniqueName: \"kubernetes.io/projected/06a91787-cf41-4609-a0ae-dad37bf94a4f-kube-api-access-wjr4v\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.134021 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/06a91787-cf41-4609-a0ae-dad37bf94a4f-entrypoint\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.134056 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/06a91787-cf41-4609-a0ae-dad37bf94a4f-metrics\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.134083 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: 
\"kubernetes.io/host-path/06a91787-cf41-4609-a0ae-dad37bf94a4f-datadir\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.134103 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/06a91787-cf41-4609-a0ae-dad37bf94a4f-config-openshift-service-cacrt\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.134125 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06a91787-cf41-4609-a0ae-dad37bf94a4f-trusted-ca\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.134152 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/06a91787-cf41-4609-a0ae-dad37bf94a4f-sa-token\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.135060 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06a91787-cf41-4609-a0ae-dad37bf94a4f-config\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.135393 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/06a91787-cf41-4609-a0ae-dad37bf94a4f-datadir\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.135860 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/06a91787-cf41-4609-a0ae-dad37bf94a4f-config-openshift-service-cacrt\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: E0131 09:16:16.135933 4830 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Jan 31 09:16:16 crc kubenswrapper[4830]: E0131 09:16:16.136004 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06a91787-cf41-4609-a0ae-dad37bf94a4f-metrics podName:06a91787-cf41-4609-a0ae-dad37bf94a4f nodeName:}" failed. No retries permitted until 2026-01-31 09:16:16.635974975 +0000 UTC m=+921.129337407 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/06a91787-cf41-4609-a0ae-dad37bf94a4f-metrics") pod "collector-dmnbq" (UID: "06a91787-cf41-4609-a0ae-dad37bf94a4f") : secret "collector-metrics" not found Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.136049 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/06a91787-cf41-4609-a0ae-dad37bf94a4f-entrypoint\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.137159 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06a91787-cf41-4609-a0ae-dad37bf94a4f-trusted-ca\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.148515 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/06a91787-cf41-4609-a0ae-dad37bf94a4f-collector-syslog-receiver\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.148960 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/06a91787-cf41-4609-a0ae-dad37bf94a4f-tmp\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.150522 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/06a91787-cf41-4609-a0ae-dad37bf94a4f-collector-token\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.155266 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjr4v\" (UniqueName: \"kubernetes.io/projected/06a91787-cf41-4609-a0ae-dad37bf94a4f-kube-api-access-wjr4v\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.166005 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/06a91787-cf41-4609-a0ae-dad37bf94a4f-sa-token\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.644610 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/06a91787-cf41-4609-a0ae-dad37bf94a4f-metrics\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:16 crc kubenswrapper[4830]: I0131 09:16:16.649504 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/06a91787-cf41-4609-a0ae-dad37bf94a4f-metrics\") pod \"collector-dmnbq\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " pod="openshift-logging/collector-dmnbq" Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.036188 4830 generic.go:334] 
"Generic (PLEG): container finished" podID="f6798858-8042-4711-94f4-e021fb446569" containerID="78b745af7afe2fd97986e1fafcffe38b5bd60b399ad2f8a83e2b87f497338159" exitCode=0 Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.036280 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84rwf" event={"ID":"f6798858-8042-4711-94f4-e021fb446569","Type":"ContainerDied","Data":"78b745af7afe2fd97986e1fafcffe38b5bd60b399ad2f8a83e2b87f497338159"} Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.036402 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-dmnbq" Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.049473 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-dmnbq" Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.155612 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/06a91787-cf41-4609-a0ae-dad37bf94a4f-tmp\") pod \"06a91787-cf41-4609-a0ae-dad37bf94a4f\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.155699 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/06a91787-cf41-4609-a0ae-dad37bf94a4f-collector-token\") pod \"06a91787-cf41-4609-a0ae-dad37bf94a4f\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.155761 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/06a91787-cf41-4609-a0ae-dad37bf94a4f-datadir\") pod \"06a91787-cf41-4609-a0ae-dad37bf94a4f\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.155827 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/06a91787-cf41-4609-a0ae-dad37bf94a4f-collector-syslog-receiver\") pod \"06a91787-cf41-4609-a0ae-dad37bf94a4f\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.155899 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjr4v\" (UniqueName: \"kubernetes.io/projected/06a91787-cf41-4609-a0ae-dad37bf94a4f-kube-api-access-wjr4v\") pod \"06a91787-cf41-4609-a0ae-dad37bf94a4f\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.155928 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/06a91787-cf41-4609-a0ae-dad37bf94a4f-entrypoint\") pod \"06a91787-cf41-4609-a0ae-dad37bf94a4f\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.155985 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06a91787-cf41-4609-a0ae-dad37bf94a4f-trusted-ca\") pod \"06a91787-cf41-4609-a0ae-dad37bf94a4f\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.156013 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/06a91787-cf41-4609-a0ae-dad37bf94a4f-metrics\") pod 
\"06a91787-cf41-4609-a0ae-dad37bf94a4f\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.156063 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06a91787-cf41-4609-a0ae-dad37bf94a4f-config\") pod \"06a91787-cf41-4609-a0ae-dad37bf94a4f\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.156121 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/06a91787-cf41-4609-a0ae-dad37bf94a4f-sa-token\") pod \"06a91787-cf41-4609-a0ae-dad37bf94a4f\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.156149 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/06a91787-cf41-4609-a0ae-dad37bf94a4f-config-openshift-service-cacrt\") pod \"06a91787-cf41-4609-a0ae-dad37bf94a4f\" (UID: \"06a91787-cf41-4609-a0ae-dad37bf94a4f\") " Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.156648 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06a91787-cf41-4609-a0ae-dad37bf94a4f-datadir" (OuterVolumeSpecName: "datadir") pod "06a91787-cf41-4609-a0ae-dad37bf94a4f" (UID: "06a91787-cf41-4609-a0ae-dad37bf94a4f"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.157710 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06a91787-cf41-4609-a0ae-dad37bf94a4f-config" (OuterVolumeSpecName: "config") pod "06a91787-cf41-4609-a0ae-dad37bf94a4f" (UID: "06a91787-cf41-4609-a0ae-dad37bf94a4f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.157906 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06a91787-cf41-4609-a0ae-dad37bf94a4f-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "06a91787-cf41-4609-a0ae-dad37bf94a4f" (UID: "06a91787-cf41-4609-a0ae-dad37bf94a4f"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.158511 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06a91787-cf41-4609-a0ae-dad37bf94a4f-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "06a91787-cf41-4609-a0ae-dad37bf94a4f" (UID: "06a91787-cf41-4609-a0ae-dad37bf94a4f"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.158878 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06a91787-cf41-4609-a0ae-dad37bf94a4f-tmp" (OuterVolumeSpecName: "tmp") pod "06a91787-cf41-4609-a0ae-dad37bf94a4f" (UID: "06a91787-cf41-4609-a0ae-dad37bf94a4f"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.158949 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06a91787-cf41-4609-a0ae-dad37bf94a4f-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "06a91787-cf41-4609-a0ae-dad37bf94a4f" (UID: "06a91787-cf41-4609-a0ae-dad37bf94a4f"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.160352 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a91787-cf41-4609-a0ae-dad37bf94a4f-collector-token" (OuterVolumeSpecName: "collector-token") pod "06a91787-cf41-4609-a0ae-dad37bf94a4f" (UID: "06a91787-cf41-4609-a0ae-dad37bf94a4f"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.160670 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06a91787-cf41-4609-a0ae-dad37bf94a4f-sa-token" (OuterVolumeSpecName: "sa-token") pod "06a91787-cf41-4609-a0ae-dad37bf94a4f" (UID: "06a91787-cf41-4609-a0ae-dad37bf94a4f"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.161105 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a91787-cf41-4609-a0ae-dad37bf94a4f-metrics" (OuterVolumeSpecName: "metrics") pod "06a91787-cf41-4609-a0ae-dad37bf94a4f" (UID: "06a91787-cf41-4609-a0ae-dad37bf94a4f"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.163231 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06a91787-cf41-4609-a0ae-dad37bf94a4f-kube-api-access-wjr4v" (OuterVolumeSpecName: "kube-api-access-wjr4v") pod "06a91787-cf41-4609-a0ae-dad37bf94a4f" (UID: "06a91787-cf41-4609-a0ae-dad37bf94a4f"). InnerVolumeSpecName "kube-api-access-wjr4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.164180 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a91787-cf41-4609-a0ae-dad37bf94a4f-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "06a91787-cf41-4609-a0ae-dad37bf94a4f" (UID: "06a91787-cf41-4609-a0ae-dad37bf94a4f"). InnerVolumeSpecName "collector-syslog-receiver". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.258884 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjr4v\" (UniqueName: \"kubernetes.io/projected/06a91787-cf41-4609-a0ae-dad37bf94a4f-kube-api-access-wjr4v\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.258928 4830 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/06a91787-cf41-4609-a0ae-dad37bf94a4f-entrypoint\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.258940 4830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06a91787-cf41-4609-a0ae-dad37bf94a4f-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.258950 4830 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/06a91787-cf41-4609-a0ae-dad37bf94a4f-metrics\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.258963 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06a91787-cf41-4609-a0ae-dad37bf94a4f-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.258974 4830 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/06a91787-cf41-4609-a0ae-dad37bf94a4f-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.258983 4830 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/06a91787-cf41-4609-a0ae-dad37bf94a4f-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.258996 4830 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/06a91787-cf41-4609-a0ae-dad37bf94a4f-tmp\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.259006 4830 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/06a91787-cf41-4609-a0ae-dad37bf94a4f-collector-token\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.259016 4830 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/06a91787-cf41-4609-a0ae-dad37bf94a4f-datadir\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:17 crc kubenswrapper[4830]: I0131 09:16:17.259027 4830 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/06a91787-cf41-4609-a0ae-dad37bf94a4f-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.046591 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-dmnbq" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.046591 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84rwf" event={"ID":"f6798858-8042-4711-94f4-e021fb446569","Type":"ContainerStarted","Data":"bff6a475deb5f446fe921003a1c21bfdcb97ba14ef28f451733ca1683405e6fe"} Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.069328 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-84rwf" podStartSLOduration=2.61733739 podStartE2EDuration="5.069308601s" podCreationTimestamp="2026-01-31 09:16:13 +0000 UTC" firstStartedPulling="2026-01-31 09:16:15.008896847 +0000 UTC m=+919.502259289" lastFinishedPulling="2026-01-31 09:16:17.460868048 +0000 UTC m=+921.954230500" observedRunningTime="2026-01-31 09:16:18.064783851 +0000 UTC m=+922.558146293" watchObservedRunningTime="2026-01-31 09:16:18.069308601 +0000 UTC m=+922.562671043" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.109231 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-dmnbq"] Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.114810 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-dmnbq"] Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.137447 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-mfmq7"] Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.138689 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.142095 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.141646 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.142335 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-p96nq" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.147109 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.147110 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.150010 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.162567 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-mfmq7"] Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.261426 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06a91787-cf41-4609-a0ae-dad37bf94a4f" path="/var/lib/kubelet/pods/06a91787-cf41-4609-a0ae-dad37bf94a4f/volumes" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.277905 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-trusted-ca\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 
09:16:18.277963 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-datadir\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.278042 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-sa-token\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.278069 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-entrypoint\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.278107 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-tmp\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.278141 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4brrp\" (UniqueName: \"kubernetes.io/projected/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-kube-api-access-4brrp\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.278167 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-metrics\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.278194 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-collector-token\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.278224 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-collector-syslog-receiver\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.278251 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-config-openshift-service-cacrt\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.278277 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-config\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.379710 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-trusted-ca\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.379784 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-datadir\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.379870 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-sa-token\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.379891 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-entrypoint\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.379896 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-datadir\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.379919 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-tmp\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.379974 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4brrp\" (UniqueName: \"kubernetes.io/projected/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-kube-api-access-4brrp\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.379998 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-metrics\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.380018 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-collector-token\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc 
kubenswrapper[4830]: I0131 09:16:18.380064 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-collector-syslog-receiver\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.380086 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-config-openshift-service-cacrt\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.380113 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-config\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.381178 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-trusted-ca\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.381783 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-config\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.382035 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-entrypoint\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.382663 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-config-openshift-service-cacrt\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.387367 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-tmp\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.387754 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-metrics\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.389132 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-collector-syslog-receiver\") pod 
\"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.389497 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-collector-token\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.402216 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-sa-token\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.409389 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4brrp\" (UniqueName: \"kubernetes.io/projected/eebf46b0-2ea9-47eb-963c-911a9f3e3f1b-kube-api-access-4brrp\") pod \"collector-mfmq7\" (UID: \"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b\") " pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.460626 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-mfmq7" Jan 31 09:16:18 crc kubenswrapper[4830]: I0131 09:16:18.723354 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-mfmq7"] Jan 31 09:16:19 crc kubenswrapper[4830]: I0131 09:16:19.056953 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-mfmq7" event={"ID":"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b","Type":"ContainerStarted","Data":"1b2fa710d90609c539b9e19dd4a9fb1aae8298549eb353c76f04ca334e8c55bf"} Jan 31 09:16:24 crc kubenswrapper[4830]: I0131 09:16:24.375399 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-84rwf" Jan 31 09:16:24 crc kubenswrapper[4830]: I0131 09:16:24.376068 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-84rwf" Jan 31 09:16:24 crc kubenswrapper[4830]: I0131 09:16:24.455964 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-84rwf" Jan 31 09:16:25 crc kubenswrapper[4830]: I0131 09:16:25.151590 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-84rwf" Jan 31 09:16:25 crc kubenswrapper[4830]: I0131 09:16:25.213090 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-84rwf"] Jan 31 09:16:26 crc kubenswrapper[4830]: I0131 09:16:26.120085 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-mfmq7" event={"ID":"eebf46b0-2ea9-47eb-963c-911a9f3e3f1b","Type":"ContainerStarted","Data":"34a12f233ea488796e52e0d081e28f2cd88a0e66978876276683dd2208a4a8b6"} Jan 31 09:16:26 crc kubenswrapper[4830]: I0131 09:16:26.153047 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-mfmq7" podStartSLOduration=1.564395644 podStartE2EDuration="8.153015145s" podCreationTimestamp="2026-01-31 09:16:18 +0000 UTC" firstStartedPulling="2026-01-31 09:16:18.746163761 +0000 UTC m=+923.239526203" lastFinishedPulling="2026-01-31 09:16:25.334783262 +0000 UTC m=+929.828145704" 
observedRunningTime="2026-01-31 09:16:26.147053974 +0000 UTC m=+930.640416546" watchObservedRunningTime="2026-01-31 09:16:26.153015145 +0000 UTC m=+930.646377587" Jan 31 09:16:27 crc kubenswrapper[4830]: I0131 09:16:27.130747 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-84rwf" podUID="f6798858-8042-4711-94f4-e021fb446569" containerName="registry-server" containerID="cri-o://bff6a475deb5f446fe921003a1c21bfdcb97ba14ef28f451733ca1683405e6fe" gracePeriod=2 Jan 31 09:16:28 crc kubenswrapper[4830]: I0131 09:16:28.672761 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-84rwf" Jan 31 09:16:28 crc kubenswrapper[4830]: I0131 09:16:28.790788 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6798858-8042-4711-94f4-e021fb446569-utilities\") pod \"f6798858-8042-4711-94f4-e021fb446569\" (UID: \"f6798858-8042-4711-94f4-e021fb446569\") " Jan 31 09:16:28 crc kubenswrapper[4830]: I0131 09:16:28.791298 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6798858-8042-4711-94f4-e021fb446569-catalog-content\") pod \"f6798858-8042-4711-94f4-e021fb446569\" (UID: \"f6798858-8042-4711-94f4-e021fb446569\") " Jan 31 09:16:28 crc kubenswrapper[4830]: I0131 09:16:28.791429 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lw2v\" (UniqueName: \"kubernetes.io/projected/f6798858-8042-4711-94f4-e021fb446569-kube-api-access-6lw2v\") pod \"f6798858-8042-4711-94f4-e021fb446569\" (UID: \"f6798858-8042-4711-94f4-e021fb446569\") " Jan 31 09:16:28 crc kubenswrapper[4830]: I0131 09:16:28.792620 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6798858-8042-4711-94f4-e021fb446569-utilities" (OuterVolumeSpecName: "utilities") pod "f6798858-8042-4711-94f4-e021fb446569" (UID: "f6798858-8042-4711-94f4-e021fb446569"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:16:28 crc kubenswrapper[4830]: I0131 09:16:28.799016 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6798858-8042-4711-94f4-e021fb446569-kube-api-access-6lw2v" (OuterVolumeSpecName: "kube-api-access-6lw2v") pod "f6798858-8042-4711-94f4-e021fb446569" (UID: "f6798858-8042-4711-94f4-e021fb446569"). InnerVolumeSpecName "kube-api-access-6lw2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:16:28 crc kubenswrapper[4830]: I0131 09:16:28.834577 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6798858-8042-4711-94f4-e021fb446569-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6798858-8042-4711-94f4-e021fb446569" (UID: "f6798858-8042-4711-94f4-e021fb446569"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:16:28 crc kubenswrapper[4830]: I0131 09:16:28.893063 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lw2v\" (UniqueName: \"kubernetes.io/projected/f6798858-8042-4711-94f4-e021fb446569-kube-api-access-6lw2v\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:28 crc kubenswrapper[4830]: I0131 09:16:28.893107 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6798858-8042-4711-94f4-e021fb446569-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:28 crc kubenswrapper[4830]: I0131 09:16:28.893120 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6798858-8042-4711-94f4-e021fb446569-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:29 crc kubenswrapper[4830]: I0131 09:16:29.153202 4830 generic.go:334] "Generic (PLEG): container finished" podID="f6798858-8042-4711-94f4-e021fb446569" containerID="bff6a475deb5f446fe921003a1c21bfdcb97ba14ef28f451733ca1683405e6fe" exitCode=0 Jan 31 09:16:29 crc kubenswrapper[4830]: I0131 09:16:29.153296 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-84rwf" Jan 31 09:16:29 crc kubenswrapper[4830]: I0131 09:16:29.153287 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84rwf" event={"ID":"f6798858-8042-4711-94f4-e021fb446569","Type":"ContainerDied","Data":"bff6a475deb5f446fe921003a1c21bfdcb97ba14ef28f451733ca1683405e6fe"} Jan 31 09:16:29 crc kubenswrapper[4830]: I0131 09:16:29.153456 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84rwf" event={"ID":"f6798858-8042-4711-94f4-e021fb446569","Type":"ContainerDied","Data":"8af9679ff2f9b8babeac2954a66dddd8ab8c7871cea75ec5da361514ba51b13e"} Jan 31 09:16:29 crc kubenswrapper[4830]: I0131 09:16:29.153479 4830 scope.go:117] "RemoveContainer" containerID="bff6a475deb5f446fe921003a1c21bfdcb97ba14ef28f451733ca1683405e6fe" Jan 31 09:16:29 crc kubenswrapper[4830]: I0131 09:16:29.186500 4830 scope.go:117] "RemoveContainer" containerID="78b745af7afe2fd97986e1fafcffe38b5bd60b399ad2f8a83e2b87f497338159" Jan 31 09:16:29 crc kubenswrapper[4830]: I0131 09:16:29.200189 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-84rwf"] Jan 31 09:16:29 crc kubenswrapper[4830]: I0131 09:16:29.211854 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-84rwf"] Jan 31 09:16:29 crc kubenswrapper[4830]: I0131 09:16:29.242153 4830 scope.go:117] "RemoveContainer" containerID="073ac8f99f72016026606f7d426eccc383bbe7303980d9b32e328dfd8b143d29" Jan 31 09:16:29 crc kubenswrapper[4830]: I0131 09:16:29.264481 4830 scope.go:117] "RemoveContainer" containerID="bff6a475deb5f446fe921003a1c21bfdcb97ba14ef28f451733ca1683405e6fe" Jan 31 09:16:29 crc kubenswrapper[4830]: E0131 09:16:29.264949 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bff6a475deb5f446fe921003a1c21bfdcb97ba14ef28f451733ca1683405e6fe\": container with ID starting with bff6a475deb5f446fe921003a1c21bfdcb97ba14ef28f451733ca1683405e6fe not found: ID does not exist" containerID="bff6a475deb5f446fe921003a1c21bfdcb97ba14ef28f451733ca1683405e6fe" Jan 31 09:16:29 crc kubenswrapper[4830]: I0131 09:16:29.264999 
4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bff6a475deb5f446fe921003a1c21bfdcb97ba14ef28f451733ca1683405e6fe"} err="failed to get container status \"bff6a475deb5f446fe921003a1c21bfdcb97ba14ef28f451733ca1683405e6fe\": rpc error: code = NotFound desc = could not find container \"bff6a475deb5f446fe921003a1c21bfdcb97ba14ef28f451733ca1683405e6fe\": container with ID starting with bff6a475deb5f446fe921003a1c21bfdcb97ba14ef28f451733ca1683405e6fe not found: ID does not exist" Jan 31 09:16:29 crc kubenswrapper[4830]: I0131 09:16:29.265037 4830 scope.go:117] "RemoveContainer" containerID="78b745af7afe2fd97986e1fafcffe38b5bd60b399ad2f8a83e2b87f497338159" Jan 31 09:16:29 crc kubenswrapper[4830]: E0131 09:16:29.265683 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78b745af7afe2fd97986e1fafcffe38b5bd60b399ad2f8a83e2b87f497338159\": container with ID starting with 78b745af7afe2fd97986e1fafcffe38b5bd60b399ad2f8a83e2b87f497338159 not found: ID does not exist" containerID="78b745af7afe2fd97986e1fafcffe38b5bd60b399ad2f8a83e2b87f497338159" Jan 31 09:16:29 crc kubenswrapper[4830]: I0131 09:16:29.265706 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78b745af7afe2fd97986e1fafcffe38b5bd60b399ad2f8a83e2b87f497338159"} err="failed to get container status \"78b745af7afe2fd97986e1fafcffe38b5bd60b399ad2f8a83e2b87f497338159\": rpc error: code = NotFound desc = could not find container \"78b745af7afe2fd97986e1fafcffe38b5bd60b399ad2f8a83e2b87f497338159\": container with ID starting with 78b745af7afe2fd97986e1fafcffe38b5bd60b399ad2f8a83e2b87f497338159 not found: ID does not exist" Jan 31 09:16:29 crc kubenswrapper[4830]: I0131 09:16:29.265869 4830 scope.go:117] "RemoveContainer" containerID="073ac8f99f72016026606f7d426eccc383bbe7303980d9b32e328dfd8b143d29" Jan 31 09:16:29 crc kubenswrapper[4830]: E0131 09:16:29.266451 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"073ac8f99f72016026606f7d426eccc383bbe7303980d9b32e328dfd8b143d29\": container with ID starting with 073ac8f99f72016026606f7d426eccc383bbe7303980d9b32e328dfd8b143d29 not found: ID does not exist" containerID="073ac8f99f72016026606f7d426eccc383bbe7303980d9b32e328dfd8b143d29" Jan 31 09:16:29 crc kubenswrapper[4830]: I0131 09:16:29.266507 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"073ac8f99f72016026606f7d426eccc383bbe7303980d9b32e328dfd8b143d29"} err="failed to get container status \"073ac8f99f72016026606f7d426eccc383bbe7303980d9b32e328dfd8b143d29\": rpc error: code = NotFound desc = could not find container \"073ac8f99f72016026606f7d426eccc383bbe7303980d9b32e328dfd8b143d29\": container with ID starting with 073ac8f99f72016026606f7d426eccc383bbe7303980d9b32e328dfd8b143d29 not found: ID does not exist" Jan 31 09:16:30 crc kubenswrapper[4830]: I0131 09:16:30.265237 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6798858-8042-4711-94f4-e021fb446569" path="/var/lib/kubelet/pods/f6798858-8042-4711-94f4-e021fb446569/volumes" Jan 31 09:16:36 crc kubenswrapper[4830]: I0131 09:16:36.361770 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4j9zh"] Jan 31 09:16:36 crc kubenswrapper[4830]: E0131 09:16:36.363068 4830 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f6798858-8042-4711-94f4-e021fb446569" containerName="registry-server" Jan 31 09:16:36 crc kubenswrapper[4830]: I0131 09:16:36.363086 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6798858-8042-4711-94f4-e021fb446569" containerName="registry-server" Jan 31 09:16:36 crc kubenswrapper[4830]: E0131 09:16:36.363128 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6798858-8042-4711-94f4-e021fb446569" containerName="extract-utilities" Jan 31 09:16:36 crc kubenswrapper[4830]: I0131 09:16:36.363136 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6798858-8042-4711-94f4-e021fb446569" containerName="extract-utilities" Jan 31 09:16:36 crc kubenswrapper[4830]: E0131 09:16:36.363147 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6798858-8042-4711-94f4-e021fb446569" containerName="extract-content" Jan 31 09:16:36 crc kubenswrapper[4830]: I0131 09:16:36.363155 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6798858-8042-4711-94f4-e021fb446569" containerName="extract-content" Jan 31 09:16:36 crc kubenswrapper[4830]: I0131 09:16:36.363336 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6798858-8042-4711-94f4-e021fb446569" containerName="registry-server" Jan 31 09:16:36 crc kubenswrapper[4830]: I0131 09:16:36.364789 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4j9zh" Jan 31 09:16:36 crc kubenswrapper[4830]: I0131 09:16:36.378363 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4j9zh"] Jan 31 09:16:36 crc kubenswrapper[4830]: I0131 09:16:36.436541 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4df8\" (UniqueName: \"kubernetes.io/projected/b4921c53-df4f-4e25-96c0-aec6d74b8906-kube-api-access-t4df8\") pod \"community-operators-4j9zh\" (UID: \"b4921c53-df4f-4e25-96c0-aec6d74b8906\") " pod="openshift-marketplace/community-operators-4j9zh" Jan 31 09:16:36 crc kubenswrapper[4830]: I0131 09:16:36.436949 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4921c53-df4f-4e25-96c0-aec6d74b8906-utilities\") pod \"community-operators-4j9zh\" (UID: \"b4921c53-df4f-4e25-96c0-aec6d74b8906\") " pod="openshift-marketplace/community-operators-4j9zh" Jan 31 09:16:36 crc kubenswrapper[4830]: I0131 09:16:36.437016 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4921c53-df4f-4e25-96c0-aec6d74b8906-catalog-content\") pod \"community-operators-4j9zh\" (UID: \"b4921c53-df4f-4e25-96c0-aec6d74b8906\") " pod="openshift-marketplace/community-operators-4j9zh" Jan 31 09:16:36 crc kubenswrapper[4830]: I0131 09:16:36.538563 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4df8\" (UniqueName: \"kubernetes.io/projected/b4921c53-df4f-4e25-96c0-aec6d74b8906-kube-api-access-t4df8\") pod \"community-operators-4j9zh\" (UID: \"b4921c53-df4f-4e25-96c0-aec6d74b8906\") " pod="openshift-marketplace/community-operators-4j9zh" Jan 31 09:16:36 crc kubenswrapper[4830]: I0131 09:16:36.538660 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4921c53-df4f-4e25-96c0-aec6d74b8906-utilities\") pod 
\"community-operators-4j9zh\" (UID: \"b4921c53-df4f-4e25-96c0-aec6d74b8906\") " pod="openshift-marketplace/community-operators-4j9zh" Jan 31 09:16:36 crc kubenswrapper[4830]: I0131 09:16:36.538687 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4921c53-df4f-4e25-96c0-aec6d74b8906-catalog-content\") pod \"community-operators-4j9zh\" (UID: \"b4921c53-df4f-4e25-96c0-aec6d74b8906\") " pod="openshift-marketplace/community-operators-4j9zh" Jan 31 09:16:36 crc kubenswrapper[4830]: I0131 09:16:36.539370 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4921c53-df4f-4e25-96c0-aec6d74b8906-utilities\") pod \"community-operators-4j9zh\" (UID: \"b4921c53-df4f-4e25-96c0-aec6d74b8906\") " pod="openshift-marketplace/community-operators-4j9zh" Jan 31 09:16:36 crc kubenswrapper[4830]: I0131 09:16:36.539408 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4921c53-df4f-4e25-96c0-aec6d74b8906-catalog-content\") pod \"community-operators-4j9zh\" (UID: \"b4921c53-df4f-4e25-96c0-aec6d74b8906\") " pod="openshift-marketplace/community-operators-4j9zh" Jan 31 09:16:36 crc kubenswrapper[4830]: I0131 09:16:36.563700 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4df8\" (UniqueName: \"kubernetes.io/projected/b4921c53-df4f-4e25-96c0-aec6d74b8906-kube-api-access-t4df8\") pod \"community-operators-4j9zh\" (UID: \"b4921c53-df4f-4e25-96c0-aec6d74b8906\") " pod="openshift-marketplace/community-operators-4j9zh" Jan 31 09:16:36 crc kubenswrapper[4830]: I0131 09:16:36.692951 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4j9zh" Jan 31 09:16:37 crc kubenswrapper[4830]: I0131 09:16:37.390608 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4j9zh"] Jan 31 09:16:38 crc kubenswrapper[4830]: I0131 09:16:38.232246 4830 generic.go:334] "Generic (PLEG): container finished" podID="b4921c53-df4f-4e25-96c0-aec6d74b8906" containerID="3fcd66ca17678bb68e6968017f1700fdab61ca800722a64d01fb0c7c69cbe8f0" exitCode=0 Jan 31 09:16:38 crc kubenswrapper[4830]: I0131 09:16:38.232298 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j9zh" event={"ID":"b4921c53-df4f-4e25-96c0-aec6d74b8906","Type":"ContainerDied","Data":"3fcd66ca17678bb68e6968017f1700fdab61ca800722a64d01fb0c7c69cbe8f0"} Jan 31 09:16:38 crc kubenswrapper[4830]: I0131 09:16:38.232356 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j9zh" event={"ID":"b4921c53-df4f-4e25-96c0-aec6d74b8906","Type":"ContainerStarted","Data":"ba0d6d9f04660559c0a42704e6fc9200d6e5953527a1de814250d7c413640e7b"} Jan 31 09:16:39 crc kubenswrapper[4830]: I0131 09:16:39.245054 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j9zh" event={"ID":"b4921c53-df4f-4e25-96c0-aec6d74b8906","Type":"ContainerStarted","Data":"5098090087a3d1c6449285f63a1d50710256770aa32333e3e8844c73bf78d659"} Jan 31 09:16:40 crc kubenswrapper[4830]: I0131 09:16:40.262512 4830 generic.go:334] "Generic (PLEG): container finished" podID="b4921c53-df4f-4e25-96c0-aec6d74b8906" containerID="5098090087a3d1c6449285f63a1d50710256770aa32333e3e8844c73bf78d659" exitCode=0 Jan 31 09:16:40 crc kubenswrapper[4830]: I0131 09:16:40.270876 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j9zh" event={"ID":"b4921c53-df4f-4e25-96c0-aec6d74b8906","Type":"ContainerDied","Data":"5098090087a3d1c6449285f63a1d50710256770aa32333e3e8844c73bf78d659"} Jan 31 09:16:41 crc kubenswrapper[4830]: I0131 09:16:41.277798 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j9zh" event={"ID":"b4921c53-df4f-4e25-96c0-aec6d74b8906","Type":"ContainerStarted","Data":"087f9c07d62def316f19e8b97b0c85f97b874454fd095b069b1cc4f19d66603e"} Jan 31 09:16:41 crc kubenswrapper[4830]: I0131 09:16:41.303100 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4j9zh" podStartSLOduration=2.654868888 podStartE2EDuration="5.303076947s" podCreationTimestamp="2026-01-31 09:16:36 +0000 UTC" firstStartedPulling="2026-01-31 09:16:38.235137387 +0000 UTC m=+942.728499829" lastFinishedPulling="2026-01-31 09:16:40.883345446 +0000 UTC m=+945.376707888" observedRunningTime="2026-01-31 09:16:41.29758757 +0000 UTC m=+945.790950012" watchObservedRunningTime="2026-01-31 09:16:41.303076947 +0000 UTC m=+945.796439389" Jan 31 09:16:44 crc kubenswrapper[4830]: I0131 09:16:44.353421 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:16:44 crc kubenswrapper[4830]: I0131 09:16:44.354217 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" 
podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:16:44 crc kubenswrapper[4830]: I0131 09:16:44.354283 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 09:16:44 crc kubenswrapper[4830]: I0131 09:16:44.355466 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b9a249a59033511b4c694877132f9e35c14cbd330f48a89cd21a667a4732ff74"} pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 09:16:44 crc kubenswrapper[4830]: I0131 09:16:44.355540 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" containerID="cri-o://b9a249a59033511b4c694877132f9e35c14cbd330f48a89cd21a667a4732ff74" gracePeriod=600 Jan 31 09:16:45 crc kubenswrapper[4830]: I0131 09:16:45.309550 4830 generic.go:334] "Generic (PLEG): container finished" podID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerID="b9a249a59033511b4c694877132f9e35c14cbd330f48a89cd21a667a4732ff74" exitCode=0 Jan 31 09:16:45 crc kubenswrapper[4830]: I0131 09:16:45.310293 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerDied","Data":"b9a249a59033511b4c694877132f9e35c14cbd330f48a89cd21a667a4732ff74"} Jan 31 09:16:45 crc kubenswrapper[4830]: I0131 09:16:45.310325 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerStarted","Data":"6ae573c7c9ad02ecbf718005230310a2ac720cf9510afe4a2b4cb658fc772187"} Jan 31 09:16:45 crc kubenswrapper[4830]: I0131 09:16:45.310345 4830 scope.go:117] "RemoveContainer" containerID="28b103ac2ba54a2d7fb62b9e350f386540aa590898443607b7a7ceffbe4db67d" Jan 31 09:16:46 crc kubenswrapper[4830]: I0131 09:16:46.693213 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4j9zh" Jan 31 09:16:46 crc kubenswrapper[4830]: I0131 09:16:46.693885 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4j9zh" Jan 31 09:16:46 crc kubenswrapper[4830]: I0131 09:16:46.757772 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4j9zh" Jan 31 09:16:47 crc kubenswrapper[4830]: I0131 09:16:47.396991 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4j9zh" Jan 31 09:16:47 crc kubenswrapper[4830]: I0131 09:16:47.451430 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4j9zh"] Jan 31 09:16:49 crc kubenswrapper[4830]: I0131 09:16:49.364267 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4j9zh" podUID="b4921c53-df4f-4e25-96c0-aec6d74b8906" containerName="registry-server" 
containerID="cri-o://087f9c07d62def316f19e8b97b0c85f97b874454fd095b069b1cc4f19d66603e" gracePeriod=2 Jan 31 09:16:49 crc kubenswrapper[4830]: I0131 09:16:49.869859 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4j9zh" Jan 31 09:16:49 crc kubenswrapper[4830]: I0131 09:16:49.994044 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4921c53-df4f-4e25-96c0-aec6d74b8906-catalog-content\") pod \"b4921c53-df4f-4e25-96c0-aec6d74b8906\" (UID: \"b4921c53-df4f-4e25-96c0-aec6d74b8906\") " Jan 31 09:16:49 crc kubenswrapper[4830]: I0131 09:16:49.994180 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4921c53-df4f-4e25-96c0-aec6d74b8906-utilities\") pod \"b4921c53-df4f-4e25-96c0-aec6d74b8906\" (UID: \"b4921c53-df4f-4e25-96c0-aec6d74b8906\") " Jan 31 09:16:49 crc kubenswrapper[4830]: I0131 09:16:49.994263 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4df8\" (UniqueName: \"kubernetes.io/projected/b4921c53-df4f-4e25-96c0-aec6d74b8906-kube-api-access-t4df8\") pod \"b4921c53-df4f-4e25-96c0-aec6d74b8906\" (UID: \"b4921c53-df4f-4e25-96c0-aec6d74b8906\") " Jan 31 09:16:49 crc kubenswrapper[4830]: I0131 09:16:49.995860 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4921c53-df4f-4e25-96c0-aec6d74b8906-utilities" (OuterVolumeSpecName: "utilities") pod "b4921c53-df4f-4e25-96c0-aec6d74b8906" (UID: "b4921c53-df4f-4e25-96c0-aec6d74b8906"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:16:50 crc kubenswrapper[4830]: I0131 09:16:50.001738 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4921c53-df4f-4e25-96c0-aec6d74b8906-kube-api-access-t4df8" (OuterVolumeSpecName: "kube-api-access-t4df8") pod "b4921c53-df4f-4e25-96c0-aec6d74b8906" (UID: "b4921c53-df4f-4e25-96c0-aec6d74b8906"). InnerVolumeSpecName "kube-api-access-t4df8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:16:50 crc kubenswrapper[4830]: I0131 09:16:50.096675 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4921c53-df4f-4e25-96c0-aec6d74b8906-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:50 crc kubenswrapper[4830]: I0131 09:16:50.096718 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4df8\" (UniqueName: \"kubernetes.io/projected/b4921c53-df4f-4e25-96c0-aec6d74b8906-kube-api-access-t4df8\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:50 crc kubenswrapper[4830]: I0131 09:16:50.377488 4830 generic.go:334] "Generic (PLEG): container finished" podID="b4921c53-df4f-4e25-96c0-aec6d74b8906" containerID="087f9c07d62def316f19e8b97b0c85f97b874454fd095b069b1cc4f19d66603e" exitCode=0 Jan 31 09:16:50 crc kubenswrapper[4830]: I0131 09:16:50.377563 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4j9zh" Jan 31 09:16:50 crc kubenswrapper[4830]: I0131 09:16:50.377563 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j9zh" event={"ID":"b4921c53-df4f-4e25-96c0-aec6d74b8906","Type":"ContainerDied","Data":"087f9c07d62def316f19e8b97b0c85f97b874454fd095b069b1cc4f19d66603e"} Jan 31 09:16:50 crc kubenswrapper[4830]: I0131 09:16:50.377714 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j9zh" event={"ID":"b4921c53-df4f-4e25-96c0-aec6d74b8906","Type":"ContainerDied","Data":"ba0d6d9f04660559c0a42704e6fc9200d6e5953527a1de814250d7c413640e7b"} Jan 31 09:16:50 crc kubenswrapper[4830]: I0131 09:16:50.377766 4830 scope.go:117] "RemoveContainer" containerID="087f9c07d62def316f19e8b97b0c85f97b874454fd095b069b1cc4f19d66603e" Jan 31 09:16:50 crc kubenswrapper[4830]: I0131 09:16:50.404958 4830 scope.go:117] "RemoveContainer" containerID="5098090087a3d1c6449285f63a1d50710256770aa32333e3e8844c73bf78d659" Jan 31 09:16:50 crc kubenswrapper[4830]: I0131 09:16:50.436708 4830 scope.go:117] "RemoveContainer" containerID="3fcd66ca17678bb68e6968017f1700fdab61ca800722a64d01fb0c7c69cbe8f0" Jan 31 09:16:50 crc kubenswrapper[4830]: I0131 09:16:50.459464 4830 scope.go:117] "RemoveContainer" containerID="087f9c07d62def316f19e8b97b0c85f97b874454fd095b069b1cc4f19d66603e" Jan 31 09:16:50 crc kubenswrapper[4830]: E0131 09:16:50.460202 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"087f9c07d62def316f19e8b97b0c85f97b874454fd095b069b1cc4f19d66603e\": container with ID starting with 087f9c07d62def316f19e8b97b0c85f97b874454fd095b069b1cc4f19d66603e not found: ID does not exist" containerID="087f9c07d62def316f19e8b97b0c85f97b874454fd095b069b1cc4f19d66603e" Jan 31 09:16:50 crc kubenswrapper[4830]: I0131 09:16:50.460266 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"087f9c07d62def316f19e8b97b0c85f97b874454fd095b069b1cc4f19d66603e"} err="failed to get container status \"087f9c07d62def316f19e8b97b0c85f97b874454fd095b069b1cc4f19d66603e\": rpc error: code = NotFound desc = could not find container \"087f9c07d62def316f19e8b97b0c85f97b874454fd095b069b1cc4f19d66603e\": container with ID starting with 087f9c07d62def316f19e8b97b0c85f97b874454fd095b069b1cc4f19d66603e not found: ID does not exist" Jan 31 09:16:50 crc kubenswrapper[4830]: I0131 09:16:50.460304 4830 scope.go:117] "RemoveContainer" containerID="5098090087a3d1c6449285f63a1d50710256770aa32333e3e8844c73bf78d659" Jan 31 09:16:50 crc kubenswrapper[4830]: E0131 09:16:50.460691 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5098090087a3d1c6449285f63a1d50710256770aa32333e3e8844c73bf78d659\": container with ID starting with 5098090087a3d1c6449285f63a1d50710256770aa32333e3e8844c73bf78d659 not found: ID does not exist" containerID="5098090087a3d1c6449285f63a1d50710256770aa32333e3e8844c73bf78d659" Jan 31 09:16:50 crc kubenswrapper[4830]: I0131 09:16:50.460803 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5098090087a3d1c6449285f63a1d50710256770aa32333e3e8844c73bf78d659"} err="failed to get container status \"5098090087a3d1c6449285f63a1d50710256770aa32333e3e8844c73bf78d659\": rpc error: code = NotFound desc = could not find container 
\"5098090087a3d1c6449285f63a1d50710256770aa32333e3e8844c73bf78d659\": container with ID starting with 5098090087a3d1c6449285f63a1d50710256770aa32333e3e8844c73bf78d659 not found: ID does not exist" Jan 31 09:16:50 crc kubenswrapper[4830]: I0131 09:16:50.460848 4830 scope.go:117] "RemoveContainer" containerID="3fcd66ca17678bb68e6968017f1700fdab61ca800722a64d01fb0c7c69cbe8f0" Jan 31 09:16:50 crc kubenswrapper[4830]: E0131 09:16:50.461204 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fcd66ca17678bb68e6968017f1700fdab61ca800722a64d01fb0c7c69cbe8f0\": container with ID starting with 3fcd66ca17678bb68e6968017f1700fdab61ca800722a64d01fb0c7c69cbe8f0 not found: ID does not exist" containerID="3fcd66ca17678bb68e6968017f1700fdab61ca800722a64d01fb0c7c69cbe8f0" Jan 31 09:16:50 crc kubenswrapper[4830]: I0131 09:16:50.461299 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fcd66ca17678bb68e6968017f1700fdab61ca800722a64d01fb0c7c69cbe8f0"} err="failed to get container status \"3fcd66ca17678bb68e6968017f1700fdab61ca800722a64d01fb0c7c69cbe8f0\": rpc error: code = NotFound desc = could not find container \"3fcd66ca17678bb68e6968017f1700fdab61ca800722a64d01fb0c7c69cbe8f0\": container with ID starting with 3fcd66ca17678bb68e6968017f1700fdab61ca800722a64d01fb0c7c69cbe8f0 not found: ID does not exist" Jan 31 09:16:50 crc kubenswrapper[4830]: I0131 09:16:50.867043 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4921c53-df4f-4e25-96c0-aec6d74b8906-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4921c53-df4f-4e25-96c0-aec6d74b8906" (UID: "b4921c53-df4f-4e25-96c0-aec6d74b8906"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:16:50 crc kubenswrapper[4830]: I0131 09:16:50.923706 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4921c53-df4f-4e25-96c0-aec6d74b8906-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:51 crc kubenswrapper[4830]: I0131 09:16:51.020996 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4j9zh"] Jan 31 09:16:51 crc kubenswrapper[4830]: I0131 09:16:51.030630 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4j9zh"] Jan 31 09:16:52 crc kubenswrapper[4830]: I0131 09:16:52.265819 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4921c53-df4f-4e25-96c0-aec6d74b8906" path="/var/lib/kubelet/pods/b4921c53-df4f-4e25-96c0-aec6d74b8906/volumes" Jan 31 09:16:54 crc kubenswrapper[4830]: I0131 09:16:54.806046 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl"] Jan 31 09:16:54 crc kubenswrapper[4830]: E0131 09:16:54.806673 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4921c53-df4f-4e25-96c0-aec6d74b8906" containerName="extract-utilities" Jan 31 09:16:54 crc kubenswrapper[4830]: I0131 09:16:54.806688 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4921c53-df4f-4e25-96c0-aec6d74b8906" containerName="extract-utilities" Jan 31 09:16:54 crc kubenswrapper[4830]: E0131 09:16:54.806707 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4921c53-df4f-4e25-96c0-aec6d74b8906" containerName="extract-content" Jan 31 09:16:54 crc kubenswrapper[4830]: I0131 09:16:54.806713 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4921c53-df4f-4e25-96c0-aec6d74b8906" containerName="extract-content" Jan 31 09:16:54 crc kubenswrapper[4830]: E0131 09:16:54.806746 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4921c53-df4f-4e25-96c0-aec6d74b8906" containerName="registry-server" Jan 31 09:16:54 crc kubenswrapper[4830]: I0131 09:16:54.806753 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4921c53-df4f-4e25-96c0-aec6d74b8906" containerName="registry-server" Jan 31 09:16:54 crc kubenswrapper[4830]: I0131 09:16:54.806899 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4921c53-df4f-4e25-96c0-aec6d74b8906" containerName="registry-server" Jan 31 09:16:54 crc kubenswrapper[4830]: I0131 09:16:54.808119 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl" Jan 31 09:16:54 crc kubenswrapper[4830]: I0131 09:16:54.816870 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 31 09:16:54 crc kubenswrapper[4830]: I0131 09:16:54.831490 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl"] Jan 31 09:16:54 crc kubenswrapper[4830]: I0131 09:16:54.996466 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2e45971e-893f-4389-b33c-688089a3f7ec-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl\" (UID: \"2e45971e-893f-4389-b33c-688089a3f7ec\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl" Jan 31 09:16:54 crc kubenswrapper[4830]: I0131 09:16:54.996675 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e45971e-893f-4389-b33c-688089a3f7ec-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl\" (UID: \"2e45971e-893f-4389-b33c-688089a3f7ec\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl" Jan 31 09:16:54 crc kubenswrapper[4830]: I0131 09:16:54.996711 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxllg\" (UniqueName: \"kubernetes.io/projected/2e45971e-893f-4389-b33c-688089a3f7ec-kube-api-access-jxllg\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl\" (UID: \"2e45971e-893f-4389-b33c-688089a3f7ec\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl" Jan 31 09:16:55 crc kubenswrapper[4830]: I0131 09:16:55.100116 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2e45971e-893f-4389-b33c-688089a3f7ec-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl\" (UID: \"2e45971e-893f-4389-b33c-688089a3f7ec\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl" Jan 31 09:16:55 crc kubenswrapper[4830]: I0131 09:16:55.100295 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e45971e-893f-4389-b33c-688089a3f7ec-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl\" (UID: \"2e45971e-893f-4389-b33c-688089a3f7ec\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl" Jan 31 09:16:55 crc kubenswrapper[4830]: I0131 09:16:55.100524 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxllg\" (UniqueName: \"kubernetes.io/projected/2e45971e-893f-4389-b33c-688089a3f7ec-kube-api-access-jxllg\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl\" (UID: \"2e45971e-893f-4389-b33c-688089a3f7ec\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl" Jan 31 09:16:55 crc kubenswrapper[4830]: I0131 09:16:55.101095 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/2e45971e-893f-4389-b33c-688089a3f7ec-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl\" (UID: \"2e45971e-893f-4389-b33c-688089a3f7ec\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl" Jan 31 09:16:55 crc kubenswrapper[4830]: I0131 09:16:55.102130 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e45971e-893f-4389-b33c-688089a3f7ec-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl\" (UID: \"2e45971e-893f-4389-b33c-688089a3f7ec\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl" Jan 31 09:16:55 crc kubenswrapper[4830]: I0131 09:16:55.126123 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxllg\" (UniqueName: \"kubernetes.io/projected/2e45971e-893f-4389-b33c-688089a3f7ec-kube-api-access-jxllg\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl\" (UID: \"2e45971e-893f-4389-b33c-688089a3f7ec\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl" Jan 31 09:16:55 crc kubenswrapper[4830]: I0131 09:16:55.425954 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl" Jan 31 09:16:55 crc kubenswrapper[4830]: I0131 09:16:55.915289 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl"] Jan 31 09:16:56 crc kubenswrapper[4830]: E0131 09:16:56.347816 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e45971e_893f_4389_b33c_688089a3f7ec.slice/crio-e036892298d289217e189bb03fad399b1c93fef175be45ee2722e77696334415.scope\": RecentStats: unable to find data in memory cache]" Jan 31 09:16:56 crc kubenswrapper[4830]: I0131 09:16:56.453478 4830 generic.go:334] "Generic (PLEG): container finished" podID="2e45971e-893f-4389-b33c-688089a3f7ec" containerID="e036892298d289217e189bb03fad399b1c93fef175be45ee2722e77696334415" exitCode=0 Jan 31 09:16:56 crc kubenswrapper[4830]: I0131 09:16:56.453536 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl" event={"ID":"2e45971e-893f-4389-b33c-688089a3f7ec","Type":"ContainerDied","Data":"e036892298d289217e189bb03fad399b1c93fef175be45ee2722e77696334415"} Jan 31 09:16:56 crc kubenswrapper[4830]: I0131 09:16:56.453569 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl" event={"ID":"2e45971e-893f-4389-b33c-688089a3f7ec","Type":"ContainerStarted","Data":"43b007e075f2809b0dd24f724311ab2e39b7be05c07a9da6f6da04972ec793d4"} Jan 31 09:16:58 crc kubenswrapper[4830]: I0131 09:16:58.467512 4830 generic.go:334] "Generic (PLEG): container finished" podID="2e45971e-893f-4389-b33c-688089a3f7ec" containerID="293ddfdf7605b69b9e9b54c3fa3f00c583220e3088f3eb0d46773c1cc1c2cddf" exitCode=0 Jan 31 09:16:58 crc kubenswrapper[4830]: I0131 09:16:58.467610 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl" 
event={"ID":"2e45971e-893f-4389-b33c-688089a3f7ec","Type":"ContainerDied","Data":"293ddfdf7605b69b9e9b54c3fa3f00c583220e3088f3eb0d46773c1cc1c2cddf"} Jan 31 09:16:59 crc kubenswrapper[4830]: I0131 09:16:59.490149 4830 generic.go:334] "Generic (PLEG): container finished" podID="2e45971e-893f-4389-b33c-688089a3f7ec" containerID="952d08f07e12b38841f82d7c5caac2d61b39d8f4c9faaf0a485d474e9669fe5d" exitCode=0 Jan 31 09:16:59 crc kubenswrapper[4830]: I0131 09:16:59.490223 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl" event={"ID":"2e45971e-893f-4389-b33c-688089a3f7ec","Type":"ContainerDied","Data":"952d08f07e12b38841f82d7c5caac2d61b39d8f4c9faaf0a485d474e9669fe5d"} Jan 31 09:17:00 crc kubenswrapper[4830]: I0131 09:17:00.830228 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl" Jan 31 09:17:00 crc kubenswrapper[4830]: I0131 09:17:00.903305 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxllg\" (UniqueName: \"kubernetes.io/projected/2e45971e-893f-4389-b33c-688089a3f7ec-kube-api-access-jxllg\") pod \"2e45971e-893f-4389-b33c-688089a3f7ec\" (UID: \"2e45971e-893f-4389-b33c-688089a3f7ec\") " Jan 31 09:17:00 crc kubenswrapper[4830]: I0131 09:17:00.903531 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e45971e-893f-4389-b33c-688089a3f7ec-bundle\") pod \"2e45971e-893f-4389-b33c-688089a3f7ec\" (UID: \"2e45971e-893f-4389-b33c-688089a3f7ec\") " Jan 31 09:17:00 crc kubenswrapper[4830]: I0131 09:17:00.903753 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2e45971e-893f-4389-b33c-688089a3f7ec-util\") pod \"2e45971e-893f-4389-b33c-688089a3f7ec\" (UID: \"2e45971e-893f-4389-b33c-688089a3f7ec\") " Jan 31 09:17:00 crc kubenswrapper[4830]: I0131 09:17:00.904200 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e45971e-893f-4389-b33c-688089a3f7ec-bundle" (OuterVolumeSpecName: "bundle") pod "2e45971e-893f-4389-b33c-688089a3f7ec" (UID: "2e45971e-893f-4389-b33c-688089a3f7ec"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:17:00 crc kubenswrapper[4830]: I0131 09:17:00.904702 4830 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e45971e-893f-4389-b33c-688089a3f7ec-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:17:00 crc kubenswrapper[4830]: I0131 09:17:00.910710 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e45971e-893f-4389-b33c-688089a3f7ec-kube-api-access-jxllg" (OuterVolumeSpecName: "kube-api-access-jxllg") pod "2e45971e-893f-4389-b33c-688089a3f7ec" (UID: "2e45971e-893f-4389-b33c-688089a3f7ec"). InnerVolumeSpecName "kube-api-access-jxllg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:17:00 crc kubenswrapper[4830]: I0131 09:17:00.924776 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e45971e-893f-4389-b33c-688089a3f7ec-util" (OuterVolumeSpecName: "util") pod "2e45971e-893f-4389-b33c-688089a3f7ec" (UID: "2e45971e-893f-4389-b33c-688089a3f7ec"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:17:01 crc kubenswrapper[4830]: I0131 09:17:01.007002 4830 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2e45971e-893f-4389-b33c-688089a3f7ec-util\") on node \"crc\" DevicePath \"\"" Jan 31 09:17:01 crc kubenswrapper[4830]: I0131 09:17:01.007041 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxllg\" (UniqueName: \"kubernetes.io/projected/2e45971e-893f-4389-b33c-688089a3f7ec-kube-api-access-jxllg\") on node \"crc\" DevicePath \"\"" Jan 31 09:17:01 crc kubenswrapper[4830]: I0131 09:17:01.508216 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl" event={"ID":"2e45971e-893f-4389-b33c-688089a3f7ec","Type":"ContainerDied","Data":"43b007e075f2809b0dd24f724311ab2e39b7be05c07a9da6f6da04972ec793d4"} Jan 31 09:17:01 crc kubenswrapper[4830]: I0131 09:17:01.508274 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43b007e075f2809b0dd24f724311ab2e39b7be05c07a9da6f6da04972ec793d4" Jan 31 09:17:01 crc kubenswrapper[4830]: I0131 09:17:01.508293 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl" Jan 31 09:17:03 crc kubenswrapper[4830]: I0131 09:17:03.981879 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bb7hz"] Jan 31 09:17:03 crc kubenswrapper[4830]: E0131 09:17:03.983146 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e45971e-893f-4389-b33c-688089a3f7ec" containerName="util" Jan 31 09:17:03 crc kubenswrapper[4830]: I0131 09:17:03.983179 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e45971e-893f-4389-b33c-688089a3f7ec" containerName="util" Jan 31 09:17:03 crc kubenswrapper[4830]: E0131 09:17:03.983225 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e45971e-893f-4389-b33c-688089a3f7ec" containerName="extract" Jan 31 09:17:03 crc kubenswrapper[4830]: I0131 09:17:03.983244 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e45971e-893f-4389-b33c-688089a3f7ec" containerName="extract" Jan 31 09:17:03 crc kubenswrapper[4830]: E0131 09:17:03.983300 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e45971e-893f-4389-b33c-688089a3f7ec" containerName="pull" Jan 31 09:17:03 crc kubenswrapper[4830]: I0131 09:17:03.983317 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e45971e-893f-4389-b33c-688089a3f7ec" containerName="pull" Jan 31 09:17:03 crc kubenswrapper[4830]: I0131 09:17:03.983639 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e45971e-893f-4389-b33c-688089a3f7ec" containerName="extract" Jan 31 09:17:03 crc kubenswrapper[4830]: I0131 09:17:03.984798 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-bb7hz" Jan 31 09:17:03 crc kubenswrapper[4830]: I0131 09:17:03.987777 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 31 09:17:03 crc kubenswrapper[4830]: I0131 09:17:03.988966 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 31 09:17:03 crc kubenswrapper[4830]: I0131 09:17:03.990102 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-v5tc8" Jan 31 09:17:03 crc kubenswrapper[4830]: I0131 09:17:03.998084 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bb7hz"] Jan 31 09:17:04 crc kubenswrapper[4830]: I0131 09:17:04.061547 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d752\" (UniqueName: \"kubernetes.io/projected/2c310223-ad74-4147-9aac-1b60f4938062-kube-api-access-5d752\") pod \"nmstate-operator-646758c888-bb7hz\" (UID: \"2c310223-ad74-4147-9aac-1b60f4938062\") " pod="openshift-nmstate/nmstate-operator-646758c888-bb7hz" Jan 31 09:17:04 crc kubenswrapper[4830]: I0131 09:17:04.163275 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d752\" (UniqueName: \"kubernetes.io/projected/2c310223-ad74-4147-9aac-1b60f4938062-kube-api-access-5d752\") pod \"nmstate-operator-646758c888-bb7hz\" (UID: \"2c310223-ad74-4147-9aac-1b60f4938062\") " pod="openshift-nmstate/nmstate-operator-646758c888-bb7hz" Jan 31 09:17:04 crc kubenswrapper[4830]: I0131 09:17:04.183128 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d752\" (UniqueName: \"kubernetes.io/projected/2c310223-ad74-4147-9aac-1b60f4938062-kube-api-access-5d752\") pod \"nmstate-operator-646758c888-bb7hz\" (UID: \"2c310223-ad74-4147-9aac-1b60f4938062\") " pod="openshift-nmstate/nmstate-operator-646758c888-bb7hz" Jan 31 09:17:04 crc kubenswrapper[4830]: I0131 09:17:04.304333 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-bb7hz" Jan 31 09:17:04 crc kubenswrapper[4830]: I0131 09:17:04.842121 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bb7hz"] Jan 31 09:17:05 crc kubenswrapper[4830]: I0131 09:17:05.539355 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-bb7hz" event={"ID":"2c310223-ad74-4147-9aac-1b60f4938062","Type":"ContainerStarted","Data":"ff43334686cbf3b4c787fd4cf714d904b1064e6994eb84b42e0399c6716eba2b"} Jan 31 09:17:07 crc kubenswrapper[4830]: I0131 09:17:07.556514 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-bb7hz" event={"ID":"2c310223-ad74-4147-9aac-1b60f4938062","Type":"ContainerStarted","Data":"5b6d1dcc0370529f1065ffef4607252024763ecf62741d74f6cf102c6175477d"} Jan 31 09:17:07 crc kubenswrapper[4830]: I0131 09:17:07.577528 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-bb7hz" podStartSLOduration=2.131499762 podStartE2EDuration="4.57748282s" podCreationTimestamp="2026-01-31 09:17:03 +0000 UTC" firstStartedPulling="2026-01-31 09:17:04.852418092 +0000 UTC m=+969.345780534" lastFinishedPulling="2026-01-31 09:17:07.29840114 +0000 UTC m=+971.791763592" observedRunningTime="2026-01-31 09:17:07.573378202 +0000 UTC m=+972.066740644" watchObservedRunningTime="2026-01-31 09:17:07.57748282 +0000 UTC m=+972.070845262" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.596062 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ppr4q"] Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.597767 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-ppr4q" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.600178 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-jctpk" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.600621 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-hw8mv"] Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.601861 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw8mv" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.604742 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.615961 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ppr4q"] Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.629975 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-hw8mv"] Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.643508 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-9wzdf"] Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.645513 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-9wzdf" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.706243 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp5rz\" (UniqueName: \"kubernetes.io/projected/a580c5e1-30c2-40b1-993d-c375cc99e2f2-kube-api-access-dp5rz\") pod \"nmstate-webhook-8474b5b9d8-hw8mv\" (UID: \"a580c5e1-30c2-40b1-993d-c375cc99e2f2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw8mv" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.706297 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sjxp\" (UniqueName: \"kubernetes.io/projected/80b52808-7bda-4187-86e4-356413c4ff68-kube-api-access-5sjxp\") pod \"nmstate-metrics-54757c584b-ppr4q\" (UID: \"80b52808-7bda-4187-86e4-356413c4ff68\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ppr4q" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.706371 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a580c5e1-30c2-40b1-993d-c375cc99e2f2-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hw8mv\" (UID: \"a580c5e1-30c2-40b1-993d-c375cc99e2f2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw8mv" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.807601 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/09ac1675-c6eb-453a-83a5-94f0a04c9665-ovs-socket\") pod \"nmstate-handler-9wzdf\" (UID: \"09ac1675-c6eb-453a-83a5-94f0a04c9665\") " pod="openshift-nmstate/nmstate-handler-9wzdf" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.807683 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm2gn\" (UniqueName: \"kubernetes.io/projected/09ac1675-c6eb-453a-83a5-94f0a04c9665-kube-api-access-dm2gn\") pod \"nmstate-handler-9wzdf\" (UID: \"09ac1675-c6eb-453a-83a5-94f0a04c9665\") " pod="openshift-nmstate/nmstate-handler-9wzdf" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.807718 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp5rz\" (UniqueName: \"kubernetes.io/projected/a580c5e1-30c2-40b1-993d-c375cc99e2f2-kube-api-access-dp5rz\") pod \"nmstate-webhook-8474b5b9d8-hw8mv\" (UID: \"a580c5e1-30c2-40b1-993d-c375cc99e2f2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw8mv" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.807747 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sjxp\" (UniqueName: \"kubernetes.io/projected/80b52808-7bda-4187-86e4-356413c4ff68-kube-api-access-5sjxp\") pod \"nmstate-metrics-54757c584b-ppr4q\" (UID: \"80b52808-7bda-4187-86e4-356413c4ff68\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ppr4q" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.809605 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a580c5e1-30c2-40b1-993d-c375cc99e2f2-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hw8mv\" (UID: \"a580c5e1-30c2-40b1-993d-c375cc99e2f2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw8mv" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.809804 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/09ac1675-c6eb-453a-83a5-94f0a04c9665-dbus-socket\") pod \"nmstate-handler-9wzdf\" (UID: \"09ac1675-c6eb-453a-83a5-94f0a04c9665\") " pod="openshift-nmstate/nmstate-handler-9wzdf" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.809850 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/09ac1675-c6eb-453a-83a5-94f0a04c9665-nmstate-lock\") pod \"nmstate-handler-9wzdf\" (UID: \"09ac1675-c6eb-453a-83a5-94f0a04c9665\") " pod="openshift-nmstate/nmstate-handler-9wzdf" Jan 31 09:17:08 crc kubenswrapper[4830]: E0131 09:17:08.810143 4830 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 31 09:17:08 crc kubenswrapper[4830]: E0131 09:17:08.810217 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a580c5e1-30c2-40b1-993d-c375cc99e2f2-tls-key-pair podName:a580c5e1-30c2-40b1-993d-c375cc99e2f2 nodeName:}" failed. No retries permitted until 2026-01-31 09:17:09.310192279 +0000 UTC m=+973.803554721 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/a580c5e1-30c2-40b1-993d-c375cc99e2f2-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-hw8mv" (UID: "a580c5e1-30c2-40b1-993d-c375cc99e2f2") : secret "openshift-nmstate-webhook" not found Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.828343 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-t4b58"] Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.836798 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp5rz\" (UniqueName: \"kubernetes.io/projected/a580c5e1-30c2-40b1-993d-c375cc99e2f2-kube-api-access-dp5rz\") pod \"nmstate-webhook-8474b5b9d8-hw8mv\" (UID: \"a580c5e1-30c2-40b1-993d-c375cc99e2f2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw8mv" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.837520 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t4b58" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.839879 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.840200 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-xbh7z" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.840574 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.860796 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sjxp\" (UniqueName: \"kubernetes.io/projected/80b52808-7bda-4187-86e4-356413c4ff68-kube-api-access-5sjxp\") pod \"nmstate-metrics-54757c584b-ppr4q\" (UID: \"80b52808-7bda-4187-86e4-356413c4ff68\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ppr4q" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.873077 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-t4b58"] Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.911874 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0782dc69-7ca6-4a3c-898b-a928694c4810-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-t4b58\" (UID: \"0782dc69-7ca6-4a3c-898b-a928694c4810\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t4b58" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.911946 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/09ac1675-c6eb-453a-83a5-94f0a04c9665-dbus-socket\") pod \"nmstate-handler-9wzdf\" (UID: \"09ac1675-c6eb-453a-83a5-94f0a04c9665\") " pod="openshift-nmstate/nmstate-handler-9wzdf" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.911977 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/09ac1675-c6eb-453a-83a5-94f0a04c9665-nmstate-lock\") pod \"nmstate-handler-9wzdf\" (UID: \"09ac1675-c6eb-453a-83a5-94f0a04c9665\") " pod="openshift-nmstate/nmstate-handler-9wzdf" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.912006 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hsrq\" (UniqueName: \"kubernetes.io/projected/0782dc69-7ca6-4a3c-898b-a928694c4810-kube-api-access-4hsrq\") pod \"nmstate-console-plugin-7754f76f8b-t4b58\" (UID: \"0782dc69-7ca6-4a3c-898b-a928694c4810\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t4b58" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.912027 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/09ac1675-c6eb-453a-83a5-94f0a04c9665-ovs-socket\") pod \"nmstate-handler-9wzdf\" (UID: \"09ac1675-c6eb-453a-83a5-94f0a04c9665\") " pod="openshift-nmstate/nmstate-handler-9wzdf" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.912078 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm2gn\" (UniqueName: \"kubernetes.io/projected/09ac1675-c6eb-453a-83a5-94f0a04c9665-kube-api-access-dm2gn\") pod \"nmstate-handler-9wzdf\" (UID: 
\"09ac1675-c6eb-453a-83a5-94f0a04c9665\") " pod="openshift-nmstate/nmstate-handler-9wzdf" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.912123 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0782dc69-7ca6-4a3c-898b-a928694c4810-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-t4b58\" (UID: \"0782dc69-7ca6-4a3c-898b-a928694c4810\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t4b58" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.912582 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/09ac1675-c6eb-453a-83a5-94f0a04c9665-dbus-socket\") pod \"nmstate-handler-9wzdf\" (UID: \"09ac1675-c6eb-453a-83a5-94f0a04c9665\") " pod="openshift-nmstate/nmstate-handler-9wzdf" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.912621 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/09ac1675-c6eb-453a-83a5-94f0a04c9665-nmstate-lock\") pod \"nmstate-handler-9wzdf\" (UID: \"09ac1675-c6eb-453a-83a5-94f0a04c9665\") " pod="openshift-nmstate/nmstate-handler-9wzdf" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.912648 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/09ac1675-c6eb-453a-83a5-94f0a04c9665-ovs-socket\") pod \"nmstate-handler-9wzdf\" (UID: \"09ac1675-c6eb-453a-83a5-94f0a04c9665\") " pod="openshift-nmstate/nmstate-handler-9wzdf" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.926993 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-ppr4q" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.964453 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm2gn\" (UniqueName: \"kubernetes.io/projected/09ac1675-c6eb-453a-83a5-94f0a04c9665-kube-api-access-dm2gn\") pod \"nmstate-handler-9wzdf\" (UID: \"09ac1675-c6eb-453a-83a5-94f0a04c9665\") " pod="openshift-nmstate/nmstate-handler-9wzdf" Jan 31 09:17:08 crc kubenswrapper[4830]: I0131 09:17:08.986218 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-9wzdf" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.017937 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0782dc69-7ca6-4a3c-898b-a928694c4810-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-t4b58\" (UID: \"0782dc69-7ca6-4a3c-898b-a928694c4810\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t4b58" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.018012 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hsrq\" (UniqueName: \"kubernetes.io/projected/0782dc69-7ca6-4a3c-898b-a928694c4810-kube-api-access-4hsrq\") pod \"nmstate-console-plugin-7754f76f8b-t4b58\" (UID: \"0782dc69-7ca6-4a3c-898b-a928694c4810\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t4b58" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.018078 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0782dc69-7ca6-4a3c-898b-a928694c4810-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-t4b58\" (UID: \"0782dc69-7ca6-4a3c-898b-a928694c4810\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t4b58" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.019357 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0782dc69-7ca6-4a3c-898b-a928694c4810-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-t4b58\" (UID: \"0782dc69-7ca6-4a3c-898b-a928694c4810\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t4b58" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.025410 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0782dc69-7ca6-4a3c-898b-a928694c4810-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-t4b58\" (UID: \"0782dc69-7ca6-4a3c-898b-a928694c4810\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t4b58" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.048847 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hsrq\" (UniqueName: \"kubernetes.io/projected/0782dc69-7ca6-4a3c-898b-a928694c4810-kube-api-access-4hsrq\") pod \"nmstate-console-plugin-7754f76f8b-t4b58\" (UID: \"0782dc69-7ca6-4a3c-898b-a928694c4810\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t4b58" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.209683 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-757d775c7-jlwx2"] Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.214289 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-757d775c7-jlwx2" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.226588 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea4800d4-055a-4c40-8209-81998e951b16-trusted-ca-bundle\") pod \"console-757d775c7-jlwx2\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " pod="openshift-console/console-757d775c7-jlwx2" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.226684 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea4800d4-055a-4c40-8209-81998e951b16-console-serving-cert\") pod \"console-757d775c7-jlwx2\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " pod="openshift-console/console-757d775c7-jlwx2" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.226705 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea4800d4-055a-4c40-8209-81998e951b16-console-oauth-config\") pod \"console-757d775c7-jlwx2\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " pod="openshift-console/console-757d775c7-jlwx2" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.226731 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea4800d4-055a-4c40-8209-81998e951b16-oauth-serving-cert\") pod \"console-757d775c7-jlwx2\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " pod="openshift-console/console-757d775c7-jlwx2" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.226772 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52v4d\" (UniqueName: \"kubernetes.io/projected/ea4800d4-055a-4c40-8209-81998e951b16-kube-api-access-52v4d\") pod \"console-757d775c7-jlwx2\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " pod="openshift-console/console-757d775c7-jlwx2" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.226831 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea4800d4-055a-4c40-8209-81998e951b16-service-ca\") pod \"console-757d775c7-jlwx2\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " pod="openshift-console/console-757d775c7-jlwx2" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.226867 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea4800d4-055a-4c40-8209-81998e951b16-console-config\") pod \"console-757d775c7-jlwx2\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " pod="openshift-console/console-757d775c7-jlwx2" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.227042 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t4b58" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.270368 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-757d775c7-jlwx2"] Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.329367 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52v4d\" (UniqueName: \"kubernetes.io/projected/ea4800d4-055a-4c40-8209-81998e951b16-kube-api-access-52v4d\") pod \"console-757d775c7-jlwx2\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " pod="openshift-console/console-757d775c7-jlwx2" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.329418 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a580c5e1-30c2-40b1-993d-c375cc99e2f2-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hw8mv\" (UID: \"a580c5e1-30c2-40b1-993d-c375cc99e2f2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw8mv" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.329526 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea4800d4-055a-4c40-8209-81998e951b16-service-ca\") pod \"console-757d775c7-jlwx2\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " pod="openshift-console/console-757d775c7-jlwx2" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.329622 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea4800d4-055a-4c40-8209-81998e951b16-console-config\") pod \"console-757d775c7-jlwx2\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " pod="openshift-console/console-757d775c7-jlwx2" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.329660 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea4800d4-055a-4c40-8209-81998e951b16-trusted-ca-bundle\") pod \"console-757d775c7-jlwx2\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " pod="openshift-console/console-757d775c7-jlwx2" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.329813 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea4800d4-055a-4c40-8209-81998e951b16-console-serving-cert\") pod \"console-757d775c7-jlwx2\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " pod="openshift-console/console-757d775c7-jlwx2" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.329832 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea4800d4-055a-4c40-8209-81998e951b16-console-oauth-config\") pod \"console-757d775c7-jlwx2\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " pod="openshift-console/console-757d775c7-jlwx2" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.329857 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea4800d4-055a-4c40-8209-81998e951b16-oauth-serving-cert\") pod \"console-757d775c7-jlwx2\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " pod="openshift-console/console-757d775c7-jlwx2" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.331111 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea4800d4-055a-4c40-8209-81998e951b16-oauth-serving-cert\") pod \"console-757d775c7-jlwx2\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " pod="openshift-console/console-757d775c7-jlwx2" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.331931 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea4800d4-055a-4c40-8209-81998e951b16-service-ca\") pod \"console-757d775c7-jlwx2\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " pod="openshift-console/console-757d775c7-jlwx2" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.332579 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea4800d4-055a-4c40-8209-81998e951b16-console-config\") pod \"console-757d775c7-jlwx2\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " pod="openshift-console/console-757d775c7-jlwx2" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.337250 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea4800d4-055a-4c40-8209-81998e951b16-trusted-ca-bundle\") pod \"console-757d775c7-jlwx2\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " pod="openshift-console/console-757d775c7-jlwx2" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.342907 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a580c5e1-30c2-40b1-993d-c375cc99e2f2-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hw8mv\" (UID: \"a580c5e1-30c2-40b1-993d-c375cc99e2f2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw8mv" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.343578 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea4800d4-055a-4c40-8209-81998e951b16-console-serving-cert\") pod \"console-757d775c7-jlwx2\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " pod="openshift-console/console-757d775c7-jlwx2" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.345907 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea4800d4-055a-4c40-8209-81998e951b16-console-oauth-config\") pod \"console-757d775c7-jlwx2\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " pod="openshift-console/console-757d775c7-jlwx2" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.361363 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52v4d\" (UniqueName: \"kubernetes.io/projected/ea4800d4-055a-4c40-8209-81998e951b16-kube-api-access-52v4d\") pod \"console-757d775c7-jlwx2\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " pod="openshift-console/console-757d775c7-jlwx2" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.551874 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw8mv" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.573450 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-9wzdf" event={"ID":"09ac1675-c6eb-453a-83a5-94f0a04c9665","Type":"ContainerStarted","Data":"2cde97d87159ecb625dc79d45d1aa6a9eaa268bce35f70265915dc1959b338d3"} Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.599260 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-757d775c7-jlwx2" Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.643445 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ppr4q"] Jan 31 09:17:09 crc kubenswrapper[4830]: W0131 09:17:09.659172 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80b52808_7bda_4187_86e4_356413c4ff68.slice/crio-42ef4294fe90fc78637de3a42c48271dc89ccbf69f940f4a7437b67fed178b48 WatchSource:0}: Error finding container 42ef4294fe90fc78637de3a42c48271dc89ccbf69f940f4a7437b67fed178b48: Status 404 returned error can't find the container with id 42ef4294fe90fc78637de3a42c48271dc89ccbf69f940f4a7437b67fed178b48 Jan 31 09:17:09 crc kubenswrapper[4830]: I0131 09:17:09.814106 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-t4b58"] Jan 31 09:17:10 crc kubenswrapper[4830]: I0131 09:17:10.026560 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-hw8mv"] Jan 31 09:17:10 crc kubenswrapper[4830]: I0131 09:17:10.205505 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-757d775c7-jlwx2"] Jan 31 09:17:10 crc kubenswrapper[4830]: W0131 09:17:10.215669 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea4800d4_055a_4c40_8209_81998e951b16.slice/crio-2d4488494ca7962993e958682574a7876f0eb8e1d9bace1e9fc9aa935926ab6d WatchSource:0}: Error finding container 2d4488494ca7962993e958682574a7876f0eb8e1d9bace1e9fc9aa935926ab6d: Status 404 returned error can't find the container with id 2d4488494ca7962993e958682574a7876f0eb8e1d9bace1e9fc9aa935926ab6d Jan 31 09:17:10 crc kubenswrapper[4830]: I0131 09:17:10.587079 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-757d775c7-jlwx2" event={"ID":"ea4800d4-055a-4c40-8209-81998e951b16","Type":"ContainerStarted","Data":"1eeb5600d7926e689472353f8abc0a4f04b6ce4979b11deb5a0fa88d521b6df5"} Jan 31 09:17:10 crc kubenswrapper[4830]: I0131 09:17:10.587553 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-757d775c7-jlwx2" event={"ID":"ea4800d4-055a-4c40-8209-81998e951b16","Type":"ContainerStarted","Data":"2d4488494ca7962993e958682574a7876f0eb8e1d9bace1e9fc9aa935926ab6d"} Jan 31 09:17:10 crc kubenswrapper[4830]: I0131 09:17:10.589629 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t4b58" event={"ID":"0782dc69-7ca6-4a3c-898b-a928694c4810","Type":"ContainerStarted","Data":"f3b5f0995a032177fc7936afb6791f1312c365c6010e0c7363aeb3bc026ae43f"} Jan 31 09:17:10 crc kubenswrapper[4830]: I0131 09:17:10.591442 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw8mv" 
event={"ID":"a580c5e1-30c2-40b1-993d-c375cc99e2f2","Type":"ContainerStarted","Data":"df01afdf4dbdb8e9c48bb898dc0f29167b8849a216812fdddaa2bb3ed0080aba"} Jan 31 09:17:10 crc kubenswrapper[4830]: I0131 09:17:10.593466 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ppr4q" event={"ID":"80b52808-7bda-4187-86e4-356413c4ff68","Type":"ContainerStarted","Data":"42ef4294fe90fc78637de3a42c48271dc89ccbf69f940f4a7437b67fed178b48"} Jan 31 09:17:10 crc kubenswrapper[4830]: I0131 09:17:10.614776 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-757d775c7-jlwx2" podStartSLOduration=1.614736811 podStartE2EDuration="1.614736811s" podCreationTimestamp="2026-01-31 09:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:17:10.610644514 +0000 UTC m=+975.104006976" watchObservedRunningTime="2026-01-31 09:17:10.614736811 +0000 UTC m=+975.108099263" Jan 31 09:17:13 crc kubenswrapper[4830]: I0131 09:17:13.626168 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw8mv" event={"ID":"a580c5e1-30c2-40b1-993d-c375cc99e2f2","Type":"ContainerStarted","Data":"952401b704c81e34aa3f6fc85d8e87cd0ea02a53a52deb4e6bc4e0daa3766638"} Jan 31 09:17:13 crc kubenswrapper[4830]: I0131 09:17:13.626997 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw8mv" Jan 31 09:17:13 crc kubenswrapper[4830]: I0131 09:17:13.629296 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ppr4q" event={"ID":"80b52808-7bda-4187-86e4-356413c4ff68","Type":"ContainerStarted","Data":"cf07796fdca6333c0c9bbabf2ab613781df9f218cea2e328f20ac446f6700e50"} Jan 31 09:17:13 crc kubenswrapper[4830]: I0131 09:17:13.631117 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-9wzdf" event={"ID":"09ac1675-c6eb-453a-83a5-94f0a04c9665","Type":"ContainerStarted","Data":"5bc0b63596448ef93237bc97838d8384621da2684be44b362cdc4a82f3e8d342"} Jan 31 09:17:13 crc kubenswrapper[4830]: I0131 09:17:13.631239 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-9wzdf" Jan 31 09:17:13 crc kubenswrapper[4830]: I0131 09:17:13.633607 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t4b58" event={"ID":"0782dc69-7ca6-4a3c-898b-a928694c4810","Type":"ContainerStarted","Data":"cb7c0c26363ec0d27a5cc55522054f481d899eb37a4a1a4c61a5ae51941e94ce"} Jan 31 09:17:13 crc kubenswrapper[4830]: I0131 09:17:13.656938 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw8mv" podStartSLOduration=3.223126853 podStartE2EDuration="5.656887383s" podCreationTimestamp="2026-01-31 09:17:08 +0000 UTC" firstStartedPulling="2026-01-31 09:17:10.034019815 +0000 UTC m=+974.527382247" lastFinishedPulling="2026-01-31 09:17:12.467780335 +0000 UTC m=+976.961142777" observedRunningTime="2026-01-31 09:17:13.646014211 +0000 UTC m=+978.139376653" watchObservedRunningTime="2026-01-31 09:17:13.656887383 +0000 UTC m=+978.150249815" Jan 31 09:17:13 crc kubenswrapper[4830]: I0131 09:17:13.674628 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t4b58" 
podStartSLOduration=3.090623818 podStartE2EDuration="5.674606342s" podCreationTimestamp="2026-01-31 09:17:08 +0000 UTC" firstStartedPulling="2026-01-31 09:17:09.868904323 +0000 UTC m=+974.362266765" lastFinishedPulling="2026-01-31 09:17:12.452886847 +0000 UTC m=+976.946249289" observedRunningTime="2026-01-31 09:17:13.669006351 +0000 UTC m=+978.162368823" watchObservedRunningTime="2026-01-31 09:17:13.674606342 +0000 UTC m=+978.167968784"
Jan 31 09:17:13 crc kubenswrapper[4830]: I0131 09:17:13.695638 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-9wzdf" podStartSLOduration=2.328511042 podStartE2EDuration="5.695613355s" podCreationTimestamp="2026-01-31 09:17:08 +0000 UTC" firstStartedPulling="2026-01-31 09:17:09.086204686 +0000 UTC m=+973.579567128" lastFinishedPulling="2026-01-31 09:17:12.453306999 +0000 UTC m=+976.946669441" observedRunningTime="2026-01-31 09:17:13.689730076 +0000 UTC m=+978.183092548" watchObservedRunningTime="2026-01-31 09:17:13.695613355 +0000 UTC m=+978.188975807"
Jan 31 09:17:15 crc kubenswrapper[4830]: I0131 09:17:15.656658 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ppr4q" event={"ID":"80b52808-7bda-4187-86e4-356413c4ff68","Type":"ContainerStarted","Data":"92c2479d55ae4500f952cc265f80a2ca6fa554d0586bde09fc3cc52ed1111563"}
Jan 31 09:17:15 crc kubenswrapper[4830]: I0131 09:17:15.691760 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-ppr4q" podStartSLOduration=2.268557459 podStartE2EDuration="7.691710637s" podCreationTimestamp="2026-01-31 09:17:08 +0000 UTC" firstStartedPulling="2026-01-31 09:17:09.663636688 +0000 UTC m=+974.156999130" lastFinishedPulling="2026-01-31 09:17:15.086789866 +0000 UTC m=+979.580152308" observedRunningTime="2026-01-31 09:17:15.682963816 +0000 UTC m=+980.176326288" watchObservedRunningTime="2026-01-31 09:17:15.691710637 +0000 UTC m=+980.185073079"
Jan 31 09:17:19 crc kubenswrapper[4830]: I0131 09:17:19.018270 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-9wzdf"
Jan 31 09:17:19 crc kubenswrapper[4830]: I0131 09:17:19.600002 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-757d775c7-jlwx2"
Jan 31 09:17:19 crc kubenswrapper[4830]: I0131 09:17:19.600101 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-757d775c7-jlwx2"
Jan 31 09:17:19 crc kubenswrapper[4830]: I0131 09:17:19.604687 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-757d775c7-jlwx2"
Jan 31 09:17:19 crc kubenswrapper[4830]: I0131 09:17:19.694392 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-757d775c7-jlwx2"
Jan 31 09:17:19 crc kubenswrapper[4830]: I0131 09:17:19.746867 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-76f59b595d-84k99"]
Jan 31 09:17:29 crc kubenswrapper[4830]: I0131 09:17:29.563612 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw8mv"
Jan 31 09:17:44 crc kubenswrapper[4830]: I0131 09:17:44.845245 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-76f59b595d-84k99" podUID="ac8533f2-4007-40aa-b933-fe2ec3b7d6bf" containerName="console" containerID="cri-o://8d3814bb96c222c8a413e07701c8f3bc2c775210ad99b7c8fa2b0362835e846b" gracePeriod=15
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.445660 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76f59b595d-84k99_ac8533f2-4007-40aa-b933-fe2ec3b7d6bf/console/0.log"
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.446127 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76f59b595d-84k99"
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.484627 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qjxz\" (UniqueName: \"kubernetes.io/projected/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-kube-api-access-5qjxz\") pod \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") "
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.484766 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-service-ca\") pod \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") "
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.484840 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-console-serving-cert\") pod \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") "
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.484879 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-trusted-ca-bundle\") pod \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") "
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.484906 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-console-oauth-config\") pod \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") "
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.485073 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-oauth-serving-cert\") pod \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") "
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.485145 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-console-config\") pod \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\" (UID: \"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf\") "
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.486468 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-service-ca" (OuterVolumeSpecName: "service-ca") pod "ac8533f2-4007-40aa-b933-fe2ec3b7d6bf" (UID: "ac8533f2-4007-40aa-b933-fe2ec3b7d6bf"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.486665 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-console-config" (OuterVolumeSpecName: "console-config") pod "ac8533f2-4007-40aa-b933-fe2ec3b7d6bf" (UID: "ac8533f2-4007-40aa-b933-fe2ec3b7d6bf"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.487096 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ac8533f2-4007-40aa-b933-fe2ec3b7d6bf" (UID: "ac8533f2-4007-40aa-b933-fe2ec3b7d6bf"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.487165 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ac8533f2-4007-40aa-b933-fe2ec3b7d6bf" (UID: "ac8533f2-4007-40aa-b933-fe2ec3b7d6bf"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.492098 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-kube-api-access-5qjxz" (OuterVolumeSpecName: "kube-api-access-5qjxz") pod "ac8533f2-4007-40aa-b933-fe2ec3b7d6bf" (UID: "ac8533f2-4007-40aa-b933-fe2ec3b7d6bf"). InnerVolumeSpecName "kube-api-access-5qjxz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.492319 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ac8533f2-4007-40aa-b933-fe2ec3b7d6bf" (UID: "ac8533f2-4007-40aa-b933-fe2ec3b7d6bf"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.495065 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ac8533f2-4007-40aa-b933-fe2ec3b7d6bf" (UID: "ac8533f2-4007-40aa-b933-fe2ec3b7d6bf"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.588314 4830 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.588349 4830 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-console-config\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.588360 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qjxz\" (UniqueName: \"kubernetes.io/projected/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-kube-api-access-5qjxz\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.588373 4830 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-service-ca\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.588384 4830 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.588396 4830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.588409 4830 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.917049 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76f59b595d-84k99_ac8533f2-4007-40aa-b933-fe2ec3b7d6bf/console/0.log"
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.917107 4830 generic.go:334] "Generic (PLEG): container finished" podID="ac8533f2-4007-40aa-b933-fe2ec3b7d6bf" containerID="8d3814bb96c222c8a413e07701c8f3bc2c775210ad99b7c8fa2b0362835e846b" exitCode=2
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.917141 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76f59b595d-84k99" event={"ID":"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf","Type":"ContainerDied","Data":"8d3814bb96c222c8a413e07701c8f3bc2c775210ad99b7c8fa2b0362835e846b"}
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.917208 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76f59b595d-84k99" event={"ID":"ac8533f2-4007-40aa-b933-fe2ec3b7d6bf","Type":"ContainerDied","Data":"57a857b3e24c4a04d04480dfebb213f51d63617c1c8cbcb5501b63c589b30d69"}
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.917209 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76f59b595d-84k99"
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.917230 4830 scope.go:117] "RemoveContainer" containerID="8d3814bb96c222c8a413e07701c8f3bc2c775210ad99b7c8fa2b0362835e846b"
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.942037 4830 scope.go:117] "RemoveContainer" containerID="8d3814bb96c222c8a413e07701c8f3bc2c775210ad99b7c8fa2b0362835e846b"
Jan 31 09:17:45 crc kubenswrapper[4830]: E0131 09:17:45.946479 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d3814bb96c222c8a413e07701c8f3bc2c775210ad99b7c8fa2b0362835e846b\": container with ID starting with 8d3814bb96c222c8a413e07701c8f3bc2c775210ad99b7c8fa2b0362835e846b not found: ID does not exist" containerID="8d3814bb96c222c8a413e07701c8f3bc2c775210ad99b7c8fa2b0362835e846b"
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.946522 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d3814bb96c222c8a413e07701c8f3bc2c775210ad99b7c8fa2b0362835e846b"} err="failed to get container status \"8d3814bb96c222c8a413e07701c8f3bc2c775210ad99b7c8fa2b0362835e846b\": rpc error: code = NotFound desc = could not find container \"8d3814bb96c222c8a413e07701c8f3bc2c775210ad99b7c8fa2b0362835e846b\": container with ID starting with 8d3814bb96c222c8a413e07701c8f3bc2c775210ad99b7c8fa2b0362835e846b not found: ID does not exist"
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.969739 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-76f59b595d-84k99"]
Jan 31 09:17:45 crc kubenswrapper[4830]: I0131 09:17:45.977210 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-76f59b595d-84k99"]
Jan 31 09:17:46 crc kubenswrapper[4830]: I0131 09:17:46.328794 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac8533f2-4007-40aa-b933-fe2ec3b7d6bf" path="/var/lib/kubelet/pods/ac8533f2-4007-40aa-b933-fe2ec3b7d6bf/volumes"
Jan 31 09:17:48 crc kubenswrapper[4830]: I0131 09:17:48.707909 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln"]
Jan 31 09:17:48 crc kubenswrapper[4830]: E0131 09:17:48.710877 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac8533f2-4007-40aa-b933-fe2ec3b7d6bf" containerName="console"
Jan 31 09:17:48 crc kubenswrapper[4830]: I0131 09:17:48.711363 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac8533f2-4007-40aa-b933-fe2ec3b7d6bf" containerName="console"
Jan 31 09:17:48 crc kubenswrapper[4830]: I0131 09:17:48.711616 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac8533f2-4007-40aa-b933-fe2ec3b7d6bf" containerName="console"
Jan 31 09:17:48 crc kubenswrapper[4830]: I0131 09:17:48.713284 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln"
Jan 31 09:17:48 crc kubenswrapper[4830]: I0131 09:17:48.715856 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 31 09:17:48 crc kubenswrapper[4830]: I0131 09:17:48.720638 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln"]
Jan 31 09:17:48 crc kubenswrapper[4830]: I0131 09:17:48.762040 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbwth\" (UniqueName: \"kubernetes.io/projected/6821ef6a-9d75-42b0-8d20-1ebbbabd7896-kube-api-access-sbwth\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln\" (UID: \"6821ef6a-9d75-42b0-8d20-1ebbbabd7896\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln"
Jan 31 09:17:48 crc kubenswrapper[4830]: I0131 09:17:48.762128 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6821ef6a-9d75-42b0-8d20-1ebbbabd7896-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln\" (UID: \"6821ef6a-9d75-42b0-8d20-1ebbbabd7896\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln"
Jan 31 09:17:48 crc kubenswrapper[4830]: I0131 09:17:48.762193 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6821ef6a-9d75-42b0-8d20-1ebbbabd7896-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln\" (UID: \"6821ef6a-9d75-42b0-8d20-1ebbbabd7896\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln"
Jan 31 09:17:48 crc kubenswrapper[4830]: I0131 09:17:48.865008 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6821ef6a-9d75-42b0-8d20-1ebbbabd7896-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln\" (UID: \"6821ef6a-9d75-42b0-8d20-1ebbbabd7896\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln"
Jan 31 09:17:48 crc kubenswrapper[4830]: I0131 09:17:48.865185 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbwth\" (UniqueName: \"kubernetes.io/projected/6821ef6a-9d75-42b0-8d20-1ebbbabd7896-kube-api-access-sbwth\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln\" (UID: \"6821ef6a-9d75-42b0-8d20-1ebbbabd7896\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln"
Jan 31 09:17:48 crc kubenswrapper[4830]: I0131 09:17:48.865229 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6821ef6a-9d75-42b0-8d20-1ebbbabd7896-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln\" (UID: \"6821ef6a-9d75-42b0-8d20-1ebbbabd7896\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln"
Jan 31 09:17:48 crc kubenswrapper[4830]: I0131 09:17:48.866003 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6821ef6a-9d75-42b0-8d20-1ebbbabd7896-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln\" (UID: \"6821ef6a-9d75-42b0-8d20-1ebbbabd7896\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln"
Jan 31 09:17:48 crc kubenswrapper[4830]: I0131 09:17:48.866031 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6821ef6a-9d75-42b0-8d20-1ebbbabd7896-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln\" (UID: \"6821ef6a-9d75-42b0-8d20-1ebbbabd7896\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln"
Jan 31 09:17:48 crc kubenswrapper[4830]: I0131 09:17:48.894138 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbwth\" (UniqueName: \"kubernetes.io/projected/6821ef6a-9d75-42b0-8d20-1ebbbabd7896-kube-api-access-sbwth\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln\" (UID: \"6821ef6a-9d75-42b0-8d20-1ebbbabd7896\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln"
Jan 31 09:17:49 crc kubenswrapper[4830]: I0131 09:17:49.033267 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln"
Jan 31 09:17:49 crc kubenswrapper[4830]: I0131 09:17:49.534922 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln"]
Jan 31 09:17:49 crc kubenswrapper[4830]: I0131 09:17:49.951406 4830 generic.go:334] "Generic (PLEG): container finished" podID="6821ef6a-9d75-42b0-8d20-1ebbbabd7896" containerID="6eaa494f044fdd2a2c9acef7aeeeca234ba8a13d977c4a84ae444b57611d6e5e" exitCode=0
Jan 31 09:17:49 crc kubenswrapper[4830]: I0131 09:17:49.951485 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln" event={"ID":"6821ef6a-9d75-42b0-8d20-1ebbbabd7896","Type":"ContainerDied","Data":"6eaa494f044fdd2a2c9acef7aeeeca234ba8a13d977c4a84ae444b57611d6e5e"}
Jan 31 09:17:49 crc kubenswrapper[4830]: I0131 09:17:49.951542 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln" event={"ID":"6821ef6a-9d75-42b0-8d20-1ebbbabd7896","Type":"ContainerStarted","Data":"31cd5d3b606e46fb6098292907070bc0c1a90470e7991ebda8cd5ab18fdb5d85"}
Jan 31 09:17:49 crc kubenswrapper[4830]: I0131 09:17:49.954698 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 31 09:17:51 crc kubenswrapper[4830]: I0131 09:17:51.974529 4830 generic.go:334] "Generic (PLEG): container finished" podID="6821ef6a-9d75-42b0-8d20-1ebbbabd7896" containerID="7f7ddd6fc47a9aeec88cf63ee23cfac55559b50cf4e597d5351de26c133d8a75" exitCode=0
Jan 31 09:17:51 crc kubenswrapper[4830]: I0131 09:17:51.974627 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln" event={"ID":"6821ef6a-9d75-42b0-8d20-1ebbbabd7896","Type":"ContainerDied","Data":"7f7ddd6fc47a9aeec88cf63ee23cfac55559b50cf4e597d5351de26c133d8a75"}
Jan 31 09:17:52 crc kubenswrapper[4830]: I0131 09:17:52.994404 4830 generic.go:334] "Generic (PLEG): container finished" podID="6821ef6a-9d75-42b0-8d20-1ebbbabd7896" containerID="8823a62d62044c263d80eff2d6da9cea4645fef2de795c95175ac5fee330b7f7" exitCode=0
Jan 31 09:17:52 crc kubenswrapper[4830]: I0131 09:17:52.994474 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln" event={"ID":"6821ef6a-9d75-42b0-8d20-1ebbbabd7896","Type":"ContainerDied","Data":"8823a62d62044c263d80eff2d6da9cea4645fef2de795c95175ac5fee330b7f7"}
Jan 31 09:17:54 crc kubenswrapper[4830]: I0131 09:17:54.311527 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln"
Jan 31 09:17:54 crc kubenswrapper[4830]: I0131 09:17:54.397487 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6821ef6a-9d75-42b0-8d20-1ebbbabd7896-bundle\") pod \"6821ef6a-9d75-42b0-8d20-1ebbbabd7896\" (UID: \"6821ef6a-9d75-42b0-8d20-1ebbbabd7896\") "
Jan 31 09:17:54 crc kubenswrapper[4830]: I0131 09:17:54.397683 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbwth\" (UniqueName: \"kubernetes.io/projected/6821ef6a-9d75-42b0-8d20-1ebbbabd7896-kube-api-access-sbwth\") pod \"6821ef6a-9d75-42b0-8d20-1ebbbabd7896\" (UID: \"6821ef6a-9d75-42b0-8d20-1ebbbabd7896\") "
Jan 31 09:17:54 crc kubenswrapper[4830]: I0131 09:17:54.397760 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6821ef6a-9d75-42b0-8d20-1ebbbabd7896-util\") pod \"6821ef6a-9d75-42b0-8d20-1ebbbabd7896\" (UID: \"6821ef6a-9d75-42b0-8d20-1ebbbabd7896\") "
Jan 31 09:17:54 crc kubenswrapper[4830]: I0131 09:17:54.399800 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6821ef6a-9d75-42b0-8d20-1ebbbabd7896-bundle" (OuterVolumeSpecName: "bundle") pod "6821ef6a-9d75-42b0-8d20-1ebbbabd7896" (UID: "6821ef6a-9d75-42b0-8d20-1ebbbabd7896"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:17:54 crc kubenswrapper[4830]: I0131 09:17:54.405898 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6821ef6a-9d75-42b0-8d20-1ebbbabd7896-kube-api-access-sbwth" (OuterVolumeSpecName: "kube-api-access-sbwth") pod "6821ef6a-9d75-42b0-8d20-1ebbbabd7896" (UID: "6821ef6a-9d75-42b0-8d20-1ebbbabd7896"). InnerVolumeSpecName "kube-api-access-sbwth". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:17:54 crc kubenswrapper[4830]: I0131 09:17:54.414252 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6821ef6a-9d75-42b0-8d20-1ebbbabd7896-util" (OuterVolumeSpecName: "util") pod "6821ef6a-9d75-42b0-8d20-1ebbbabd7896" (UID: "6821ef6a-9d75-42b0-8d20-1ebbbabd7896"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:17:54 crc kubenswrapper[4830]: I0131 09:17:54.499944 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbwth\" (UniqueName: \"kubernetes.io/projected/6821ef6a-9d75-42b0-8d20-1ebbbabd7896-kube-api-access-sbwth\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:54 crc kubenswrapper[4830]: I0131 09:17:54.500002 4830 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6821ef6a-9d75-42b0-8d20-1ebbbabd7896-util\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:54 crc kubenswrapper[4830]: I0131 09:17:54.500012 4830 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6821ef6a-9d75-42b0-8d20-1ebbbabd7896-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:55 crc kubenswrapper[4830]: I0131 09:17:55.015349 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln" event={"ID":"6821ef6a-9d75-42b0-8d20-1ebbbabd7896","Type":"ContainerDied","Data":"31cd5d3b606e46fb6098292907070bc0c1a90470e7991ebda8cd5ab18fdb5d85"}
Jan 31 09:17:55 crc kubenswrapper[4830]: I0131 09:17:55.015396 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln"
Jan 31 09:17:55 crc kubenswrapper[4830]: I0131 09:17:55.015403 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31cd5d3b606e46fb6098292907070bc0c1a90470e7991ebda8cd5ab18fdb5d85"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.300910 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k"]
Jan 31 09:18:03 crc kubenswrapper[4830]: E0131 09:18:03.302147 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6821ef6a-9d75-42b0-8d20-1ebbbabd7896" containerName="pull"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.302166 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6821ef6a-9d75-42b0-8d20-1ebbbabd7896" containerName="pull"
Jan 31 09:18:03 crc kubenswrapper[4830]: E0131 09:18:03.302183 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6821ef6a-9d75-42b0-8d20-1ebbbabd7896" containerName="extract"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.302194 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6821ef6a-9d75-42b0-8d20-1ebbbabd7896" containerName="extract"
Jan 31 09:18:03 crc kubenswrapper[4830]: E0131 09:18:03.302220 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6821ef6a-9d75-42b0-8d20-1ebbbabd7896" containerName="util"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.302229 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6821ef6a-9d75-42b0-8d20-1ebbbabd7896" containerName="util"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.302403 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="6821ef6a-9d75-42b0-8d20-1ebbbabd7896" containerName="extract"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.303288 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.308504 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.308778 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.309049 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.309226 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.309312 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-d5q8l"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.340073 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k"]
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.414995 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1145e85a-d436-40c8-baef-ceb53625e06b-apiservice-cert\") pod \"metallb-operator-controller-manager-74fbb6df4-hrt7k\" (UID: \"1145e85a-d436-40c8-baef-ceb53625e06b\") " pod="metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.415227 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1145e85a-d436-40c8-baef-ceb53625e06b-webhook-cert\") pod \"metallb-operator-controller-manager-74fbb6df4-hrt7k\" (UID: \"1145e85a-d436-40c8-baef-ceb53625e06b\") " pod="metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.415287 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjm4b\" (UniqueName: \"kubernetes.io/projected/1145e85a-d436-40c8-baef-ceb53625e06b-kube-api-access-cjm4b\") pod \"metallb-operator-controller-manager-74fbb6df4-hrt7k\" (UID: \"1145e85a-d436-40c8-baef-ceb53625e06b\") " pod="metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.517973 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1145e85a-d436-40c8-baef-ceb53625e06b-apiservice-cert\") pod \"metallb-operator-controller-manager-74fbb6df4-hrt7k\" (UID: \"1145e85a-d436-40c8-baef-ceb53625e06b\") " pod="metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.518081 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1145e85a-d436-40c8-baef-ceb53625e06b-webhook-cert\") pod \"metallb-operator-controller-manager-74fbb6df4-hrt7k\" (UID: \"1145e85a-d436-40c8-baef-ceb53625e06b\") " pod="metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.518122 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjm4b\" (UniqueName: \"kubernetes.io/projected/1145e85a-d436-40c8-baef-ceb53625e06b-kube-api-access-cjm4b\") pod \"metallb-operator-controller-manager-74fbb6df4-hrt7k\" (UID: \"1145e85a-d436-40c8-baef-ceb53625e06b\") " pod="metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.525909 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1145e85a-d436-40c8-baef-ceb53625e06b-webhook-cert\") pod \"metallb-operator-controller-manager-74fbb6df4-hrt7k\" (UID: \"1145e85a-d436-40c8-baef-ceb53625e06b\") " pod="metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.527005 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1145e85a-d436-40c8-baef-ceb53625e06b-apiservice-cert\") pod \"metallb-operator-controller-manager-74fbb6df4-hrt7k\" (UID: \"1145e85a-d436-40c8-baef-ceb53625e06b\") " pod="metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.550158 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjm4b\" (UniqueName: \"kubernetes.io/projected/1145e85a-d436-40c8-baef-ceb53625e06b-kube-api-access-cjm4b\") pod \"metallb-operator-controller-manager-74fbb6df4-hrt7k\" (UID: \"1145e85a-d436-40c8-baef-ceb53625e06b\") " pod="metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.637708 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.793481 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-55459579-xtkmd"]
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.795287 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-55459579-xtkmd"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.798494 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.798928 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-tkc7k"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.800033 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.990087 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/328e9260-46e9-41a9-a42c-891fe870a5d1-webhook-cert\") pod \"metallb-operator-webhook-server-55459579-xtkmd\" (UID: \"328e9260-46e9-41a9-a42c-891fe870a5d1\") " pod="metallb-system/metallb-operator-webhook-server-55459579-xtkmd"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.990199 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/328e9260-46e9-41a9-a42c-891fe870a5d1-apiservice-cert\") pod \"metallb-operator-webhook-server-55459579-xtkmd\" (UID: \"328e9260-46e9-41a9-a42c-891fe870a5d1\") " pod="metallb-system/metallb-operator-webhook-server-55459579-xtkmd"
Jan 31 09:18:03 crc kubenswrapper[4830]: I0131 09:18:03.990317 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp28k\" (UniqueName: \"kubernetes.io/projected/328e9260-46e9-41a9-a42c-891fe870a5d1-kube-api-access-kp28k\") pod \"metallb-operator-webhook-server-55459579-xtkmd\" (UID: \"328e9260-46e9-41a9-a42c-891fe870a5d1\") " pod="metallb-system/metallb-operator-webhook-server-55459579-xtkmd"
Jan 31 09:18:04 crc kubenswrapper[4830]: I0131 09:18:04.019155 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-55459579-xtkmd"]
Jan 31 09:18:04 crc kubenswrapper[4830]: I0131 09:18:04.093015 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/328e9260-46e9-41a9-a42c-891fe870a5d1-apiservice-cert\") pod \"metallb-operator-webhook-server-55459579-xtkmd\" (UID: \"328e9260-46e9-41a9-a42c-891fe870a5d1\") " pod="metallb-system/metallb-operator-webhook-server-55459579-xtkmd"
Jan 31 09:18:04 crc kubenswrapper[4830]: I0131 09:18:04.093183 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp28k\" (UniqueName: \"kubernetes.io/projected/328e9260-46e9-41a9-a42c-891fe870a5d1-kube-api-access-kp28k\") pod \"metallb-operator-webhook-server-55459579-xtkmd\" (UID: \"328e9260-46e9-41a9-a42c-891fe870a5d1\") " pod="metallb-system/metallb-operator-webhook-server-55459579-xtkmd"
Jan 31 09:18:04 crc kubenswrapper[4830]: I0131 09:18:04.093888 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/328e9260-46e9-41a9-a42c-891fe870a5d1-webhook-cert\") pod \"metallb-operator-webhook-server-55459579-xtkmd\" (UID: \"328e9260-46e9-41a9-a42c-891fe870a5d1\") " pod="metallb-system/metallb-operator-webhook-server-55459579-xtkmd"
Jan 31 09:18:04 crc kubenswrapper[4830]: I0131 09:18:04.103615 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/328e9260-46e9-41a9-a42c-891fe870a5d1-webhook-cert\") pod \"metallb-operator-webhook-server-55459579-xtkmd\" (UID: \"328e9260-46e9-41a9-a42c-891fe870a5d1\") " pod="metallb-system/metallb-operator-webhook-server-55459579-xtkmd"
Jan 31 09:18:04 crc kubenswrapper[4830]: I0131 09:18:04.103672 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/328e9260-46e9-41a9-a42c-891fe870a5d1-apiservice-cert\") pod \"metallb-operator-webhook-server-55459579-xtkmd\" (UID: \"328e9260-46e9-41a9-a42c-891fe870a5d1\") " pod="metallb-system/metallb-operator-webhook-server-55459579-xtkmd"
Jan 31 09:18:04 crc kubenswrapper[4830]: I0131 09:18:04.118231 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp28k\" (UniqueName: \"kubernetes.io/projected/328e9260-46e9-41a9-a42c-891fe870a5d1-kube-api-access-kp28k\") pod \"metallb-operator-webhook-server-55459579-xtkmd\" (UID: \"328e9260-46e9-41a9-a42c-891fe870a5d1\") " pod="metallb-system/metallb-operator-webhook-server-55459579-xtkmd"
Jan 31 09:18:04 crc kubenswrapper[4830]: I0131 09:18:04.412701 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-55459579-xtkmd"
Jan 31 09:18:04 crc kubenswrapper[4830]: I0131 09:18:04.434565 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k"]
Jan 31 09:18:04 crc kubenswrapper[4830]: W0131 09:18:04.448204 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1145e85a_d436_40c8_baef_ceb53625e06b.slice/crio-1af110ceff962dd54a82d930837cf7c65c5c02db58b3e64a5b0d22e459fe2de5 WatchSource:0}: Error finding container 1af110ceff962dd54a82d930837cf7c65c5c02db58b3e64a5b0d22e459fe2de5: Status 404 returned error can't find the container with id 1af110ceff962dd54a82d930837cf7c65c5c02db58b3e64a5b0d22e459fe2de5
Jan 31 09:18:04 crc kubenswrapper[4830]: W0131 09:18:04.910666 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod328e9260_46e9_41a9_a42c_891fe870a5d1.slice/crio-3c6138a2b2268be176b3091b4b3a93b78490a63cfebc9afdcb395113e90b10c5 WatchSource:0}: Error finding container 3c6138a2b2268be176b3091b4b3a93b78490a63cfebc9afdcb395113e90b10c5: Status 404 returned error can't find the container with id 3c6138a2b2268be176b3091b4b3a93b78490a63cfebc9afdcb395113e90b10c5
Jan 31 09:18:04 crc kubenswrapper[4830]: I0131 09:18:04.912492 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-55459579-xtkmd"]
Jan 31 09:18:05 crc kubenswrapper[4830]: I0131 09:18:05.104920 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-55459579-xtkmd" event={"ID":"328e9260-46e9-41a9-a42c-891fe870a5d1","Type":"ContainerStarted","Data":"3c6138a2b2268be176b3091b4b3a93b78490a63cfebc9afdcb395113e90b10c5"}
Jan 31 09:18:05 crc kubenswrapper[4830]: I0131 09:18:05.107265 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k" event={"ID":"1145e85a-d436-40c8-baef-ceb53625e06b","Type":"ContainerStarted","Data":"1af110ceff962dd54a82d930837cf7c65c5c02db58b3e64a5b0d22e459fe2de5"}
Jan 31 09:18:09 crc kubenswrapper[4830]: I0131 09:18:09.150101 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k" event={"ID":"1145e85a-d436-40c8-baef-ceb53625e06b","Type":"ContainerStarted","Data":"74b56cb11209b9c16ad49800be665604e4083fba596f8098c7026e6fadcfb5c8"}
Jan 31 09:18:09 crc kubenswrapper[4830]: I0131 09:18:09.150932 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k"
Jan 31 09:18:09 crc kubenswrapper[4830]: I0131 09:18:09.187522 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k" podStartSLOduration=2.365103619 podStartE2EDuration="6.187495087s" podCreationTimestamp="2026-01-31 09:18:03 +0000 UTC" firstStartedPulling="2026-01-31 09:18:04.451692799 +0000 UTC m=+1028.945055241" lastFinishedPulling="2026-01-31 09:18:08.274084267 +0000 UTC m=+1032.767446709" observedRunningTime="2026-01-31 09:18:09.177818959 +0000 UTC m=+1033.671181401" watchObservedRunningTime="2026-01-31 09:18:09.187495087 +0000 UTC m=+1033.680857529"
Jan 31 09:18:12 crc kubenswrapper[4830]: I0131 09:18:12.175910 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-55459579-xtkmd" event={"ID":"328e9260-46e9-41a9-a42c-891fe870a5d1","Type":"ContainerStarted","Data":"dc00dac5cf4caa43406fa106a112cf8edadb863d3fe0fc2ff3f1fef11f326498"}
Jan 31 09:18:12 crc kubenswrapper[4830]: I0131 09:18:12.177263 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-55459579-xtkmd"
Jan 31 09:18:12 crc kubenswrapper[4830]: I0131 09:18:12.202353 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-55459579-xtkmd" podStartSLOduration=2.380630605 podStartE2EDuration="9.202331975s" podCreationTimestamp="2026-01-31 09:18:03 +0000 UTC" firstStartedPulling="2026-01-31 09:18:04.915868679 +0000 UTC m=+1029.409231131" lastFinishedPulling="2026-01-31 09:18:11.737570069 +0000 UTC m=+1036.230932501" observedRunningTime="2026-01-31 09:18:12.195806718 +0000 UTC m=+1036.689169160" watchObservedRunningTime="2026-01-31 09:18:12.202331975 +0000 UTC m=+1036.695694417"
Jan 31 09:18:24 crc kubenswrapper[4830]: I0131 09:18:24.457131 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-55459579-xtkmd"
Jan 31 09:18:43 crc kubenswrapper[4830]: I0131 09:18:43.642251 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.339804 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-4v2n6"]
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.346452 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-4v2n6"
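
In the two "Observed pod startup duration" records above, podStartSLOduration matches the end-to-end duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling), which is consistent with the kubelet excluding pull time from the SLO figure; the monotonic m=+ offsets make this easy to verify. A quick check with the controller-manager record's values (the subtraction rule is inferred from these records, not stated in the log):

package main

import "fmt"

func main() {
	// Monotonic offsets (m=+...) copied from the controller-manager record above.
	const (
		e2e       = 6.187495087    // podStartE2EDuration, seconds
		pullStart = 1028.945055241 // firstStartedPulling
		pullEnd   = 1032.767446709 // lastFinishedPulling
	)
	slo := e2e - (pullEnd - pullStart)
	fmt.Printf("podStartSLOduration = %.9fs\n", slo) // prints 2.365103619s, as logged
}

The webhook-server record satisfies the same relation: 9.202331975 - (1036.230932501 - 1029.409231131) = 2.380630605.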
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.349299 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.349431 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.349596 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-98x4b"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.350323 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92"]
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.352673 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.352749 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.359765 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.365425 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.367810 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92"]
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.452922 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-x7g8x"]
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.472438 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-x7g8x"
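
Interleaved with the metallb rollout, the kubelet's liveness probe against machine-config-daemon fails with connection refused on 127.0.0.1:8798. The probe is a plain HTTP GET, so it can be reproduced from the node itself; only the URL below is taken from the record, the rest is illustrative:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Same request the kubelet's HTTP prober issues, per the record above.
	resp, err := http.Get("http://127.0.0.1:8798/health")
	if err != nil {
		fmt.Println("probe failed:", err) // e.g. "connect: connection refused"
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status)
}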
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.489865 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.490282 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.508981 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d0107b00-a78b-432b-afc6-a9ccc1b3bf5b-frr-sockets\") pod \"frr-k8s-4v2n6\" (UID: \"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b\") " pod="metallb-system/frr-k8s-4v2n6"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.509110 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cczp\" (UniqueName: \"kubernetes.io/projected/3951c2f7-8a23-4d78-9a26-1b89399bdb4e-kube-api-access-4cczp\") pod \"frr-k8s-webhook-server-7df86c4f6c-zwj92\" (UID: \"3951c2f7-8a23-4d78-9a26-1b89399bdb4e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.509174 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d0107b00-a78b-432b-afc6-a9ccc1b3bf5b-frr-startup\") pod \"frr-k8s-4v2n6\" (UID: \"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b\") " pod="metallb-system/frr-k8s-4v2n6"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.509249 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3951c2f7-8a23-4d78-9a26-1b89399bdb4e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-zwj92\" (UID: \"3951c2f7-8a23-4d78-9a26-1b89399bdb4e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.509434 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.510109 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-cmrzh"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.511677 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d0107b00-a78b-432b-afc6-a9ccc1b3bf5b-reloader\") pod \"frr-k8s-4v2n6\" (UID: \"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b\") " pod="metallb-system/frr-k8s-4v2n6"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.511909 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d0107b00-a78b-432b-afc6-a9ccc1b3bf5b-metrics\") pod \"frr-k8s-4v2n6\" (UID: \"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b\") " pod="metallb-system/frr-k8s-4v2n6"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.512086 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d0107b00-a78b-432b-afc6-a9ccc1b3bf5b-frr-conf\") pod \"frr-k8s-4v2n6\" (UID: \"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b\") " pod="metallb-system/frr-k8s-4v2n6"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.512171 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7lrs\" (UniqueName: \"kubernetes.io/projected/d0107b00-a78b-432b-afc6-a9ccc1b3bf5b-kube-api-access-z7lrs\") pod \"frr-k8s-4v2n6\" (UID: \"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b\") " pod="metallb-system/frr-k8s-4v2n6"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.512315 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0107b00-a78b-432b-afc6-a9ccc1b3bf5b-metrics-certs\") pod \"frr-k8s-4v2n6\" (UID: \"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b\") " pod="metallb-system/frr-k8s-4v2n6"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.550789 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-lhbbn"]
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.552527 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-lhbbn"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.556174 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.571875 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-lhbbn"]
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.614761 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3951c2f7-8a23-4d78-9a26-1b89399bdb4e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-zwj92\" (UID: \"3951c2f7-8a23-4d78-9a26-1b89399bdb4e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.614866 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2683cf74-2506-4496-b132-4c274291727b-cert\") pod \"controller-6968d8fdc4-lhbbn\" (UID: \"2683cf74-2506-4496-b132-4c274291727b\") " pod="metallb-system/controller-6968d8fdc4-lhbbn"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.614949 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d0107b00-a78b-432b-afc6-a9ccc1b3bf5b-reloader\") pod \"frr-k8s-4v2n6\" (UID: \"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b\") " pod="metallb-system/frr-k8s-4v2n6"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.614985 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d0107b00-a78b-432b-afc6-a9ccc1b3bf5b-metrics\") pod \"frr-k8s-4v2n6\" (UID: \"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b\") " pod="metallb-system/frr-k8s-4v2n6"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.615013 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d0107b00-a78b-432b-afc6-a9ccc1b3bf5b-frr-conf\") pod \"frr-k8s-4v2n6\" (UID: \"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b\") " pod="metallb-system/frr-k8s-4v2n6"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.615030 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7lrs\" (UniqueName: \"kubernetes.io/projected/d0107b00-a78b-432b-afc6-a9ccc1b3bf5b-kube-api-access-z7lrs\") pod \"frr-k8s-4v2n6\" (UID: \"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b\") " pod="metallb-system/frr-k8s-4v2n6"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.615054 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0107b00-a78b-432b-afc6-a9ccc1b3bf5b-metrics-certs\") pod \"frr-k8s-4v2n6\" (UID: \"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b\") " pod="metallb-system/frr-k8s-4v2n6"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.615078 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mld9x\" (UniqueName: \"kubernetes.io/projected/2683cf74-2506-4496-b132-4c274291727b-kube-api-access-mld9x\") pod \"controller-6968d8fdc4-lhbbn\" (UID: \"2683cf74-2506-4496-b132-4c274291727b\") " pod="metallb-system/controller-6968d8fdc4-lhbbn"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.615106 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d713893-e8db-40ba-872c-e9d1650a56d0-metrics-certs\") pod \"speaker-x7g8x\" (UID: \"1d713893-e8db-40ba-872c-e9d1650a56d0\") " pod="metallb-system/speaker-x7g8x"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.615126 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2683cf74-2506-4496-b132-4c274291727b-metrics-certs\") pod \"controller-6968d8fdc4-lhbbn\" (UID: \"2683cf74-2506-4496-b132-4c274291727b\") " pod="metallb-system/controller-6968d8fdc4-lhbbn"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.615158 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1d713893-e8db-40ba-872c-e9d1650a56d0-memberlist\") pod \"speaker-x7g8x\" (UID: \"1d713893-e8db-40ba-872c-e9d1650a56d0\") " pod="metallb-system/speaker-x7g8x"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.615184 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d0107b00-a78b-432b-afc6-a9ccc1b3bf5b-frr-sockets\") pod \"frr-k8s-4v2n6\" (UID: \"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b\") " pod="metallb-system/frr-k8s-4v2n6"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.615202 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1d713893-e8db-40ba-872c-e9d1650a56d0-metallb-excludel2\") pod \"speaker-x7g8x\" (UID: \"1d713893-e8db-40ba-872c-e9d1650a56d0\") " pod="metallb-system/speaker-x7g8x"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.615224 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f982\" (UniqueName: \"kubernetes.io/projected/1d713893-e8db-40ba-872c-e9d1650a56d0-kube-api-access-9f982\") pod \"speaker-x7g8x\" (UID: \"1d713893-e8db-40ba-872c-e9d1650a56d0\") " pod="metallb-system/speaker-x7g8x"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.615253 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cczp\" (UniqueName: \"kubernetes.io/projected/3951c2f7-8a23-4d78-9a26-1b89399bdb4e-kube-api-access-4cczp\") pod \"frr-k8s-webhook-server-7df86c4f6c-zwj92\" (UID: \"3951c2f7-8a23-4d78-9a26-1b89399bdb4e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.615292 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d0107b00-a78b-432b-afc6-a9ccc1b3bf5b-frr-startup\") pod \"frr-k8s-4v2n6\" (UID: \"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b\") " pod="metallb-system/frr-k8s-4v2n6"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.616386 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d0107b00-a78b-432b-afc6-a9ccc1b3bf5b-frr-startup\") pod \"frr-k8s-4v2n6\" (UID: \"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b\") " pod="metallb-system/frr-k8s-4v2n6"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.617345 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d0107b00-a78b-432b-afc6-a9ccc1b3bf5b-reloader\") pod \"frr-k8s-4v2n6\" (UID: \"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b\") " pod="metallb-system/frr-k8s-4v2n6"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.617632 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d0107b00-a78b-432b-afc6-a9ccc1b3bf5b-metrics\") pod \"frr-k8s-4v2n6\" (UID: \"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b\") " pod="metallb-system/frr-k8s-4v2n6"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.617905 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d0107b00-a78b-432b-afc6-a9ccc1b3bf5b-frr-conf\") pod \"frr-k8s-4v2n6\" (UID: \"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b\") " pod="metallb-system/frr-k8s-4v2n6"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.620227 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d0107b00-a78b-432b-afc6-a9ccc1b3bf5b-frr-sockets\") pod \"frr-k8s-4v2n6\" (UID: \"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b\") " pod="metallb-system/frr-k8s-4v2n6"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.626425 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3951c2f7-8a23-4d78-9a26-1b89399bdb4e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-zwj92\" (UID: \"3951c2f7-8a23-4d78-9a26-1b89399bdb4e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.627345 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0107b00-a78b-432b-afc6-a9ccc1b3bf5b-metrics-certs\") pod \"frr-k8s-4v2n6\" (UID: \"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b\") " pod="metallb-system/frr-k8s-4v2n6"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.640602 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7lrs\" (UniqueName: \"kubernetes.io/projected/d0107b00-a78b-432b-afc6-a9ccc1b3bf5b-kube-api-access-z7lrs\") pod \"frr-k8s-4v2n6\" (UID: \"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b\") " pod="metallb-system/frr-k8s-4v2n6"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.641503 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cczp\" (UniqueName: \"kubernetes.io/projected/3951c2f7-8a23-4d78-9a26-1b89399bdb4e-kube-api-access-4cczp\") pod \"frr-k8s-webhook-server-7df86c4f6c-zwj92\" (UID: \"3951c2f7-8a23-4d78-9a26-1b89399bdb4e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.670396 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-4v2n6"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.685359 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.716858 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mld9x\" (UniqueName: \"kubernetes.io/projected/2683cf74-2506-4496-b132-4c274291727b-kube-api-access-mld9x\") pod \"controller-6968d8fdc4-lhbbn\" (UID: \"2683cf74-2506-4496-b132-4c274291727b\") " pod="metallb-system/controller-6968d8fdc4-lhbbn"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.716917 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d713893-e8db-40ba-872c-e9d1650a56d0-metrics-certs\") pod \"speaker-x7g8x\" (UID: \"1d713893-e8db-40ba-872c-e9d1650a56d0\") " pod="metallb-system/speaker-x7g8x"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.716947 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2683cf74-2506-4496-b132-4c274291727b-metrics-certs\") pod \"controller-6968d8fdc4-lhbbn\" (UID: \"2683cf74-2506-4496-b132-4c274291727b\") " pod="metallb-system/controller-6968d8fdc4-lhbbn"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.716981 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1d713893-e8db-40ba-872c-e9d1650a56d0-memberlist\") pod \"speaker-x7g8x\" (UID: \"1d713893-e8db-40ba-872c-e9d1650a56d0\") " pod="metallb-system/speaker-x7g8x"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.717004 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1d713893-e8db-40ba-872c-e9d1650a56d0-metallb-excludel2\") pod \"speaker-x7g8x\" (UID: \"1d713893-e8db-40ba-872c-e9d1650a56d0\") " pod="metallb-system/speaker-x7g8x"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.717027 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f982\" (UniqueName: \"kubernetes.io/projected/1d713893-e8db-40ba-872c-e9d1650a56d0-kube-api-access-9f982\") pod \"speaker-x7g8x\" (UID: \"1d713893-e8db-40ba-872c-e9d1650a56d0\") " pod="metallb-system/speaker-x7g8x"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.717089 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2683cf74-2506-4496-b132-4c274291727b-cert\") pod \"controller-6968d8fdc4-lhbbn\" (UID: \"2683cf74-2506-4496-b132-4c274291727b\") " pod="metallb-system/controller-6968d8fdc4-lhbbn"
Jan 31 09:18:44 crc kubenswrapper[4830]: E0131 09:18:44.717296 4830 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 31 09:18:44 crc kubenswrapper[4830]: E0131 09:18:44.717353 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d713893-e8db-40ba-872c-e9d1650a56d0-memberlist podName:1d713893-e8db-40ba-872c-e9d1650a56d0 nodeName:}" failed. No retries permitted until 2026-01-31 09:18:45.217333156 +0000 UTC m=+1069.710695598 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/1d713893-e8db-40ba-872c-e9d1650a56d0-memberlist") pod "speaker-x7g8x" (UID: "1d713893-e8db-40ba-872c-e9d1650a56d0") : secret "metallb-memberlist" not found
Jan 31 09:18:44 crc kubenswrapper[4830]: E0131 09:18:44.717403 4830 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found
Jan 31 09:18:44 crc kubenswrapper[4830]: E0131 09:18:44.717425 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d713893-e8db-40ba-872c-e9d1650a56d0-metrics-certs podName:1d713893-e8db-40ba-872c-e9d1650a56d0 nodeName:}" failed. No retries permitted until 2026-01-31 09:18:45.217418658 +0000 UTC m=+1069.710781100 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d713893-e8db-40ba-872c-e9d1650a56d0-metrics-certs") pod "speaker-x7g8x" (UID: "1d713893-e8db-40ba-872c-e9d1650a56d0") : secret "speaker-certs-secret" not found
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.718644 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1d713893-e8db-40ba-872c-e9d1650a56d0-metallb-excludel2\") pod \"speaker-x7g8x\" (UID: \"1d713893-e8db-40ba-872c-e9d1650a56d0\") " pod="metallb-system/speaker-x7g8x"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.723703 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2683cf74-2506-4496-b132-4c274291727b-metrics-certs\") pod \"controller-6968d8fdc4-lhbbn\" (UID: \"2683cf74-2506-4496-b132-4c274291727b\") " pod="metallb-system/controller-6968d8fdc4-lhbbn"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.724110 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2683cf74-2506-4496-b132-4c274291727b-cert\") pod \"controller-6968d8fdc4-lhbbn\" (UID: \"2683cf74-2506-4496-b132-4c274291727b\") " pod="metallb-system/controller-6968d8fdc4-lhbbn"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.744168 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f982\" (UniqueName: \"kubernetes.io/projected/1d713893-e8db-40ba-872c-e9d1650a56d0-kube-api-access-9f982\") pod \"speaker-x7g8x\" (UID: \"1d713893-e8db-40ba-872c-e9d1650a56d0\") " pod="metallb-system/speaker-x7g8x"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.744325 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mld9x\" (UniqueName: \"kubernetes.io/projected/2683cf74-2506-4496-b132-4c274291727b-kube-api-access-mld9x\") pod \"controller-6968d8fdc4-lhbbn\" (UID: \"2683cf74-2506-4496-b132-4c274291727b\") " pod="metallb-system/controller-6968d8fdc4-lhbbn"
Jan 31 09:18:44 crc kubenswrapper[4830]: I0131 09:18:44.878072 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-lhbbn"
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-lhbbn" Jan 31 09:18:45 crc kubenswrapper[4830]: I0131 09:18:45.166784 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92"] Jan 31 09:18:45 crc kubenswrapper[4830]: I0131 09:18:45.227018 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d713893-e8db-40ba-872c-e9d1650a56d0-metrics-certs\") pod \"speaker-x7g8x\" (UID: \"1d713893-e8db-40ba-872c-e9d1650a56d0\") " pod="metallb-system/speaker-x7g8x" Jan 31 09:18:45 crc kubenswrapper[4830]: I0131 09:18:45.227508 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1d713893-e8db-40ba-872c-e9d1650a56d0-memberlist\") pod \"speaker-x7g8x\" (UID: \"1d713893-e8db-40ba-872c-e9d1650a56d0\") " pod="metallb-system/speaker-x7g8x" Jan 31 09:18:45 crc kubenswrapper[4830]: E0131 09:18:45.227664 4830 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 31 09:18:45 crc kubenswrapper[4830]: E0131 09:18:45.227715 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d713893-e8db-40ba-872c-e9d1650a56d0-memberlist podName:1d713893-e8db-40ba-872c-e9d1650a56d0 nodeName:}" failed. No retries permitted until 2026-01-31 09:18:46.227701742 +0000 UTC m=+1070.721064184 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/1d713893-e8db-40ba-872c-e9d1650a56d0-memberlist") pod "speaker-x7g8x" (UID: "1d713893-e8db-40ba-872c-e9d1650a56d0") : secret "metallb-memberlist" not found Jan 31 09:18:45 crc kubenswrapper[4830]: I0131 09:18:45.237326 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d713893-e8db-40ba-872c-e9d1650a56d0-metrics-certs\") pod \"speaker-x7g8x\" (UID: \"1d713893-e8db-40ba-872c-e9d1650a56d0\") " pod="metallb-system/speaker-x7g8x" Jan 31 09:18:45 crc kubenswrapper[4830]: I0131 09:18:45.368037 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-lhbbn"] Jan 31 09:18:45 crc kubenswrapper[4830]: W0131 09:18:45.371295 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2683cf74_2506_4496_b132_4c274291727b.slice/crio-d1a23555464155b1487b4225cf7c33458ff9c4a200a6d4604677382ee1f1bc3e WatchSource:0}: Error finding container d1a23555464155b1487b4225cf7c33458ff9c4a200a6d4604677382ee1f1bc3e: Status 404 returned error can't find the container with id d1a23555464155b1487b4225cf7c33458ff9c4a200a6d4604677382ee1f1bc3e Jan 31 09:18:45 crc kubenswrapper[4830]: I0131 09:18:45.509287 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92" event={"ID":"3951c2f7-8a23-4d78-9a26-1b89399bdb4e","Type":"ContainerStarted","Data":"15a3b95f5afa3c8d01ec57b35327c9016001664245f872dc9fc25834d8027293"} Jan 31 09:18:45 crc kubenswrapper[4830]: I0131 09:18:45.511105 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4v2n6" event={"ID":"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b","Type":"ContainerStarted","Data":"5d69b2481fd53a14be482667f4039f38e11023436e0381e275368918f51f74d3"} Jan 31 09:18:45 crc kubenswrapper[4830]: I0131 09:18:45.514419 4830 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-lhbbn" event={"ID":"2683cf74-2506-4496-b132-4c274291727b","Type":"ContainerStarted","Data":"d1a23555464155b1487b4225cf7c33458ff9c4a200a6d4604677382ee1f1bc3e"} Jan 31 09:18:46 crc kubenswrapper[4830]: I0131 09:18:46.247849 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1d713893-e8db-40ba-872c-e9d1650a56d0-memberlist\") pod \"speaker-x7g8x\" (UID: \"1d713893-e8db-40ba-872c-e9d1650a56d0\") " pod="metallb-system/speaker-x7g8x" Jan 31 09:18:46 crc kubenswrapper[4830]: I0131 09:18:46.260161 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1d713893-e8db-40ba-872c-e9d1650a56d0-memberlist\") pod \"speaker-x7g8x\" (UID: \"1d713893-e8db-40ba-872c-e9d1650a56d0\") " pod="metallb-system/speaker-x7g8x" Jan 31 09:18:46 crc kubenswrapper[4830]: I0131 09:18:46.305213 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-x7g8x" Jan 31 09:18:46 crc kubenswrapper[4830]: I0131 09:18:46.525959 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-x7g8x" event={"ID":"1d713893-e8db-40ba-872c-e9d1650a56d0","Type":"ContainerStarted","Data":"ca0437c40e81361a1e9f86fe4c38aeb533577040b4b6483e62fa9d10c15f6b98"} Jan 31 09:18:46 crc kubenswrapper[4830]: I0131 09:18:46.529750 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-lhbbn" event={"ID":"2683cf74-2506-4496-b132-4c274291727b","Type":"ContainerStarted","Data":"b12c647ff8824237e0d875fc2329dc2d04490345d92cab78de066f2201e5fec0"} Jan 31 09:18:46 crc kubenswrapper[4830]: I0131 09:18:46.529787 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-lhbbn" event={"ID":"2683cf74-2506-4496-b132-4c274291727b","Type":"ContainerStarted","Data":"b35c67598916cefe17c3f0ec6514bc96b7c1600e7d97c6d13bbc4661b9177283"} Jan 31 09:18:46 crc kubenswrapper[4830]: I0131 09:18:46.555063 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-lhbbn" podStartSLOduration=2.55503088 podStartE2EDuration="2.55503088s" podCreationTimestamp="2026-01-31 09:18:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:18:46.547090632 +0000 UTC m=+1071.040453074" watchObservedRunningTime="2026-01-31 09:18:46.55503088 +0000 UTC m=+1071.048393322" Jan 31 09:18:47 crc kubenswrapper[4830]: I0131 09:18:47.545681 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-x7g8x" event={"ID":"1d713893-e8db-40ba-872c-e9d1650a56d0","Type":"ContainerStarted","Data":"e26c1ed409ffbab7603c8ed24d78b585405848160d6c753efff26a2197d5e009"} Jan 31 09:18:47 crc kubenswrapper[4830]: I0131 09:18:47.546149 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-lhbbn" Jan 31 09:18:47 crc kubenswrapper[4830]: I0131 09:18:47.546178 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-x7g8x" Jan 31 09:18:47 crc kubenswrapper[4830]: I0131 09:18:47.546192 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-x7g8x" 
event={"ID":"1d713893-e8db-40ba-872c-e9d1650a56d0","Type":"ContainerStarted","Data":"d8ae027c1b15e1df367ce806f666e3b8850ae99e4b45a28fc07df2f9232d9bff"} Jan 31 09:18:47 crc kubenswrapper[4830]: I0131 09:18:47.610305 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-x7g8x" podStartSLOduration=3.610274274 podStartE2EDuration="3.610274274s" podCreationTimestamp="2026-01-31 09:18:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:18:47.600065051 +0000 UTC m=+1072.093427503" watchObservedRunningTime="2026-01-31 09:18:47.610274274 +0000 UTC m=+1072.103636716" Jan 31 09:18:54 crc kubenswrapper[4830]: I0131 09:18:54.623975 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92" event={"ID":"3951c2f7-8a23-4d78-9a26-1b89399bdb4e","Type":"ContainerStarted","Data":"8142163e3ce80c3464ac0822fda30bf877ce271f5e4ceef098795181d0f6e7eb"} Jan 31 09:18:54 crc kubenswrapper[4830]: I0131 09:18:54.624810 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92" Jan 31 09:18:54 crc kubenswrapper[4830]: I0131 09:18:54.626338 4830 generic.go:334] "Generic (PLEG): container finished" podID="d0107b00-a78b-432b-afc6-a9ccc1b3bf5b" containerID="8468317b77d2c84061d916b768bd5ccbdba0816503e395340531319f0bd91f27" exitCode=0 Jan 31 09:18:54 crc kubenswrapper[4830]: I0131 09:18:54.626380 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4v2n6" event={"ID":"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b","Type":"ContainerDied","Data":"8468317b77d2c84061d916b768bd5ccbdba0816503e395340531319f0bd91f27"} Jan 31 09:18:54 crc kubenswrapper[4830]: I0131 09:18:54.652325 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92" podStartSLOduration=1.85858725 podStartE2EDuration="10.652295771s" podCreationTimestamp="2026-01-31 09:18:44 +0000 UTC" firstStartedPulling="2026-01-31 09:18:45.196066704 +0000 UTC m=+1069.689429146" lastFinishedPulling="2026-01-31 09:18:53.989775225 +0000 UTC m=+1078.483137667" observedRunningTime="2026-01-31 09:18:54.648244935 +0000 UTC m=+1079.141607377" watchObservedRunningTime="2026-01-31 09:18:54.652295771 +0000 UTC m=+1079.145658223" Jan 31 09:18:55 crc kubenswrapper[4830]: I0131 09:18:55.639582 4830 generic.go:334] "Generic (PLEG): container finished" podID="d0107b00-a78b-432b-afc6-a9ccc1b3bf5b" containerID="e2bb5777353c615224fa37e06858eaf2abeac447596be029c48e3c8b0a8618e1" exitCode=0 Jan 31 09:18:55 crc kubenswrapper[4830]: I0131 09:18:55.639663 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4v2n6" event={"ID":"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b","Type":"ContainerDied","Data":"e2bb5777353c615224fa37e06858eaf2abeac447596be029c48e3c8b0a8618e1"} Jan 31 09:18:56 crc kubenswrapper[4830]: I0131 09:18:56.308827 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-x7g8x" Jan 31 09:18:56 crc kubenswrapper[4830]: I0131 09:18:56.651033 4830 generic.go:334] "Generic (PLEG): container finished" podID="d0107b00-a78b-432b-afc6-a9ccc1b3bf5b" containerID="2bdd6623c9408508e0478a21d9cb6e8c7435b40dde3127d480be4b7d129ec61e" exitCode=0 Jan 31 09:18:56 crc kubenswrapper[4830]: I0131 09:18:56.651514 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-4v2n6" event={"ID":"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b","Type":"ContainerDied","Data":"2bdd6623c9408508e0478a21d9cb6e8c7435b40dde3127d480be4b7d129ec61e"} Jan 31 09:18:57 crc kubenswrapper[4830]: I0131 09:18:57.664902 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4v2n6" event={"ID":"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b","Type":"ContainerStarted","Data":"47278e1942b13207860770b90700cd857ce78a63010a6bfb864a0ec4f3cdc959"} Jan 31 09:18:57 crc kubenswrapper[4830]: I0131 09:18:57.665439 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4v2n6" event={"ID":"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b","Type":"ContainerStarted","Data":"48cc01afd187531d11a8e7950848cdfb1bbe3d5df848bd9f580f457ec1e94f6e"} Jan 31 09:18:57 crc kubenswrapper[4830]: I0131 09:18:57.665455 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4v2n6" event={"ID":"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b","Type":"ContainerStarted","Data":"727ecac22e63391e070b11aadabf324369bf6f5aa72356556f5e4f8598e8f60c"} Jan 31 09:18:58 crc kubenswrapper[4830]: I0131 09:18:58.684109 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4v2n6" event={"ID":"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b","Type":"ContainerStarted","Data":"9527a3efa350eaef88ff2ac49ce92aca89ad03ee9deccb53209354cb228ef43b"} Jan 31 09:18:58 crc kubenswrapper[4830]: I0131 09:18:58.684655 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4v2n6" event={"ID":"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b","Type":"ContainerStarted","Data":"11bfb00a350a7e36691d960f821ad4d9bc73e161dc9db0cb8f43492e293e1a85"} Jan 31 09:18:58 crc kubenswrapper[4830]: I0131 09:18:58.684673 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4v2n6" event={"ID":"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b","Type":"ContainerStarted","Data":"94daf95efff731cd5bddd75b2df274157157e7ba9548a6b933598dc83082ab27"} Jan 31 09:18:58 crc kubenswrapper[4830]: I0131 09:18:58.684862 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-4v2n6" Jan 31 09:18:58 crc kubenswrapper[4830]: I0131 09:18:58.715832 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-4v2n6" podStartSLOduration=5.663280558 podStartE2EDuration="14.715811943s" podCreationTimestamp="2026-01-31 09:18:44 +0000 UTC" firstStartedPulling="2026-01-31 09:18:44.904321245 +0000 UTC m=+1069.397683687" lastFinishedPulling="2026-01-31 09:18:53.95685263 +0000 UTC m=+1078.450215072" observedRunningTime="2026-01-31 09:18:58.715253717 +0000 UTC m=+1083.208616179" watchObservedRunningTime="2026-01-31 09:18:58.715811943 +0000 UTC m=+1083.209174385" Jan 31 09:18:59 crc kubenswrapper[4830]: I0131 09:18:59.164000 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-wsmpd"] Jan 31 09:18:59 crc kubenswrapper[4830]: I0131 09:18:59.166012 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-wsmpd" Jan 31 09:18:59 crc kubenswrapper[4830]: I0131 09:18:59.168054 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-gwxt9" Jan 31 09:18:59 crc kubenswrapper[4830]: I0131 09:18:59.172341 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 31 09:18:59 crc kubenswrapper[4830]: I0131 09:18:59.172517 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 31 09:18:59 crc kubenswrapper[4830]: I0131 09:18:59.184601 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wsmpd"] Jan 31 09:18:59 crc kubenswrapper[4830]: I0131 09:18:59.228656 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22zlt\" (UniqueName: \"kubernetes.io/projected/90fb9121-7350-4cee-af9d-81bc54ed9f86-kube-api-access-22zlt\") pod \"openstack-operator-index-wsmpd\" (UID: \"90fb9121-7350-4cee-af9d-81bc54ed9f86\") " pod="openstack-operators/openstack-operator-index-wsmpd" Jan 31 09:18:59 crc kubenswrapper[4830]: I0131 09:18:59.331387 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22zlt\" (UniqueName: \"kubernetes.io/projected/90fb9121-7350-4cee-af9d-81bc54ed9f86-kube-api-access-22zlt\") pod \"openstack-operator-index-wsmpd\" (UID: \"90fb9121-7350-4cee-af9d-81bc54ed9f86\") " pod="openstack-operators/openstack-operator-index-wsmpd" Jan 31 09:18:59 crc kubenswrapper[4830]: I0131 09:18:59.358814 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22zlt\" (UniqueName: \"kubernetes.io/projected/90fb9121-7350-4cee-af9d-81bc54ed9f86-kube-api-access-22zlt\") pod \"openstack-operator-index-wsmpd\" (UID: \"90fb9121-7350-4cee-af9d-81bc54ed9f86\") " pod="openstack-operators/openstack-operator-index-wsmpd" Jan 31 09:18:59 crc kubenswrapper[4830]: I0131 09:18:59.511332 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-wsmpd" Jan 31 09:18:59 crc kubenswrapper[4830]: I0131 09:18:59.670671 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-4v2n6" Jan 31 09:18:59 crc kubenswrapper[4830]: I0131 09:18:59.724469 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-4v2n6" Jan 31 09:18:59 crc kubenswrapper[4830]: W0131 09:18:59.988911 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90fb9121_7350_4cee_af9d_81bc54ed9f86.slice/crio-0609f3820a8d446323e0236e5ef754f6ccdeb75a04f4fddda100001272ba723f WatchSource:0}: Error finding container 0609f3820a8d446323e0236e5ef754f6ccdeb75a04f4fddda100001272ba723f: Status 404 returned error can't find the container with id 0609f3820a8d446323e0236e5ef754f6ccdeb75a04f4fddda100001272ba723f Jan 31 09:18:59 crc kubenswrapper[4830]: I0131 09:18:59.992602 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wsmpd"] Jan 31 09:19:00 crc kubenswrapper[4830]: I0131 09:19:00.709932 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wsmpd" event={"ID":"90fb9121-7350-4cee-af9d-81bc54ed9f86","Type":"ContainerStarted","Data":"0609f3820a8d446323e0236e5ef754f6ccdeb75a04f4fddda100001272ba723f"} Jan 31 09:19:02 crc kubenswrapper[4830]: I0131 09:19:02.330550 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-wsmpd"] Jan 31 09:19:02 crc kubenswrapper[4830]: I0131 09:19:02.952506 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-nc25d"] Jan 31 09:19:02 crc kubenswrapper[4830]: I0131 09:19:02.953864 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nc25d" Jan 31 09:19:02 crc kubenswrapper[4830]: I0131 09:19:02.961257 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nc25d"] Jan 31 09:19:03 crc kubenswrapper[4830]: I0131 09:19:03.013700 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzxtt\" (UniqueName: \"kubernetes.io/projected/b0b831b3-e535-4264-b46c-c93f7edd51d2-kube-api-access-zzxtt\") pod \"openstack-operator-index-nc25d\" (UID: \"b0b831b3-e535-4264-b46c-c93f7edd51d2\") " pod="openstack-operators/openstack-operator-index-nc25d" Jan 31 09:19:03 crc kubenswrapper[4830]: I0131 09:19:03.116988 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzxtt\" (UniqueName: \"kubernetes.io/projected/b0b831b3-e535-4264-b46c-c93f7edd51d2-kube-api-access-zzxtt\") pod \"openstack-operator-index-nc25d\" (UID: \"b0b831b3-e535-4264-b46c-c93f7edd51d2\") " pod="openstack-operators/openstack-operator-index-nc25d" Jan 31 09:19:03 crc kubenswrapper[4830]: I0131 09:19:03.139892 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzxtt\" (UniqueName: \"kubernetes.io/projected/b0b831b3-e535-4264-b46c-c93f7edd51d2-kube-api-access-zzxtt\") pod \"openstack-operator-index-nc25d\" (UID: \"b0b831b3-e535-4264-b46c-c93f7edd51d2\") " pod="openstack-operators/openstack-operator-index-nc25d" Jan 31 09:19:03 crc kubenswrapper[4830]: I0131 09:19:03.284542 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nc25d" Jan 31 09:19:03 crc kubenswrapper[4830]: I0131 09:19:03.742741 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wsmpd" event={"ID":"90fb9121-7350-4cee-af9d-81bc54ed9f86","Type":"ContainerStarted","Data":"3955afe6baae42c68b5710680649cb79a01ca5d6385b52a95a9a923bea5a06d7"} Jan 31 09:19:03 crc kubenswrapper[4830]: I0131 09:19:03.743104 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-wsmpd" podUID="90fb9121-7350-4cee-af9d-81bc54ed9f86" containerName="registry-server" containerID="cri-o://3955afe6baae42c68b5710680649cb79a01ca5d6385b52a95a9a923bea5a06d7" gracePeriod=2 Jan 31 09:19:03 crc kubenswrapper[4830]: I0131 09:19:03.763858 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nc25d"] Jan 31 09:19:04 crc kubenswrapper[4830]: I0131 09:19:04.181215 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-wsmpd" Jan 31 09:19:04 crc kubenswrapper[4830]: I0131 09:19:04.240326 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22zlt\" (UniqueName: \"kubernetes.io/projected/90fb9121-7350-4cee-af9d-81bc54ed9f86-kube-api-access-22zlt\") pod \"90fb9121-7350-4cee-af9d-81bc54ed9f86\" (UID: \"90fb9121-7350-4cee-af9d-81bc54ed9f86\") " Jan 31 09:19:04 crc kubenswrapper[4830]: I0131 09:19:04.246424 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90fb9121-7350-4cee-af9d-81bc54ed9f86-kube-api-access-22zlt" (OuterVolumeSpecName: "kube-api-access-22zlt") pod "90fb9121-7350-4cee-af9d-81bc54ed9f86" (UID: "90fb9121-7350-4cee-af9d-81bc54ed9f86"). 
InnerVolumeSpecName "kube-api-access-22zlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:19:04 crc kubenswrapper[4830]: I0131 09:19:04.346685 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22zlt\" (UniqueName: \"kubernetes.io/projected/90fb9121-7350-4cee-af9d-81bc54ed9f86-kube-api-access-22zlt\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:04 crc kubenswrapper[4830]: I0131 09:19:04.690095 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92" Jan 31 09:19:04 crc kubenswrapper[4830]: I0131 09:19:04.756509 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nc25d" event={"ID":"b0b831b3-e535-4264-b46c-c93f7edd51d2","Type":"ContainerStarted","Data":"384a0831544a2cb790ebf79501804b539d1b77cdf911870336931f1b831b232d"} Jan 31 09:19:04 crc kubenswrapper[4830]: I0131 09:19:04.756602 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nc25d" event={"ID":"b0b831b3-e535-4264-b46c-c93f7edd51d2","Type":"ContainerStarted","Data":"7ff8fdea56ee65d6554f84b2266e05ca30b83412d15f0679b45521564b2c75b3"} Jan 31 09:19:04 crc kubenswrapper[4830]: I0131 09:19:04.758412 4830 generic.go:334] "Generic (PLEG): container finished" podID="90fb9121-7350-4cee-af9d-81bc54ed9f86" containerID="3955afe6baae42c68b5710680649cb79a01ca5d6385b52a95a9a923bea5a06d7" exitCode=0 Jan 31 09:19:04 crc kubenswrapper[4830]: I0131 09:19:04.758488 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-wsmpd" Jan 31 09:19:04 crc kubenswrapper[4830]: I0131 09:19:04.758488 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wsmpd" event={"ID":"90fb9121-7350-4cee-af9d-81bc54ed9f86","Type":"ContainerDied","Data":"3955afe6baae42c68b5710680649cb79a01ca5d6385b52a95a9a923bea5a06d7"} Jan 31 09:19:04 crc kubenswrapper[4830]: I0131 09:19:04.758639 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wsmpd" event={"ID":"90fb9121-7350-4cee-af9d-81bc54ed9f86","Type":"ContainerDied","Data":"0609f3820a8d446323e0236e5ef754f6ccdeb75a04f4fddda100001272ba723f"} Jan 31 09:19:04 crc kubenswrapper[4830]: I0131 09:19:04.758689 4830 scope.go:117] "RemoveContainer" containerID="3955afe6baae42c68b5710680649cb79a01ca5d6385b52a95a9a923bea5a06d7" Jan 31 09:19:04 crc kubenswrapper[4830]: I0131 09:19:04.783960 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-nc25d" podStartSLOduration=2.7417926230000003 podStartE2EDuration="2.783926413s" podCreationTimestamp="2026-01-31 09:19:02 +0000 UTC" firstStartedPulling="2026-01-31 09:19:03.778016826 +0000 UTC m=+1088.271379268" lastFinishedPulling="2026-01-31 09:19:03.820150626 +0000 UTC m=+1088.313513058" observedRunningTime="2026-01-31 09:19:04.773792592 +0000 UTC m=+1089.267155044" watchObservedRunningTime="2026-01-31 09:19:04.783926413 +0000 UTC m=+1089.277288855" Jan 31 09:19:04 crc kubenswrapper[4830]: I0131 09:19:04.797277 4830 scope.go:117] "RemoveContainer" containerID="3955afe6baae42c68b5710680649cb79a01ca5d6385b52a95a9a923bea5a06d7" Jan 31 09:19:04 crc kubenswrapper[4830]: E0131 09:19:04.798097 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3955afe6baae42c68b5710680649cb79a01ca5d6385b52a95a9a923bea5a06d7\": container with ID starting with 3955afe6baae42c68b5710680649cb79a01ca5d6385b52a95a9a923bea5a06d7 not found: ID does not exist" containerID="3955afe6baae42c68b5710680649cb79a01ca5d6385b52a95a9a923bea5a06d7" Jan 31 09:19:04 crc kubenswrapper[4830]: I0131 09:19:04.798162 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3955afe6baae42c68b5710680649cb79a01ca5d6385b52a95a9a923bea5a06d7"} err="failed to get container status \"3955afe6baae42c68b5710680649cb79a01ca5d6385b52a95a9a923bea5a06d7\": rpc error: code = NotFound desc = could not find container \"3955afe6baae42c68b5710680649cb79a01ca5d6385b52a95a9a923bea5a06d7\": container with ID starting with 3955afe6baae42c68b5710680649cb79a01ca5d6385b52a95a9a923bea5a06d7 not found: ID does not exist" Jan 31 09:19:04 crc kubenswrapper[4830]: I0131 09:19:04.806185 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-wsmpd"] Jan 31 09:19:04 crc kubenswrapper[4830]: I0131 09:19:04.817594 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-wsmpd"] Jan 31 09:19:04 crc kubenswrapper[4830]: I0131 09:19:04.882952 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-lhbbn" Jan 31 09:19:06 crc kubenswrapper[4830]: I0131 09:19:06.270183 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90fb9121-7350-4cee-af9d-81bc54ed9f86" path="/var/lib/kubelet/pods/90fb9121-7350-4cee-af9d-81bc54ed9f86/volumes" Jan 31 09:19:13 crc kubenswrapper[4830]: I0131 09:19:13.285926 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-nc25d" Jan 31 09:19:13 crc kubenswrapper[4830]: I0131 09:19:13.287069 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-nc25d" Jan 31 09:19:13 crc kubenswrapper[4830]: I0131 09:19:13.316478 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-nc25d" Jan 31 09:19:13 crc kubenswrapper[4830]: I0131 09:19:13.878425 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-nc25d" Jan 31 09:19:14 crc kubenswrapper[4830]: I0131 09:19:14.353775 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:19:14 crc kubenswrapper[4830]: I0131 09:19:14.353844 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:19:14 crc kubenswrapper[4830]: I0131 09:19:14.677508 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-4v2n6" Jan 31 09:19:14 crc kubenswrapper[4830]: I0131 09:19:14.786064 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl"] 
Jan 31 09:19:14 crc kubenswrapper[4830]: E0131 09:19:14.786464 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90fb9121-7350-4cee-af9d-81bc54ed9f86" containerName="registry-server" Jan 31 09:19:14 crc kubenswrapper[4830]: I0131 09:19:14.786483 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="90fb9121-7350-4cee-af9d-81bc54ed9f86" containerName="registry-server" Jan 31 09:19:14 crc kubenswrapper[4830]: I0131 09:19:14.786784 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="90fb9121-7350-4cee-af9d-81bc54ed9f86" containerName="registry-server" Jan 31 09:19:14 crc kubenswrapper[4830]: I0131 09:19:14.788271 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl" Jan 31 09:19:14 crc kubenswrapper[4830]: I0131 09:19:14.794312 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-bswm7" Jan 31 09:19:14 crc kubenswrapper[4830]: I0131 09:19:14.795943 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl"] Jan 31 09:19:14 crc kubenswrapper[4830]: I0131 09:19:14.888257 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hglk5\" (UniqueName: \"kubernetes.io/projected/c3a69b5a-a2ea-4f45-aed3-524702c726d9-kube-api-access-hglk5\") pod \"9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl\" (UID: \"c3a69b5a-a2ea-4f45-aed3-524702c726d9\") " pod="openstack-operators/9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl" Jan 31 09:19:14 crc kubenswrapper[4830]: I0131 09:19:14.888426 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3a69b5a-a2ea-4f45-aed3-524702c726d9-util\") pod \"9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl\" (UID: \"c3a69b5a-a2ea-4f45-aed3-524702c726d9\") " pod="openstack-operators/9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl" Jan 31 09:19:14 crc kubenswrapper[4830]: I0131 09:19:14.888642 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3a69b5a-a2ea-4f45-aed3-524702c726d9-bundle\") pod \"9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl\" (UID: \"c3a69b5a-a2ea-4f45-aed3-524702c726d9\") " pod="openstack-operators/9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl" Jan 31 09:19:14 crc kubenswrapper[4830]: I0131 09:19:14.990489 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3a69b5a-a2ea-4f45-aed3-524702c726d9-bundle\") pod \"9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl\" (UID: \"c3a69b5a-a2ea-4f45-aed3-524702c726d9\") " pod="openstack-operators/9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl" Jan 31 09:19:14 crc kubenswrapper[4830]: I0131 09:19:14.990601 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hglk5\" (UniqueName: \"kubernetes.io/projected/c3a69b5a-a2ea-4f45-aed3-524702c726d9-kube-api-access-hglk5\") pod \"9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl\" (UID: \"c3a69b5a-a2ea-4f45-aed3-524702c726d9\") " 
pod="openstack-operators/9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl" Jan 31 09:19:14 crc kubenswrapper[4830]: I0131 09:19:14.990685 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3a69b5a-a2ea-4f45-aed3-524702c726d9-util\") pod \"9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl\" (UID: \"c3a69b5a-a2ea-4f45-aed3-524702c726d9\") " pod="openstack-operators/9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl" Jan 31 09:19:14 crc kubenswrapper[4830]: I0131 09:19:14.991159 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3a69b5a-a2ea-4f45-aed3-524702c726d9-bundle\") pod \"9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl\" (UID: \"c3a69b5a-a2ea-4f45-aed3-524702c726d9\") " pod="openstack-operators/9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl" Jan 31 09:19:14 crc kubenswrapper[4830]: I0131 09:19:14.991352 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3a69b5a-a2ea-4f45-aed3-524702c726d9-util\") pod \"9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl\" (UID: \"c3a69b5a-a2ea-4f45-aed3-524702c726d9\") " pod="openstack-operators/9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl" Jan 31 09:19:15 crc kubenswrapper[4830]: I0131 09:19:15.011126 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hglk5\" (UniqueName: \"kubernetes.io/projected/c3a69b5a-a2ea-4f45-aed3-524702c726d9-kube-api-access-hglk5\") pod \"9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl\" (UID: \"c3a69b5a-a2ea-4f45-aed3-524702c726d9\") " pod="openstack-operators/9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl" Jan 31 09:19:15 crc kubenswrapper[4830]: I0131 09:19:15.117351 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl" Jan 31 09:19:15 crc kubenswrapper[4830]: I0131 09:19:15.579586 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl"] Jan 31 09:19:15 crc kubenswrapper[4830]: I0131 09:19:15.865228 4830 generic.go:334] "Generic (PLEG): container finished" podID="c3a69b5a-a2ea-4f45-aed3-524702c726d9" containerID="f8976947df254cd0e6edc964deeb6a53981a9b1d96dd3813b4117f483bfe89cd" exitCode=0 Jan 31 09:19:15 crc kubenswrapper[4830]: I0131 09:19:15.865297 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl" event={"ID":"c3a69b5a-a2ea-4f45-aed3-524702c726d9","Type":"ContainerDied","Data":"f8976947df254cd0e6edc964deeb6a53981a9b1d96dd3813b4117f483bfe89cd"} Jan 31 09:19:15 crc kubenswrapper[4830]: I0131 09:19:15.865337 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl" event={"ID":"c3a69b5a-a2ea-4f45-aed3-524702c726d9","Type":"ContainerStarted","Data":"4d1e139af9e897875e231a024d2d3de120a8830a2801ae40c209a7c34399eeea"} Jan 31 09:19:16 crc kubenswrapper[4830]: I0131 09:19:16.883618 4830 generic.go:334] "Generic (PLEG): container finished" podID="c3a69b5a-a2ea-4f45-aed3-524702c726d9" containerID="e2ecd7867d8f03eda0b5d52bbebe9b480855ee968a744592da7a55476bdc86c2" exitCode=0 Jan 31 09:19:16 crc kubenswrapper[4830]: I0131 09:19:16.884272 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl" event={"ID":"c3a69b5a-a2ea-4f45-aed3-524702c726d9","Type":"ContainerDied","Data":"e2ecd7867d8f03eda0b5d52bbebe9b480855ee968a744592da7a55476bdc86c2"} Jan 31 09:19:17 crc kubenswrapper[4830]: I0131 09:19:17.900008 4830 generic.go:334] "Generic (PLEG): container finished" podID="c3a69b5a-a2ea-4f45-aed3-524702c726d9" containerID="4b16dcfeeaaa4632330f138b70f88388cc754423fcb789afb43388f7bf69d8e4" exitCode=0 Jan 31 09:19:17 crc kubenswrapper[4830]: I0131 09:19:17.900115 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl" event={"ID":"c3a69b5a-a2ea-4f45-aed3-524702c726d9","Type":"ContainerDied","Data":"4b16dcfeeaaa4632330f138b70f88388cc754423fcb789afb43388f7bf69d8e4"} Jan 31 09:19:19 crc kubenswrapper[4830]: I0131 09:19:19.290277 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl" Jan 31 09:19:19 crc kubenswrapper[4830]: I0131 09:19:19.472807 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3a69b5a-a2ea-4f45-aed3-524702c726d9-bundle\") pod \"c3a69b5a-a2ea-4f45-aed3-524702c726d9\" (UID: \"c3a69b5a-a2ea-4f45-aed3-524702c726d9\") " Jan 31 09:19:19 crc kubenswrapper[4830]: I0131 09:19:19.473804 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3a69b5a-a2ea-4f45-aed3-524702c726d9-util\") pod \"c3a69b5a-a2ea-4f45-aed3-524702c726d9\" (UID: \"c3a69b5a-a2ea-4f45-aed3-524702c726d9\") " Jan 31 09:19:19 crc kubenswrapper[4830]: I0131 09:19:19.473695 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3a69b5a-a2ea-4f45-aed3-524702c726d9-bundle" (OuterVolumeSpecName: "bundle") pod "c3a69b5a-a2ea-4f45-aed3-524702c726d9" (UID: "c3a69b5a-a2ea-4f45-aed3-524702c726d9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:19:19 crc kubenswrapper[4830]: I0131 09:19:19.473887 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hglk5\" (UniqueName: \"kubernetes.io/projected/c3a69b5a-a2ea-4f45-aed3-524702c726d9-kube-api-access-hglk5\") pod \"c3a69b5a-a2ea-4f45-aed3-524702c726d9\" (UID: \"c3a69b5a-a2ea-4f45-aed3-524702c726d9\") " Jan 31 09:19:19 crc kubenswrapper[4830]: I0131 09:19:19.475537 4830 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3a69b5a-a2ea-4f45-aed3-524702c726d9-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:19 crc kubenswrapper[4830]: I0131 09:19:19.482148 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3a69b5a-a2ea-4f45-aed3-524702c726d9-kube-api-access-hglk5" (OuterVolumeSpecName: "kube-api-access-hglk5") pod "c3a69b5a-a2ea-4f45-aed3-524702c726d9" (UID: "c3a69b5a-a2ea-4f45-aed3-524702c726d9"). InnerVolumeSpecName "kube-api-access-hglk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:19:19 crc kubenswrapper[4830]: I0131 09:19:19.487181 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3a69b5a-a2ea-4f45-aed3-524702c726d9-util" (OuterVolumeSpecName: "util") pod "c3a69b5a-a2ea-4f45-aed3-524702c726d9" (UID: "c3a69b5a-a2ea-4f45-aed3-524702c726d9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:19:19 crc kubenswrapper[4830]: I0131 09:19:19.576063 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hglk5\" (UniqueName: \"kubernetes.io/projected/c3a69b5a-a2ea-4f45-aed3-524702c726d9-kube-api-access-hglk5\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:19 crc kubenswrapper[4830]: I0131 09:19:19.576106 4830 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3a69b5a-a2ea-4f45-aed3-524702c726d9-util\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:19 crc kubenswrapper[4830]: I0131 09:19:19.923754 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl" event={"ID":"c3a69b5a-a2ea-4f45-aed3-524702c726d9","Type":"ContainerDied","Data":"4d1e139af9e897875e231a024d2d3de120a8830a2801ae40c209a7c34399eeea"} Jan 31 09:19:19 crc kubenswrapper[4830]: I0131 09:19:19.924237 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d1e139af9e897875e231a024d2d3de120a8830a2801ae40c209a7c34399eeea" Jan 31 09:19:19 crc kubenswrapper[4830]: I0131 09:19:19.923920 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl" Jan 31 09:19:21 crc kubenswrapper[4830]: I0131 09:19:21.674336 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-54dc59fd95-sv8r9"] Jan 31 09:19:21 crc kubenswrapper[4830]: E0131 09:19:21.674878 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a69b5a-a2ea-4f45-aed3-524702c726d9" containerName="extract" Jan 31 09:19:21 crc kubenswrapper[4830]: I0131 09:19:21.674901 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a69b5a-a2ea-4f45-aed3-524702c726d9" containerName="extract" Jan 31 09:19:21 crc kubenswrapper[4830]: E0131 09:19:21.674961 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a69b5a-a2ea-4f45-aed3-524702c726d9" containerName="pull" Jan 31 09:19:21 crc kubenswrapper[4830]: I0131 09:19:21.674971 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a69b5a-a2ea-4f45-aed3-524702c726d9" containerName="pull" Jan 31 09:19:21 crc kubenswrapper[4830]: E0131 09:19:21.674992 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a69b5a-a2ea-4f45-aed3-524702c726d9" containerName="util" Jan 31 09:19:21 crc kubenswrapper[4830]: I0131 09:19:21.675000 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a69b5a-a2ea-4f45-aed3-524702c726d9" containerName="util" Jan 31 09:19:21 crc kubenswrapper[4830]: I0131 09:19:21.675203 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3a69b5a-a2ea-4f45-aed3-524702c726d9" containerName="extract" Jan 31 09:19:21 crc kubenswrapper[4830]: I0131 09:19:21.676154 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-54dc59fd95-sv8r9" Jan 31 09:19:21 crc kubenswrapper[4830]: I0131 09:19:21.691397 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-47qqc" Jan 31 09:19:21 crc kubenswrapper[4830]: I0131 09:19:21.711479 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-54dc59fd95-sv8r9"] Jan 31 09:19:21 crc kubenswrapper[4830]: I0131 09:19:21.720555 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddx8l\" (UniqueName: \"kubernetes.io/projected/2a183ae3-dc4b-4f75-a9ca-4832bd5faf06-kube-api-access-ddx8l\") pod \"openstack-operator-controller-init-54dc59fd95-sv8r9\" (UID: \"2a183ae3-dc4b-4f75-a9ca-4832bd5faf06\") " pod="openstack-operators/openstack-operator-controller-init-54dc59fd95-sv8r9" Jan 31 09:19:21 crc kubenswrapper[4830]: I0131 09:19:21.821460 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddx8l\" (UniqueName: \"kubernetes.io/projected/2a183ae3-dc4b-4f75-a9ca-4832bd5faf06-kube-api-access-ddx8l\") pod \"openstack-operator-controller-init-54dc59fd95-sv8r9\" (UID: \"2a183ae3-dc4b-4f75-a9ca-4832bd5faf06\") " pod="openstack-operators/openstack-operator-controller-init-54dc59fd95-sv8r9" Jan 31 09:19:21 crc kubenswrapper[4830]: I0131 09:19:21.843076 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddx8l\" (UniqueName: \"kubernetes.io/projected/2a183ae3-dc4b-4f75-a9ca-4832bd5faf06-kube-api-access-ddx8l\") pod \"openstack-operator-controller-init-54dc59fd95-sv8r9\" (UID: \"2a183ae3-dc4b-4f75-a9ca-4832bd5faf06\") " pod="openstack-operators/openstack-operator-controller-init-54dc59fd95-sv8r9" Jan 31 09:19:22 crc kubenswrapper[4830]: I0131 09:19:22.007293 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-54dc59fd95-sv8r9" Jan 31 09:19:22 crc kubenswrapper[4830]: I0131 09:19:22.514598 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-54dc59fd95-sv8r9"] Jan 31 09:19:22 crc kubenswrapper[4830]: I0131 09:19:22.964457 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-54dc59fd95-sv8r9" event={"ID":"2a183ae3-dc4b-4f75-a9ca-4832bd5faf06","Type":"ContainerStarted","Data":"3e14e0934f277297b566bafc27a9eee8049439a3403880850589a7d7b9846417"} Jan 31 09:19:29 crc kubenswrapper[4830]: I0131 09:19:29.067592 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-54dc59fd95-sv8r9" event={"ID":"2a183ae3-dc4b-4f75-a9ca-4832bd5faf06","Type":"ContainerStarted","Data":"02485d5110c6b88cac3b44496e1451c9cb9553b4fe3f14a833ef5e41c773e726"} Jan 31 09:19:29 crc kubenswrapper[4830]: I0131 09:19:29.068352 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-54dc59fd95-sv8r9" Jan 31 09:19:29 crc kubenswrapper[4830]: I0131 09:19:29.101801 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-54dc59fd95-sv8r9" podStartSLOduration=2.625611036 podStartE2EDuration="8.10177457s" podCreationTimestamp="2026-01-31 09:19:21 +0000 UTC" firstStartedPulling="2026-01-31 09:19:22.517169409 +0000 UTC m=+1107.010531861" lastFinishedPulling="2026-01-31 09:19:27.993332943 +0000 UTC m=+1112.486695395" observedRunningTime="2026-01-31 09:19:29.094137161 +0000 UTC m=+1113.587499603" watchObservedRunningTime="2026-01-31 09:19:29.10177457 +0000 UTC m=+1113.595137032" Jan 31 09:19:42 crc kubenswrapper[4830]: I0131 09:19:42.010264 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-54dc59fd95-sv8r9" Jan 31 09:19:44 crc kubenswrapper[4830]: I0131 09:19:44.353115 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:19:44 crc kubenswrapper[4830]: I0131 09:19:44.353710 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:19:44 crc kubenswrapper[4830]: I0131 09:19:44.353819 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 09:19:44 crc kubenswrapper[4830]: I0131 09:19:44.354979 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6ae573c7c9ad02ecbf718005230310a2ac720cf9510afe4a2b4cb658fc772187"} pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 09:19:44 crc kubenswrapper[4830]: I0131 09:19:44.355075 4830 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" containerID="cri-o://6ae573c7c9ad02ecbf718005230310a2ac720cf9510afe4a2b4cb658fc772187" gracePeriod=600 Jan 31 09:19:45 crc kubenswrapper[4830]: I0131 09:19:45.212374 4830 generic.go:334] "Generic (PLEG): container finished" podID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerID="6ae573c7c9ad02ecbf718005230310a2ac720cf9510afe4a2b4cb658fc772187" exitCode=0 Jan 31 09:19:45 crc kubenswrapper[4830]: I0131 09:19:45.212479 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerDied","Data":"6ae573c7c9ad02ecbf718005230310a2ac720cf9510afe4a2b4cb658fc772187"} Jan 31 09:19:45 crc kubenswrapper[4830]: I0131 09:19:45.213364 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerStarted","Data":"67bf188d9d9b9ad6793313549c12d77b38caf6229dc0633ec340b752f089c942"} Jan 31 09:19:45 crc kubenswrapper[4830]: I0131 09:19:45.213404 4830 scope.go:117] "RemoveContainer" containerID="b9a249a59033511b4c694877132f9e35c14cbd330f48a89cd21a667a4732ff74" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.132649 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-kwwkw"] Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.136140 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-kwwkw" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.142249 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-wqd4l" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.148305 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-cpwlp"] Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.149967 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cpwlp" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.153009 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-rt69b" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.176435 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-cpwlp"] Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.192579 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-kwwkw"] Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.207837 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw"] Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.210373 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.212861 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-stnmv" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.237032 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9b5z\" (UniqueName: \"kubernetes.io/projected/1488b4ea-ba49-423e-a995-917dc9cbb9e2-kube-api-access-d9b5z\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-kwwkw\" (UID: \"1488b4ea-ba49-423e-a995-917dc9cbb9e2\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-kwwkw" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.337460 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw"] Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.340784 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8x25\" (UniqueName: \"kubernetes.io/projected/47718a89-dc4c-4f5d-bb58-aec265aa68bf-kube-api-access-p8x25\") pod \"cinder-operator-controller-manager-8d874c8fc-cpwlp\" (UID: \"47718a89-dc4c-4f5d-bb58-aec265aa68bf\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cpwlp" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.341009 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9b5z\" (UniqueName: \"kubernetes.io/projected/1488b4ea-ba49-423e-a995-917dc9cbb9e2-kube-api-access-d9b5z\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-kwwkw\" (UID: \"1488b4ea-ba49-423e-a995-917dc9cbb9e2\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-kwwkw" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.341111 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwb6h\" (UniqueName: \"kubernetes.io/projected/3f5623d3-168a-4bca-9154-ecb4c81b5b3b-kube-api-access-mwb6h\") pod \"designate-operator-controller-manager-6d9697b7f4-d8xvw\" (UID: \"3f5623d3-168a-4bca-9154-ecb4c81b5b3b\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.342810 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-hcpk8"] Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.344264 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hcpk8" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.358977 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-vn8xb" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.386072 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9b5z\" (UniqueName: \"kubernetes.io/projected/1488b4ea-ba49-423e-a995-917dc9cbb9e2-kube-api-access-d9b5z\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-kwwkw\" (UID: \"1488b4ea-ba49-423e-a995-917dc9cbb9e2\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-kwwkw" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.391908 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-hcpk8"] Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.415957 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-8wnqw"] Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.418266 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-8wnqw" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.437338 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-8wnqw"] Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.438293 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-9hbjb" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.443107 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q284\" (UniqueName: \"kubernetes.io/projected/17f5c61d-5997-482b-961a-0339cfe6c15c-kube-api-access-8q284\") pod \"glance-operator-controller-manager-8886f4c47-hcpk8\" (UID: \"17f5c61d-5997-482b-961a-0339cfe6c15c\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hcpk8" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.443222 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8x25\" (UniqueName: \"kubernetes.io/projected/47718a89-dc4c-4f5d-bb58-aec265aa68bf-kube-api-access-p8x25\") pod \"cinder-operator-controller-manager-8d874c8fc-cpwlp\" (UID: \"47718a89-dc4c-4f5d-bb58-aec265aa68bf\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cpwlp" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.443258 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwb6h\" (UniqueName: \"kubernetes.io/projected/3f5623d3-168a-4bca-9154-ecb4c81b5b3b-kube-api-access-mwb6h\") pod \"designate-operator-controller-manager-6d9697b7f4-d8xvw\" (UID: \"3f5623d3-168a-4bca-9154-ecb4c81b5b3b\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.447055 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-d9xtg"] Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.448554 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-d9xtg" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.456106 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-vkxg2" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.467156 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-kwwkw" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.495305 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-vvv24"] Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.499083 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwb6h\" (UniqueName: \"kubernetes.io/projected/3f5623d3-168a-4bca-9154-ecb4c81b5b3b-kube-api-access-mwb6h\") pod \"designate-operator-controller-manager-6d9697b7f4-d8xvw\" (UID: \"3f5623d3-168a-4bca-9154-ecb4c81b5b3b\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.507546 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.511979 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8x25\" (UniqueName: \"kubernetes.io/projected/47718a89-dc4c-4f5d-bb58-aec265aa68bf-kube-api-access-p8x25\") pod \"cinder-operator-controller-manager-8d874c8fc-cpwlp\" (UID: \"47718a89-dc4c-4f5d-bb58-aec265aa68bf\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cpwlp" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.534246 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.534500 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-xdvcw" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.535020 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-vvv24"] Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.548397 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls9jf\" (UniqueName: \"kubernetes.io/projected/4d28fd37-b97c-447a-9165-d90d11fd4698-kube-api-access-ls9jf\") pod \"horizon-operator-controller-manager-5fb775575f-d9xtg\" (UID: \"4d28fd37-b97c-447a-9165-d90d11fd4698\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-d9xtg" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.548474 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q284\" (UniqueName: \"kubernetes.io/projected/17f5c61d-5997-482b-961a-0339cfe6c15c-kube-api-access-8q284\") pod \"glance-operator-controller-manager-8886f4c47-hcpk8\" (UID: \"17f5c61d-5997-482b-961a-0339cfe6c15c\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hcpk8" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.548515 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qtzgs\" (UniqueName: \"kubernetes.io/projected/dafe4db4-4a74-4cb2-8e7f-496cfa1a1c5e-kube-api-access-qtzgs\") pod \"heat-operator-controller-manager-69d6db494d-8wnqw\" (UID: \"dafe4db4-4a74-4cb2-8e7f-496cfa1a1c5e\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-8wnqw" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.560824 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-d9xtg"] Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.608161 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-slc6p"] Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.609948 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-slc6p" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.611862 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-s4t6n" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.615967 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.622955 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-kgrns"] Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.624790 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kgrns" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.629245 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q284\" (UniqueName: \"kubernetes.io/projected/17f5c61d-5997-482b-961a-0339cfe6c15c-kube-api-access-8q284\") pod \"glance-operator-controller-manager-8886f4c47-hcpk8\" (UID: \"17f5c61d-5997-482b-961a-0339cfe6c15c\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hcpk8" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.645174 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-ghfrj" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.645401 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-4tqzd"] Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.646763 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4tqzd" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.664205 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-j97h5" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.678280 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kmkp\" (UniqueName: \"kubernetes.io/projected/0b519925-01de-4cf0-8ff8-0f97137dd3d9-kube-api-access-8kmkp\") pod \"infra-operator-controller-manager-79955696d6-vvv24\" (UID: \"0b519925-01de-4cf0-8ff8-0f97137dd3d9\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.678566 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls9jf\" (UniqueName: \"kubernetes.io/projected/4d28fd37-b97c-447a-9165-d90d11fd4698-kube-api-access-ls9jf\") pod \"horizon-operator-controller-manager-5fb775575f-d9xtg\" (UID: \"4d28fd37-b97c-447a-9165-d90d11fd4698\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-d9xtg" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.678614 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b519925-01de-4cf0-8ff8-0f97137dd3d9-cert\") pod \"infra-operator-controller-manager-79955696d6-vvv24\" (UID: \"0b519925-01de-4cf0-8ff8-0f97137dd3d9\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.678706 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtzgs\" (UniqueName: \"kubernetes.io/projected/dafe4db4-4a74-4cb2-8e7f-496cfa1a1c5e-kube-api-access-qtzgs\") pod \"heat-operator-controller-manager-69d6db494d-8wnqw\" (UID: \"dafe4db4-4a74-4cb2-8e7f-496cfa1a1c5e\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-8wnqw" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.680960 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hcpk8" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.711892 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtzgs\" (UniqueName: \"kubernetes.io/projected/dafe4db4-4a74-4cb2-8e7f-496cfa1a1c5e-kube-api-access-qtzgs\") pod \"heat-operator-controller-manager-69d6db494d-8wnqw\" (UID: \"dafe4db4-4a74-4cb2-8e7f-496cfa1a1c5e\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-8wnqw" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.728128 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-slc6p"] Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.740092 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls9jf\" (UniqueName: \"kubernetes.io/projected/4d28fd37-b97c-447a-9165-d90d11fd4698-kube-api-access-ls9jf\") pod \"horizon-operator-controller-manager-5fb775575f-d9xtg\" (UID: \"4d28fd37-b97c-447a-9165-d90d11fd4698\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-d9xtg" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.772377 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-8wnqw" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.789090 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-kgrns"] Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.794343 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg8rv\" (UniqueName: \"kubernetes.io/projected/1891b74f-fe71-4020-98a3-5796e2a67ea2-kube-api-access-qg8rv\") pod \"manila-operator-controller-manager-7dd968899f-4tqzd\" (UID: \"1891b74f-fe71-4020-98a3-5796e2a67ea2\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4tqzd" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.794526 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b519925-01de-4cf0-8ff8-0f97137dd3d9-cert\") pod \"infra-operator-controller-manager-79955696d6-vvv24\" (UID: \"0b519925-01de-4cf0-8ff8-0f97137dd3d9\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24" Jan 31 09:20:02 crc kubenswrapper[4830]: E0131 09:20:02.794937 4830 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.795284 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vp4f\" (UniqueName: \"kubernetes.io/projected/758269b2-16c6-4f5a-8f9f-875659eede84-kube-api-access-8vp4f\") pod \"keystone-operator-controller-manager-84f48565d4-kgrns\" (UID: \"758269b2-16c6-4f5a-8f9f-875659eede84\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kgrns" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.795341 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkqff\" (UniqueName: \"kubernetes.io/projected/bd972fba-0692-45af-b28c-db4929fe150a-kube-api-access-wkqff\") pod 
\"ironic-operator-controller-manager-5f4b8bd54d-slc6p\" (UID: \"bd972fba-0692-45af-b28c-db4929fe150a\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-slc6p" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.795426 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kmkp\" (UniqueName: \"kubernetes.io/projected/0b519925-01de-4cf0-8ff8-0f97137dd3d9-kube-api-access-8kmkp\") pod \"infra-operator-controller-manager-79955696d6-vvv24\" (UID: \"0b519925-01de-4cf0-8ff8-0f97137dd3d9\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24" Jan 31 09:20:02 crc kubenswrapper[4830]: E0131 09:20:02.795486 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b519925-01de-4cf0-8ff8-0f97137dd3d9-cert podName:0b519925-01de-4cf0-8ff8-0f97137dd3d9 nodeName:}" failed. No retries permitted until 2026-01-31 09:20:03.295453446 +0000 UTC m=+1147.788816078 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0b519925-01de-4cf0-8ff8-0f97137dd3d9-cert") pod "infra-operator-controller-manager-79955696d6-vvv24" (UID: "0b519925-01de-4cf0-8ff8-0f97137dd3d9") : secret "infra-operator-webhook-server-cert" not found Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.808472 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cpwlp" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.831231 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kmkp\" (UniqueName: \"kubernetes.io/projected/0b519925-01de-4cf0-8ff8-0f97137dd3d9-kube-api-access-8kmkp\") pod \"infra-operator-controller-manager-79955696d6-vvv24\" (UID: \"0b519925-01de-4cf0-8ff8-0f97137dd3d9\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.860856 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-4tqzd"] Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.875788 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-sbhfn"] Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.877600 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sbhfn" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.881651 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-lhj77" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.886299 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-sbhfn"] Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.934709 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg8rv\" (UniqueName: \"kubernetes.io/projected/1891b74f-fe71-4020-98a3-5796e2a67ea2-kube-api-access-qg8rv\") pod \"manila-operator-controller-manager-7dd968899f-4tqzd\" (UID: \"1891b74f-fe71-4020-98a3-5796e2a67ea2\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4tqzd" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.935113 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vp4f\" (UniqueName: \"kubernetes.io/projected/758269b2-16c6-4f5a-8f9f-875659eede84-kube-api-access-8vp4f\") pod \"keystone-operator-controller-manager-84f48565d4-kgrns\" (UID: \"758269b2-16c6-4f5a-8f9f-875659eede84\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kgrns" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.935182 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkqff\" (UniqueName: \"kubernetes.io/projected/bd972fba-0692-45af-b28c-db4929fe150a-kube-api-access-wkqff\") pod \"ironic-operator-controller-manager-5f4b8bd54d-slc6p\" (UID: \"bd972fba-0692-45af-b28c-db4929fe150a\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-slc6p" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.936699 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-d9xtg" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.975369 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkqff\" (UniqueName: \"kubernetes.io/projected/bd972fba-0692-45af-b28c-db4929fe150a-kube-api-access-wkqff\") pod \"ironic-operator-controller-manager-5f4b8bd54d-slc6p\" (UID: \"bd972fba-0692-45af-b28c-db4929fe150a\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-slc6p" Jan 31 09:20:02 crc kubenswrapper[4830]: I0131 09:20:02.982319 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vp4f\" (UniqueName: \"kubernetes.io/projected/758269b2-16c6-4f5a-8f9f-875659eede84-kube-api-access-8vp4f\") pod \"keystone-operator-controller-manager-84f48565d4-kgrns\" (UID: \"758269b2-16c6-4f5a-8f9f-875659eede84\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kgrns" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.000398 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg8rv\" (UniqueName: \"kubernetes.io/projected/1891b74f-fe71-4020-98a3-5796e2a67ea2-kube-api-access-qg8rv\") pod \"manila-operator-controller-manager-7dd968899f-4tqzd\" (UID: \"1891b74f-fe71-4020-98a3-5796e2a67ea2\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4tqzd" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.011542 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-sjf7r"] Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.041811 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56xtr\" (UniqueName: \"kubernetes.io/projected/0e056a0c-ee06-43aa-bf36-35f202f76b17-kube-api-access-56xtr\") pod \"mariadb-operator-controller-manager-67bf948998-sbhfn\" (UID: \"0e056a0c-ee06-43aa-bf36-35f202f76b17\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sbhfn" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.043381 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sjf7r" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.048935 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-c8c4v" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.070127 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-slc6p" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.088231 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kgrns" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.120342 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4tqzd" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.144855 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhzvl\" (UniqueName: \"kubernetes.io/projected/617226b5-2b2c-4f6c-902d-9784c8a283de-kube-api-access-mhzvl\") pod \"neutron-operator-controller-manager-585dbc889-sjf7r\" (UID: \"617226b5-2b2c-4f6c-902d-9784c8a283de\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sjf7r" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.150003 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56xtr\" (UniqueName: \"kubernetes.io/projected/0e056a0c-ee06-43aa-bf36-35f202f76b17-kube-api-access-56xtr\") pod \"mariadb-operator-controller-manager-67bf948998-sbhfn\" (UID: \"0e056a0c-ee06-43aa-bf36-35f202f76b17\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sbhfn" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.177599 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-rkvx7"] Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.180197 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56xtr\" (UniqueName: \"kubernetes.io/projected/0e056a0c-ee06-43aa-bf36-35f202f76b17-kube-api-access-56xtr\") pod \"mariadb-operator-controller-manager-67bf948998-sbhfn\" (UID: \"0e056a0c-ee06-43aa-bf36-35f202f76b17\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sbhfn" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.184125 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rkvx7" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.187950 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-skk5z" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.190995 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-sjf7r"] Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.199826 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-rkvx7"] Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.240831 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-ld2fb"] Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.242705 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-ld2fb" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.249412 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-r6cx9" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.254046 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhzvl\" (UniqueName: \"kubernetes.io/projected/617226b5-2b2c-4f6c-902d-9784c8a283de-kube-api-access-mhzvl\") pod \"neutron-operator-controller-manager-585dbc889-sjf7r\" (UID: \"617226b5-2b2c-4f6c-902d-9784c8a283de\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sjf7r" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.303340 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sbhfn" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.322917 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhzvl\" (UniqueName: \"kubernetes.io/projected/617226b5-2b2c-4f6c-902d-9784c8a283de-kube-api-access-mhzvl\") pod \"neutron-operator-controller-manager-585dbc889-sjf7r\" (UID: \"617226b5-2b2c-4f6c-902d-9784c8a283de\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sjf7r" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.351055 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-ld2fb"] Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.357484 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b519925-01de-4cf0-8ff8-0f97137dd3d9-cert\") pod \"infra-operator-controller-manager-79955696d6-vvv24\" (UID: \"0b519925-01de-4cf0-8ff8-0f97137dd3d9\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.358819 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpsl8\" (UniqueName: \"kubernetes.io/projected/e681f66d-3695-4b59-9ef1-6f9bbf007ed2-kube-api-access-xpsl8\") pod \"nova-operator-controller-manager-55bff696bd-rkvx7\" (UID: \"e681f66d-3695-4b59-9ef1-6f9bbf007ed2\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rkvx7" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.359172 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5ng5\" (UniqueName: \"kubernetes.io/projected/f101dda8-ba4c-42c2-a8e3-9a5e53c2ec8a-kube-api-access-d5ng5\") pod \"octavia-operator-controller-manager-6687f8d877-ld2fb\" (UID: \"f101dda8-ba4c-42c2-a8e3-9a5e53c2ec8a\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-ld2fb" Jan 31 09:20:03 crc kubenswrapper[4830]: E0131 09:20:03.357909 4830 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 09:20:03 crc kubenswrapper[4830]: E0131 09:20:03.361803 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b519925-01de-4cf0-8ff8-0f97137dd3d9-cert podName:0b519925-01de-4cf0-8ff8-0f97137dd3d9 nodeName:}" failed. 
No retries permitted until 2026-01-31 09:20:04.361763145 +0000 UTC m=+1148.855125767 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0b519925-01de-4cf0-8ff8-0f97137dd3d9-cert") pod "infra-operator-controller-manager-79955696d6-vvv24" (UID: "0b519925-01de-4cf0-8ff8-0f97137dd3d9") : secret "infra-operator-webhook-server-cert" not found Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.372485 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-gbjts"] Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.378447 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gbjts" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.382180 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-db6st" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.386000 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm"] Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.387548 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.391881 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-w9nhx" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.396908 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.397346 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-gktql"] Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.409040 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gktql" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.412674 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-2mb54" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.429621 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sjf7r" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.430675 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-gbjts"] Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.468734 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpsl8\" (UniqueName: \"kubernetes.io/projected/e681f66d-3695-4b59-9ef1-6f9bbf007ed2-kube-api-access-xpsl8\") pod \"nova-operator-controller-manager-55bff696bd-rkvx7\" (UID: \"e681f66d-3695-4b59-9ef1-6f9bbf007ed2\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rkvx7" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.494550 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5ng5\" (UniqueName: \"kubernetes.io/projected/f101dda8-ba4c-42c2-a8e3-9a5e53c2ec8a-kube-api-access-d5ng5\") pod \"octavia-operator-controller-manager-6687f8d877-ld2fb\" (UID: \"f101dda8-ba4c-42c2-a8e3-9a5e53c2ec8a\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-ld2fb" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.495162 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh6c7\" (UniqueName: \"kubernetes.io/projected/7ff06918-8b3c-48cb-bd11-1254b9bbc276-kube-api-access-jh6c7\") pod \"ovn-operator-controller-manager-788c46999f-gbjts\" (UID: \"7ff06918-8b3c-48cb-bd11-1254b9bbc276\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gbjts" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.536289 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpsl8\" (UniqueName: \"kubernetes.io/projected/e681f66d-3695-4b59-9ef1-6f9bbf007ed2-kube-api-access-xpsl8\") pod \"nova-operator-controller-manager-55bff696bd-rkvx7\" (UID: \"e681f66d-3695-4b59-9ef1-6f9bbf007ed2\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rkvx7" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.551821 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm"] Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.605614 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-2l42c"] Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.607948 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2l42c" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.613762 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh6c7\" (UniqueName: \"kubernetes.io/projected/7ff06918-8b3c-48cb-bd11-1254b9bbc276-kube-api-access-jh6c7\") pod \"ovn-operator-controller-manager-788c46999f-gbjts\" (UID: \"7ff06918-8b3c-48cb-bd11-1254b9bbc276\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gbjts" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.613923 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/250c9f1b-d78c-488e-b28e-6c2b783edd9b-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm\" (UID: \"250c9f1b-d78c-488e-b28e-6c2b783edd9b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.614275 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds9lj\" (UniqueName: \"kubernetes.io/projected/21448bf1-0318-4469-baff-d35cf905337b-kube-api-access-ds9lj\") pod \"swift-operator-controller-manager-68fc8c869-gktql\" (UID: \"21448bf1-0318-4469-baff-d35cf905337b\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gktql" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.614327 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drmfv\" (UniqueName: \"kubernetes.io/projected/250c9f1b-d78c-488e-b28e-6c2b783edd9b-kube-api-access-drmfv\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm\" (UID: \"250c9f1b-d78c-488e-b28e-6c2b783edd9b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.625695 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5ng5\" (UniqueName: \"kubernetes.io/projected/f101dda8-ba4c-42c2-a8e3-9a5e53c2ec8a-kube-api-access-d5ng5\") pod \"octavia-operator-controller-manager-6687f8d877-ld2fb\" (UID: \"f101dda8-ba4c-42c2-a8e3-9a5e53c2ec8a\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-ld2fb" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.626252 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-v4xt2" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.664920 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rkvx7" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.665008 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-ld2fb" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.665086 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh6c7\" (UniqueName: \"kubernetes.io/projected/7ff06918-8b3c-48cb-bd11-1254b9bbc276-kube-api-access-jh6c7\") pod \"ovn-operator-controller-manager-788c46999f-gbjts\" (UID: \"7ff06918-8b3c-48cb-bd11-1254b9bbc276\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gbjts" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.691194 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-gktql"] Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.732213 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-57fbdcd888-cp9fj"] Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.733778 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-57fbdcd888-cp9fj" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.738607 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drmfv\" (UniqueName: \"kubernetes.io/projected/250c9f1b-d78c-488e-b28e-6c2b783edd9b-kube-api-access-drmfv\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm\" (UID: \"250c9f1b-d78c-488e-b28e-6c2b783edd9b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.738734 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/250c9f1b-d78c-488e-b28e-6c2b783edd9b-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm\" (UID: \"250c9f1b-d78c-488e-b28e-6c2b783edd9b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.738821 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds9lj\" (UniqueName: \"kubernetes.io/projected/21448bf1-0318-4469-baff-d35cf905337b-kube-api-access-ds9lj\") pod \"swift-operator-controller-manager-68fc8c869-gktql\" (UID: \"21448bf1-0318-4469-baff-d35cf905337b\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gktql" Jan 31 09:20:03 crc kubenswrapper[4830]: E0131 09:20:03.739293 4830 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 09:20:03 crc kubenswrapper[4830]: E0131 09:20:03.739338 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/250c9f1b-d78c-488e-b28e-6c2b783edd9b-cert podName:250c9f1b-d78c-488e-b28e-6c2b783edd9b nodeName:}" failed. No retries permitted until 2026-01-31 09:20:04.239324049 +0000 UTC m=+1148.732686491 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/250c9f1b-d78c-488e-b28e-6c2b783edd9b-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" (UID: "250c9f1b-d78c-488e-b28e-6c2b783edd9b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.741945 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-bcdgf" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.782301 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-czm79"] Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.798359 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-czm79" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.813420 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-cjz4v" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.822283 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds9lj\" (UniqueName: \"kubernetes.io/projected/21448bf1-0318-4469-baff-d35cf905337b-kube-api-access-ds9lj\") pod \"swift-operator-controller-manager-68fc8c869-gktql\" (UID: \"21448bf1-0318-4469-baff-d35cf905337b\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gktql" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.835441 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gbjts" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.841646 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drmfv\" (UniqueName: \"kubernetes.io/projected/250c9f1b-d78c-488e-b28e-6c2b783edd9b-kube-api-access-drmfv\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm\" (UID: \"250c9f1b-d78c-488e-b28e-6c2b783edd9b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.843403 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt9bh\" (UniqueName: \"kubernetes.io/projected/388d9bc4-698e-4dea-8029-aa32433cf734-kube-api-access-xt9bh\") pod \"placement-operator-controller-manager-5b964cf4cd-2l42c\" (UID: \"388d9bc4-698e-4dea-8029-aa32433cf734\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2l42c" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.843583 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd2vt\" (UniqueName: \"kubernetes.io/projected/2365408f-7d7a-482c-87c0-0452fa330e4e-kube-api-access-zd2vt\") pod \"telemetry-operator-controller-manager-57fbdcd888-cp9fj\" (UID: \"2365408f-7d7a-482c-87c0-0452fa330e4e\") " pod="openstack-operators/telemetry-operator-controller-manager-57fbdcd888-cp9fj" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.863849 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-2l42c"] Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.883951 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/telemetry-operator-controller-manager-57fbdcd888-cp9fj"] Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.909215 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-62c8t"] Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.911104 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-62c8t" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.914892 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-nngxs" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.928975 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-czm79"] Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.940026 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-62c8t"] Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.947191 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8hvj\" (UniqueName: \"kubernetes.io/projected/68f255f0-5951-47f2-979e-af80607453e8-kube-api-access-h8hvj\") pod \"test-operator-controller-manager-56f8bfcd9f-czm79\" (UID: \"68f255f0-5951-47f2-979e-af80607453e8\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-czm79" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.947470 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt9bh\" (UniqueName: \"kubernetes.io/projected/388d9bc4-698e-4dea-8029-aa32433cf734-kube-api-access-xt9bh\") pod \"placement-operator-controller-manager-5b964cf4cd-2l42c\" (UID: \"388d9bc4-698e-4dea-8029-aa32433cf734\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2l42c" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.947539 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd2vt\" (UniqueName: \"kubernetes.io/projected/2365408f-7d7a-482c-87c0-0452fa330e4e-kube-api-access-zd2vt\") pod \"telemetry-operator-controller-manager-57fbdcd888-cp9fj\" (UID: \"2365408f-7d7a-482c-87c0-0452fa330e4e\") " pod="openstack-operators/telemetry-operator-controller-manager-57fbdcd888-cp9fj" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.960903 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5"] Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.965843 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.974942 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5"] Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.977842 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-jthfm" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.978181 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.979039 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.988289 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gktql" Jan 31 09:20:03 crc kubenswrapper[4830]: I0131 09:20:03.998254 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt9bh\" (UniqueName: \"kubernetes.io/projected/388d9bc4-698e-4dea-8029-aa32433cf734-kube-api-access-xt9bh\") pod \"placement-operator-controller-manager-5b964cf4cd-2l42c\" (UID: \"388d9bc4-698e-4dea-8029-aa32433cf734\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2l42c" Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.006913 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2l42c" Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.022927 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-slhpt"] Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.030763 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd2vt\" (UniqueName: \"kubernetes.io/projected/2365408f-7d7a-482c-87c0-0452fa330e4e-kube-api-access-zd2vt\") pod \"telemetry-operator-controller-manager-57fbdcd888-cp9fj\" (UID: \"2365408f-7d7a-482c-87c0-0452fa330e4e\") " pod="openstack-operators/telemetry-operator-controller-manager-57fbdcd888-cp9fj" Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.034135 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-slhpt" Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.042611 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-slhpt"] Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.043328 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-gtsdm" Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.049570 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mcnz\" (UniqueName: \"kubernetes.io/projected/d4a8ef63-6ba0-4bb4-93b5-dc9fc1134bb5-kube-api-access-2mcnz\") pod \"watcher-operator-controller-manager-564965969-62c8t\" (UID: \"d4a8ef63-6ba0-4bb4-93b5-dc9fc1134bb5\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-62c8t" Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.049650 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-webhook-certs\") pod \"openstack-operator-controller-manager-55f549db95-67sj5\" (UID: \"ce245704-5b88-4544-ae21-bcb30ff5d0d0\") " pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.049824 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-metrics-certs\") pod \"openstack-operator-controller-manager-55f549db95-67sj5\" (UID: \"ce245704-5b88-4544-ae21-bcb30ff5d0d0\") " pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.049856 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8hvj\" (UniqueName: \"kubernetes.io/projected/68f255f0-5951-47f2-979e-af80607453e8-kube-api-access-h8hvj\") pod \"test-operator-controller-manager-56f8bfcd9f-czm79\" (UID: \"68f255f0-5951-47f2-979e-af80607453e8\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-czm79" Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.052513 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99jzw\" (UniqueName: \"kubernetes.io/projected/ce245704-5b88-4544-ae21-bcb30ff5d0d0-kube-api-access-99jzw\") pod \"openstack-operator-controller-manager-55f549db95-67sj5\" (UID: \"ce245704-5b88-4544-ae21-bcb30ff5d0d0\") " pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.061147 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-57fbdcd888-cp9fj" Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.107839 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8hvj\" (UniqueName: \"kubernetes.io/projected/68f255f0-5951-47f2-979e-af80607453e8-kube-api-access-h8hvj\") pod \"test-operator-controller-manager-56f8bfcd9f-czm79\" (UID: \"68f255f0-5951-47f2-979e-af80607453e8\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-czm79" Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.157865 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mcnz\" (UniqueName: \"kubernetes.io/projected/d4a8ef63-6ba0-4bb4-93b5-dc9fc1134bb5-kube-api-access-2mcnz\") pod \"watcher-operator-controller-manager-564965969-62c8t\" (UID: \"d4a8ef63-6ba0-4bb4-93b5-dc9fc1134bb5\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-62c8t" Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.157953 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-webhook-certs\") pod \"openstack-operator-controller-manager-55f549db95-67sj5\" (UID: \"ce245704-5b88-4544-ae21-bcb30ff5d0d0\") " pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.158184 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-metrics-certs\") pod \"openstack-operator-controller-manager-55f549db95-67sj5\" (UID: \"ce245704-5b88-4544-ae21-bcb30ff5d0d0\") " pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.158261 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99jzw\" (UniqueName: \"kubernetes.io/projected/ce245704-5b88-4544-ae21-bcb30ff5d0d0-kube-api-access-99jzw\") pod \"openstack-operator-controller-manager-55f549db95-67sj5\" (UID: \"ce245704-5b88-4544-ae21-bcb30ff5d0d0\") " pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.158331 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htt9r\" (UniqueName: \"kubernetes.io/projected/abf5a919-4697-4468-b9e4-8a4617e3a5ca-kube-api-access-htt9r\") pod \"rabbitmq-cluster-operator-manager-668c99d594-slhpt\" (UID: \"abf5a919-4697-4468-b9e4-8a4617e3a5ca\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-slhpt" Jan 31 09:20:04 crc kubenswrapper[4830]: E0131 09:20:04.158875 4830 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 09:20:04 crc kubenswrapper[4830]: E0131 09:20:04.159268 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-webhook-certs podName:ce245704-5b88-4544-ae21-bcb30ff5d0d0 nodeName:}" failed. No retries permitted until 2026-01-31 09:20:04.65894768 +0000 UTC m=+1149.152310122 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-webhook-certs") pod "openstack-operator-controller-manager-55f549db95-67sj5" (UID: "ce245704-5b88-4544-ae21-bcb30ff5d0d0") : secret "webhook-server-cert" not found Jan 31 09:20:04 crc kubenswrapper[4830]: E0131 09:20:04.159405 4830 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 09:20:04 crc kubenswrapper[4830]: E0131 09:20:04.159470 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-metrics-certs podName:ce245704-5b88-4544-ae21-bcb30ff5d0d0 nodeName:}" failed. No retries permitted until 2026-01-31 09:20:04.659447514 +0000 UTC m=+1149.152810356 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-metrics-certs") pod "openstack-operator-controller-manager-55f549db95-67sj5" (UID: "ce245704-5b88-4544-ae21-bcb30ff5d0d0") : secret "metrics-server-cert" not found Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.204798 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mcnz\" (UniqueName: \"kubernetes.io/projected/d4a8ef63-6ba0-4bb4-93b5-dc9fc1134bb5-kube-api-access-2mcnz\") pod \"watcher-operator-controller-manager-564965969-62c8t\" (UID: \"d4a8ef63-6ba0-4bb4-93b5-dc9fc1134bb5\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-62c8t" Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.216416 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-kwwkw"] Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.217212 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99jzw\" (UniqueName: \"kubernetes.io/projected/ce245704-5b88-4544-ae21-bcb30ff5d0d0-kube-api-access-99jzw\") pod \"openstack-operator-controller-manager-55f549db95-67sj5\" (UID: \"ce245704-5b88-4544-ae21-bcb30ff5d0d0\") " pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.261275 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/250c9f1b-d78c-488e-b28e-6c2b783edd9b-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm\" (UID: \"250c9f1b-d78c-488e-b28e-6c2b783edd9b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.261435 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htt9r\" (UniqueName: \"kubernetes.io/projected/abf5a919-4697-4468-b9e4-8a4617e3a5ca-kube-api-access-htt9r\") pod \"rabbitmq-cluster-operator-manager-668c99d594-slhpt\" (UID: \"abf5a919-4697-4468-b9e4-8a4617e3a5ca\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-slhpt" Jan 31 09:20:04 crc kubenswrapper[4830]: E0131 09:20:04.261484 4830 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 09:20:04 crc kubenswrapper[4830]: E0131 09:20:04.261592 4830 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/250c9f1b-d78c-488e-b28e-6c2b783edd9b-cert podName:250c9f1b-d78c-488e-b28e-6c2b783edd9b nodeName:}" failed. No retries permitted until 2026-01-31 09:20:05.261567094 +0000 UTC m=+1149.754929706 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/250c9f1b-d78c-488e-b28e-6c2b783edd9b-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" (UID: "250c9f1b-d78c-488e-b28e-6c2b783edd9b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.288786 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htt9r\" (UniqueName: \"kubernetes.io/projected/abf5a919-4697-4468-b9e4-8a4617e3a5ca-kube-api-access-htt9r\") pod \"rabbitmq-cluster-operator-manager-668c99d594-slhpt\" (UID: \"abf5a919-4697-4468-b9e4-8a4617e3a5ca\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-slhpt" Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.349218 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw"] Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.367966 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b519925-01de-4cf0-8ff8-0f97137dd3d9-cert\") pod \"infra-operator-controller-manager-79955696d6-vvv24\" (UID: \"0b519925-01de-4cf0-8ff8-0f97137dd3d9\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24" Jan 31 09:20:04 crc kubenswrapper[4830]: E0131 09:20:04.383850 4830 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 09:20:04 crc kubenswrapper[4830]: E0131 09:20:04.383921 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b519925-01de-4cf0-8ff8-0f97137dd3d9-cert podName:0b519925-01de-4cf0-8ff8-0f97137dd3d9 nodeName:}" failed. No retries permitted until 2026-01-31 09:20:06.383901375 +0000 UTC m=+1150.877263817 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0b519925-01de-4cf0-8ff8-0f97137dd3d9-cert") pod "infra-operator-controller-manager-79955696d6-vvv24" (UID: "0b519925-01de-4cf0-8ff8-0f97137dd3d9") : secret "infra-operator-webhook-server-cert" not found Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.389568 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-czm79" Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.409021 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-62c8t" Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.446672 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-slhpt" Jan 31 09:20:04 crc kubenswrapper[4830]: W0131 09:20:04.449567 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47718a89_dc4c_4f5d_bb58_aec265aa68bf.slice/crio-a2d5e9e78372e323d2f8cf43b80ce864de7d6941a7b437e869ca6f3f4097e7ca WatchSource:0}: Error finding container a2d5e9e78372e323d2f8cf43b80ce864de7d6941a7b437e869ca6f3f4097e7ca: Status 404 returned error can't find the container with id a2d5e9e78372e323d2f8cf43b80ce864de7d6941a7b437e869ca6f3f4097e7ca Jan 31 09:20:04 crc kubenswrapper[4830]: W0131 09:20:04.456311 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17f5c61d_5997_482b_961a_0339cfe6c15c.slice/crio-dd97e0d66412ccf93bf7505189ce9defc0062e5bcb80892278f88253ca9deb92 WatchSource:0}: Error finding container dd97e0d66412ccf93bf7505189ce9defc0062e5bcb80892278f88253ca9deb92: Status 404 returned error can't find the container with id dd97e0d66412ccf93bf7505189ce9defc0062e5bcb80892278f88253ca9deb92 Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.541790 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-cpwlp"] Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.621974 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw" event={"ID":"3f5623d3-168a-4bca-9154-ecb4c81b5b3b","Type":"ContainerStarted","Data":"fe5c4675caa53629c2695054f64d394d9b9464d6ac3c3eb6f284a2dc1f9d3263"} Jan 31 09:20:04 crc kubenswrapper[4830]: W0131 09:20:04.645700 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d28fd37_b97c_447a_9165_d90d11fd4698.slice/crio-2a461dfd1cbeaaed31ffdfbf139a5c65630564a186e3bb3bdb13fa579303989e WatchSource:0}: Error finding container 2a461dfd1cbeaaed31ffdfbf139a5c65630564a186e3bb3bdb13fa579303989e: Status 404 returned error can't find the container with id 2a461dfd1cbeaaed31ffdfbf139a5c65630564a186e3bb3bdb13fa579303989e Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.645897 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cpwlp" event={"ID":"47718a89-dc4c-4f5d-bb58-aec265aa68bf","Type":"ContainerStarted","Data":"a2d5e9e78372e323d2f8cf43b80ce864de7d6941a7b437e869ca6f3f4097e7ca"} Jan 31 09:20:04 crc kubenswrapper[4830]: W0131 09:20:04.648664 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1891b74f_fe71_4020_98a3_5796e2a67ea2.slice/crio-539a1c6e56255d69ba0d1fc5a7194c8b1cb1778814d8439573be9bf48ada6f05 WatchSource:0}: Error finding container 539a1c6e56255d69ba0d1fc5a7194c8b1cb1778814d8439573be9bf48ada6f05: Status 404 returned error can't find the container with id 539a1c6e56255d69ba0d1fc5a7194c8b1cb1778814d8439573be9bf48ada6f05 Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.652136 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hcpk8" event={"ID":"17f5c61d-5997-482b-961a-0339cfe6c15c","Type":"ContainerStarted","Data":"dd97e0d66412ccf93bf7505189ce9defc0062e5bcb80892278f88253ca9deb92"} Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 
09:20:04.656026 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-8wnqw" event={"ID":"dafe4db4-4a74-4cb2-8e7f-496cfa1a1c5e","Type":"ContainerStarted","Data":"d4b3b22e9096baee65ed8433e4b5a611b25e5cf46e383324f91516a37fff0588"} Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.657498 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-kwwkw" event={"ID":"1488b4ea-ba49-423e-a995-917dc9cbb9e2","Type":"ContainerStarted","Data":"67a1e806b38d3cd19fa7c11e8d419aee9bc79e000d2e8ea11f5b1cc6d2334468"} Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.658522 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-8wnqw"] Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.673930 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-metrics-certs\") pod \"openstack-operator-controller-manager-55f549db95-67sj5\" (UID: \"ce245704-5b88-4544-ae21-bcb30ff5d0d0\") " pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.674075 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-webhook-certs\") pod \"openstack-operator-controller-manager-55f549db95-67sj5\" (UID: \"ce245704-5b88-4544-ae21-bcb30ff5d0d0\") " pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" Jan 31 09:20:04 crc kubenswrapper[4830]: E0131 09:20:04.674331 4830 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 09:20:04 crc kubenswrapper[4830]: E0131 09:20:04.674414 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-webhook-certs podName:ce245704-5b88-4544-ae21-bcb30ff5d0d0 nodeName:}" failed. No retries permitted until 2026-01-31 09:20:05.67439118 +0000 UTC m=+1150.167753622 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-webhook-certs") pod "openstack-operator-controller-manager-55f549db95-67sj5" (UID: "ce245704-5b88-4544-ae21-bcb30ff5d0d0") : secret "webhook-server-cert" not found Jan 31 09:20:04 crc kubenswrapper[4830]: E0131 09:20:04.674946 4830 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 09:20:04 crc kubenswrapper[4830]: E0131 09:20:04.674985 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-metrics-certs podName:ce245704-5b88-4544-ae21-bcb30ff5d0d0 nodeName:}" failed. No retries permitted until 2026-01-31 09:20:05.674976877 +0000 UTC m=+1150.168339319 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-metrics-certs") pod "openstack-operator-controller-manager-55f549db95-67sj5" (UID: "ce245704-5b88-4544-ae21-bcb30ff5d0d0") : secret "metrics-server-cert" not found Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.691900 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-hcpk8"] Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.716157 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-4tqzd"] Jan 31 09:20:04 crc kubenswrapper[4830]: I0131 09:20:04.730896 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-d9xtg"] Jan 31 09:20:05 crc kubenswrapper[4830]: I0131 09:20:05.064052 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-slc6p"] Jan 31 09:20:05 crc kubenswrapper[4830]: W0131 09:20:05.076558 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode681f66d_3695_4b59_9ef1_6f9bbf007ed2.slice/crio-7d6132349e7006e7624ce6bfdf3d4b66da03d271adb938170e7cc9a113a7143d WatchSource:0}: Error finding container 7d6132349e7006e7624ce6bfdf3d4b66da03d271adb938170e7cc9a113a7143d: Status 404 returned error can't find the container with id 7d6132349e7006e7624ce6bfdf3d4b66da03d271adb938170e7cc9a113a7143d Jan 31 09:20:05 crc kubenswrapper[4830]: I0131 09:20:05.077518 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-rkvx7"] Jan 31 09:20:05 crc kubenswrapper[4830]: I0131 09:20:05.087129 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-sjf7r"] Jan 31 09:20:05 crc kubenswrapper[4830]: I0131 09:20:05.112056 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-kgrns"] Jan 31 09:20:05 crc kubenswrapper[4830]: I0131 09:20:05.147687 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-sbhfn"] Jan 31 09:20:05 crc kubenswrapper[4830]: W0131 09:20:05.197466 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd972fba_0692_45af_b28c_db4929fe150a.slice/crio-7b09cf3c7845f70f702644f324df76459fbe679a258c713d86cb8a1c30660c2c WatchSource:0}: Error finding container 7b09cf3c7845f70f702644f324df76459fbe679a258c713d86cb8a1c30660c2c: Status 404 returned error can't find the container with id 7b09cf3c7845f70f702644f324df76459fbe679a258c713d86cb8a1c30660c2c Jan 31 09:20:05 crc kubenswrapper[4830]: I0131 09:20:05.293130 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/250c9f1b-d78c-488e-b28e-6c2b783edd9b-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm\" (UID: \"250c9f1b-d78c-488e-b28e-6c2b783edd9b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" Jan 31 09:20:05 crc kubenswrapper[4830]: E0131 09:20:05.294274 4830 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 09:20:05 crc kubenswrapper[4830]: E0131 09:20:05.294320 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/250c9f1b-d78c-488e-b28e-6c2b783edd9b-cert podName:250c9f1b-d78c-488e-b28e-6c2b783edd9b nodeName:}" failed. No retries permitted until 2026-01-31 09:20:07.294306409 +0000 UTC m=+1151.787668851 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/250c9f1b-d78c-488e-b28e-6c2b783edd9b-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" (UID: "250c9f1b-d78c-488e-b28e-6c2b783edd9b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 09:20:05 crc kubenswrapper[4830]: I0131 09:20:05.408749 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-gbjts"] Jan 31 09:20:05 crc kubenswrapper[4830]: I0131 09:20:05.418935 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-ld2fb"] Jan 31 09:20:05 crc kubenswrapper[4830]: I0131 09:20:05.444085 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-gktql"] Jan 31 09:20:05 crc kubenswrapper[4830]: I0131 09:20:05.704194 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-webhook-certs\") pod \"openstack-operator-controller-manager-55f549db95-67sj5\" (UID: \"ce245704-5b88-4544-ae21-bcb30ff5d0d0\") " pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" Jan 31 09:20:05 crc kubenswrapper[4830]: I0131 09:20:05.704319 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-metrics-certs\") pod \"openstack-operator-controller-manager-55f549db95-67sj5\" (UID: \"ce245704-5b88-4544-ae21-bcb30ff5d0d0\") " pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" Jan 31 09:20:05 crc kubenswrapper[4830]: E0131 09:20:05.704512 4830 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 09:20:05 crc kubenswrapper[4830]: E0131 09:20:05.704584 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-metrics-certs podName:ce245704-5b88-4544-ae21-bcb30ff5d0d0 nodeName:}" failed. No retries permitted until 2026-01-31 09:20:07.704565361 +0000 UTC m=+1152.197927803 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-metrics-certs") pod "openstack-operator-controller-manager-55f549db95-67sj5" (UID: "ce245704-5b88-4544-ae21-bcb30ff5d0d0") : secret "metrics-server-cert" not found Jan 31 09:20:05 crc kubenswrapper[4830]: E0131 09:20:05.705172 4830 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 09:20:05 crc kubenswrapper[4830]: E0131 09:20:05.705200 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-webhook-certs podName:ce245704-5b88-4544-ae21-bcb30ff5d0d0 nodeName:}" failed. No retries permitted until 2026-01-31 09:20:07.705191969 +0000 UTC m=+1152.198554411 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-webhook-certs") pod "openstack-operator-controller-manager-55f549db95-67sj5" (UID: "ce245704-5b88-4544-ae21-bcb30ff5d0d0") : secret "webhook-server-cert" not found Jan 31 09:20:05 crc kubenswrapper[4830]: I0131 09:20:05.738130 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-2l42c"] Jan 31 09:20:05 crc kubenswrapper[4830]: I0131 09:20:05.770287 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-ld2fb" event={"ID":"f101dda8-ba4c-42c2-a8e3-9a5e53c2ec8a","Type":"ContainerStarted","Data":"c74efe44c2a8886320ffc2fe7c5194cfbab51ea5e66315dc1c9c5089b33f3311"} Jan 31 09:20:05 crc kubenswrapper[4830]: I0131 09:20:05.781359 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-57fbdcd888-cp9fj"] Jan 31 09:20:05 crc kubenswrapper[4830]: I0131 09:20:05.783404 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sjf7r" event={"ID":"617226b5-2b2c-4f6c-902d-9784c8a283de","Type":"ContainerStarted","Data":"1c5fb02532b827a6f42fbe669505eaee601e7dced2c03dd9b3bb9263f7f6e52b"} Jan 31 09:20:05 crc kubenswrapper[4830]: I0131 09:20:05.784928 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-slc6p" event={"ID":"bd972fba-0692-45af-b28c-db4929fe150a","Type":"ContainerStarted","Data":"7b09cf3c7845f70f702644f324df76459fbe679a258c713d86cb8a1c30660c2c"} Jan 31 09:20:05 crc kubenswrapper[4830]: I0131 09:20:05.807629 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rkvx7" event={"ID":"e681f66d-3695-4b59-9ef1-6f9bbf007ed2","Type":"ContainerStarted","Data":"7d6132349e7006e7624ce6bfdf3d4b66da03d271adb938170e7cc9a113a7143d"} Jan 31 09:20:05 crc kubenswrapper[4830]: I0131 09:20:05.833205 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sbhfn" event={"ID":"0e056a0c-ee06-43aa-bf36-35f202f76b17","Type":"ContainerStarted","Data":"cc7476c38e80d4334af761fa1c7d1faccdb61cf8a184b847789bf790fbcfa51c"} Jan 31 09:20:05 crc kubenswrapper[4830]: I0131 09:20:05.848668 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-czm79"] Jan 31 09:20:05 crc kubenswrapper[4830]: I0131 09:20:05.895104 4830 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gbjts" event={"ID":"7ff06918-8b3c-48cb-bd11-1254b9bbc276","Type":"ContainerStarted","Data":"7aee252a875d6797657511b6a0acbdac1545774622a3c1de44009c6dfe65efe0"} Jan 31 09:20:05 crc kubenswrapper[4830]: I0131 09:20:05.901112 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kgrns" event={"ID":"758269b2-16c6-4f5a-8f9f-875659eede84","Type":"ContainerStarted","Data":"f9293dafd909bc7dacf51fdc5472a44848ec587fb8a6cd0480e59c6fbced9031"} Jan 31 09:20:05 crc kubenswrapper[4830]: I0131 09:20:05.913326 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gktql" event={"ID":"21448bf1-0318-4469-baff-d35cf905337b","Type":"ContainerStarted","Data":"a2b199ca9360d0e4e9735b5b17977d1d1ef2012718b88d3a7588cb5d7d4b929b"} Jan 31 09:20:05 crc kubenswrapper[4830]: I0131 09:20:05.941354 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-d9xtg" event={"ID":"4d28fd37-b97c-447a-9165-d90d11fd4698","Type":"ContainerStarted","Data":"2a461dfd1cbeaaed31ffdfbf139a5c65630564a186e3bb3bdb13fa579303989e"} Jan 31 09:20:05 crc kubenswrapper[4830]: W0131 09:20:05.949780 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2365408f_7d7a_482c_87c0_0452fa330e4e.slice/crio-b761d74c3f78fab65267327e12a21c74208209fe5a59c87e02eb782d395e1840 WatchSource:0}: Error finding container b761d74c3f78fab65267327e12a21c74208209fe5a59c87e02eb782d395e1840: Status 404 returned error can't find the container with id b761d74c3f78fab65267327e12a21c74208209fe5a59c87e02eb782d395e1840 Jan 31 09:20:05 crc kubenswrapper[4830]: I0131 09:20:05.961056 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4tqzd" event={"ID":"1891b74f-fe71-4020-98a3-5796e2a67ea2","Type":"ContainerStarted","Data":"539a1c6e56255d69ba0d1fc5a7194c8b1cb1778814d8439573be9bf48ada6f05"} Jan 31 09:20:05 crc kubenswrapper[4830]: I0131 09:20:05.980301 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-slhpt"] Jan 31 09:20:05 crc kubenswrapper[4830]: E0131 09:20:05.996116 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2mcnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-62c8t_openstack-operators(d4a8ef63-6ba0-4bb4-93b5-dc9fc1134bb5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 09:20:05 crc kubenswrapper[4830]: E0131 09:20:05.998085 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-62c8t" podUID="d4a8ef63-6ba0-4bb4-93b5-dc9fc1134bb5" Jan 31 09:20:06 crc kubenswrapper[4830]: I0131 09:20:06.015371 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-62c8t"] Jan 31 09:20:06 crc kubenswrapper[4830]: I0131 09:20:06.423575 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b519925-01de-4cf0-8ff8-0f97137dd3d9-cert\") pod \"infra-operator-controller-manager-79955696d6-vvv24\" (UID: \"0b519925-01de-4cf0-8ff8-0f97137dd3d9\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24" Jan 31 09:20:06 crc kubenswrapper[4830]: E0131 09:20:06.423715 4830 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 09:20:06 crc kubenswrapper[4830]: E0131 09:20:06.426807 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b519925-01de-4cf0-8ff8-0f97137dd3d9-cert podName:0b519925-01de-4cf0-8ff8-0f97137dd3d9 nodeName:}" failed. No retries permitted until 2026-01-31 09:20:10.426758284 +0000 UTC m=+1154.920120736 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0b519925-01de-4cf0-8ff8-0f97137dd3d9-cert") pod "infra-operator-controller-manager-79955696d6-vvv24" (UID: "0b519925-01de-4cf0-8ff8-0f97137dd3d9") : secret "infra-operator-webhook-server-cert" not found Jan 31 09:20:07 crc kubenswrapper[4830]: I0131 09:20:07.066235 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2l42c" event={"ID":"388d9bc4-698e-4dea-8029-aa32433cf734","Type":"ContainerStarted","Data":"7089d0a0352baff596516994e078a0ba6a4d5aff5b996270aa642c9b252053d5"} Jan 31 09:20:07 crc kubenswrapper[4830]: I0131 09:20:07.087935 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-slhpt" event={"ID":"abf5a919-4697-4468-b9e4-8a4617e3a5ca","Type":"ContainerStarted","Data":"1cf8377fda724f3e976f2550e6a78ae944791f3d4f918193c9fe36aca37bcc4a"} Jan 31 09:20:07 crc kubenswrapper[4830]: I0131 09:20:07.090752 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-57fbdcd888-cp9fj" event={"ID":"2365408f-7d7a-482c-87c0-0452fa330e4e","Type":"ContainerStarted","Data":"b761d74c3f78fab65267327e12a21c74208209fe5a59c87e02eb782d395e1840"} Jan 31 09:20:07 crc kubenswrapper[4830]: I0131 09:20:07.104949 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-62c8t" event={"ID":"d4a8ef63-6ba0-4bb4-93b5-dc9fc1134bb5","Type":"ContainerStarted","Data":"fd13e6ca89bc12d6c570c90d02a8037ac973117f62ff1e4ef8e395c222865178"} Jan 31 09:20:07 crc kubenswrapper[4830]: E0131 09:20:07.107370 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-62c8t" podUID="d4a8ef63-6ba0-4bb4-93b5-dc9fc1134bb5" Jan 31 09:20:07 crc kubenswrapper[4830]: I0131 09:20:07.145587 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-czm79" event={"ID":"68f255f0-5951-47f2-979e-af80607453e8","Type":"ContainerStarted","Data":"d1bbac450f3400b32d554ef228e8f20a7665c324caa8bfc976b67922e96c397c"} Jan 31 09:20:07 crc kubenswrapper[4830]: I0131 09:20:07.379788 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/250c9f1b-d78c-488e-b28e-6c2b783edd9b-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm\" (UID: \"250c9f1b-d78c-488e-b28e-6c2b783edd9b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" Jan 31 09:20:07 crc kubenswrapper[4830]: E0131 09:20:07.381285 4830 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 09:20:07 crc kubenswrapper[4830]: E0131 09:20:07.381340 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/250c9f1b-d78c-488e-b28e-6c2b783edd9b-cert podName:250c9f1b-d78c-488e-b28e-6c2b783edd9b nodeName:}" failed. 
No retries permitted until 2026-01-31 09:20:11.381321734 +0000 UTC m=+1155.874684176 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/250c9f1b-d78c-488e-b28e-6c2b783edd9b-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" (UID: "250c9f1b-d78c-488e-b28e-6c2b783edd9b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 09:20:07 crc kubenswrapper[4830]: I0131 09:20:07.789549 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-metrics-certs\") pod \"openstack-operator-controller-manager-55f549db95-67sj5\" (UID: \"ce245704-5b88-4544-ae21-bcb30ff5d0d0\") " pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" Jan 31 09:20:07 crc kubenswrapper[4830]: E0131 09:20:07.790012 4830 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 09:20:07 crc kubenswrapper[4830]: E0131 09:20:07.790105 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-metrics-certs podName:ce245704-5b88-4544-ae21-bcb30ff5d0d0 nodeName:}" failed. No retries permitted until 2026-01-31 09:20:11.790083403 +0000 UTC m=+1156.283445845 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-metrics-certs") pod "openstack-operator-controller-manager-55f549db95-67sj5" (UID: "ce245704-5b88-4544-ae21-bcb30ff5d0d0") : secret "metrics-server-cert" not found Jan 31 09:20:07 crc kubenswrapper[4830]: I0131 09:20:07.790124 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-webhook-certs\") pod \"openstack-operator-controller-manager-55f549db95-67sj5\" (UID: \"ce245704-5b88-4544-ae21-bcb30ff5d0d0\") " pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" Jan 31 09:20:07 crc kubenswrapper[4830]: E0131 09:20:07.790554 4830 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 09:20:07 crc kubenswrapper[4830]: E0131 09:20:07.790673 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-webhook-certs podName:ce245704-5b88-4544-ae21-bcb30ff5d0d0 nodeName:}" failed. No retries permitted until 2026-01-31 09:20:11.790647869 +0000 UTC m=+1156.284010311 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-webhook-certs") pod "openstack-operator-controller-manager-55f549db95-67sj5" (UID: "ce245704-5b88-4544-ae21-bcb30ff5d0d0") : secret "webhook-server-cert" not found Jan 31 09:20:08 crc kubenswrapper[4830]: E0131 09:20:08.220303 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-62c8t" podUID="d4a8ef63-6ba0-4bb4-93b5-dc9fc1134bb5" Jan 31 09:20:10 crc kubenswrapper[4830]: I0131 09:20:10.497227 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b519925-01de-4cf0-8ff8-0f97137dd3d9-cert\") pod \"infra-operator-controller-manager-79955696d6-vvv24\" (UID: \"0b519925-01de-4cf0-8ff8-0f97137dd3d9\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24" Jan 31 09:20:10 crc kubenswrapper[4830]: E0131 09:20:10.497436 4830 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 09:20:10 crc kubenswrapper[4830]: E0131 09:20:10.497898 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b519925-01de-4cf0-8ff8-0f97137dd3d9-cert podName:0b519925-01de-4cf0-8ff8-0f97137dd3d9 nodeName:}" failed. No retries permitted until 2026-01-31 09:20:18.497875981 +0000 UTC m=+1162.991238423 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0b519925-01de-4cf0-8ff8-0f97137dd3d9-cert") pod "infra-operator-controller-manager-79955696d6-vvv24" (UID: "0b519925-01de-4cf0-8ff8-0f97137dd3d9") : secret "infra-operator-webhook-server-cert" not found Jan 31 09:20:11 crc kubenswrapper[4830]: I0131 09:20:11.416876 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/250c9f1b-d78c-488e-b28e-6c2b783edd9b-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm\" (UID: \"250c9f1b-d78c-488e-b28e-6c2b783edd9b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" Jan 31 09:20:11 crc kubenswrapper[4830]: E0131 09:20:11.417100 4830 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 09:20:11 crc kubenswrapper[4830]: E0131 09:20:11.417151 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/250c9f1b-d78c-488e-b28e-6c2b783edd9b-cert podName:250c9f1b-d78c-488e-b28e-6c2b783edd9b nodeName:}" failed. No retries permitted until 2026-01-31 09:20:19.417135769 +0000 UTC m=+1163.910498201 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/250c9f1b-d78c-488e-b28e-6c2b783edd9b-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" (UID: "250c9f1b-d78c-488e-b28e-6c2b783edd9b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 09:20:11 crc kubenswrapper[4830]: I0131 09:20:11.825598 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-webhook-certs\") pod \"openstack-operator-controller-manager-55f549db95-67sj5\" (UID: \"ce245704-5b88-4544-ae21-bcb30ff5d0d0\") " pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" Jan 31 09:20:11 crc kubenswrapper[4830]: I0131 09:20:11.825789 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-metrics-certs\") pod \"openstack-operator-controller-manager-55f549db95-67sj5\" (UID: \"ce245704-5b88-4544-ae21-bcb30ff5d0d0\") " pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" Jan 31 09:20:11 crc kubenswrapper[4830]: E0131 09:20:11.825905 4830 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 09:20:11 crc kubenswrapper[4830]: E0131 09:20:11.826061 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-webhook-certs podName:ce245704-5b88-4544-ae21-bcb30ff5d0d0 nodeName:}" failed. No retries permitted until 2026-01-31 09:20:19.826004681 +0000 UTC m=+1164.319367293 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-webhook-certs") pod "openstack-operator-controller-manager-55f549db95-67sj5" (UID: "ce245704-5b88-4544-ae21-bcb30ff5d0d0") : secret "webhook-server-cert" not found Jan 31 09:20:11 crc kubenswrapper[4830]: E0131 09:20:11.826641 4830 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 09:20:11 crc kubenswrapper[4830]: E0131 09:20:11.826789 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-metrics-certs podName:ce245704-5b88-4544-ae21-bcb30ff5d0d0 nodeName:}" failed. No retries permitted until 2026-01-31 09:20:19.826760583 +0000 UTC m=+1164.320123185 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-metrics-certs") pod "openstack-operator-controller-manager-55f549db95-67sj5" (UID: "ce245704-5b88-4544-ae21-bcb30ff5d0d0") : secret "metrics-server-cert" not found Jan 31 09:20:18 crc kubenswrapper[4830]: I0131 09:20:18.577008 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b519925-01de-4cf0-8ff8-0f97137dd3d9-cert\") pod \"infra-operator-controller-manager-79955696d6-vvv24\" (UID: \"0b519925-01de-4cf0-8ff8-0f97137dd3d9\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24" Jan 31 09:20:18 crc kubenswrapper[4830]: I0131 09:20:18.588172 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b519925-01de-4cf0-8ff8-0f97137dd3d9-cert\") pod \"infra-operator-controller-manager-79955696d6-vvv24\" (UID: \"0b519925-01de-4cf0-8ff8-0f97137dd3d9\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24" Jan 31 09:20:18 crc kubenswrapper[4830]: I0131 09:20:18.606973 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24" Jan 31 09:20:19 crc kubenswrapper[4830]: I0131 09:20:19.493922 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/250c9f1b-d78c-488e-b28e-6c2b783edd9b-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm\" (UID: \"250c9f1b-d78c-488e-b28e-6c2b783edd9b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" Jan 31 09:20:19 crc kubenswrapper[4830]: I0131 09:20:19.501024 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/250c9f1b-d78c-488e-b28e-6c2b783edd9b-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm\" (UID: \"250c9f1b-d78c-488e-b28e-6c2b783edd9b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" Jan 31 09:20:19 crc kubenswrapper[4830]: I0131 09:20:19.760526 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" Jan 31 09:20:19 crc kubenswrapper[4830]: I0131 09:20:19.902591 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-webhook-certs\") pod \"openstack-operator-controller-manager-55f549db95-67sj5\" (UID: \"ce245704-5b88-4544-ae21-bcb30ff5d0d0\") " pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" Jan 31 09:20:19 crc kubenswrapper[4830]: I0131 09:20:19.903070 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-metrics-certs\") pod \"openstack-operator-controller-manager-55f549db95-67sj5\" (UID: \"ce245704-5b88-4544-ae21-bcb30ff5d0d0\") " pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" Jan 31 09:20:19 crc kubenswrapper[4830]: I0131 09:20:19.909183 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-webhook-certs\") pod \"openstack-operator-controller-manager-55f549db95-67sj5\" (UID: \"ce245704-5b88-4544-ae21-bcb30ff5d0d0\") " pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" Jan 31 09:20:19 crc kubenswrapper[4830]: I0131 09:20:19.909792 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce245704-5b88-4544-ae21-bcb30ff5d0d0-metrics-certs\") pod \"openstack-operator-controller-manager-55f549db95-67sj5\" (UID: \"ce245704-5b88-4544-ae21-bcb30ff5d0d0\") " pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" Jan 31 09:20:20 crc kubenswrapper[4830]: I0131 09:20:20.021486 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" Jan 31 09:20:21 crc kubenswrapper[4830]: E0131 09:20:21.043058 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566" Jan 31 09:20:21 crc kubenswrapper[4830]: E0131 09:20:21.043605 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qg8rv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-4tqzd_openstack-operators(1891b74f-fe71-4020-98a3-5796e2a67ea2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:20:21 crc kubenswrapper[4830]: E0131 09:20:21.044778 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4tqzd" podUID="1891b74f-fe71-4020-98a3-5796e2a67ea2" Jan 31 09:20:21 crc kubenswrapper[4830]: E0131 09:20:21.413230 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4tqzd" podUID="1891b74f-fe71-4020-98a3-5796e2a67ea2" Jan 31 09:20:21 crc kubenswrapper[4830]: E0131 09:20:21.855938 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521" Jan 31 09:20:21 crc kubenswrapper[4830]: E0131 09:20:21.856960 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wkqff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5f4b8bd54d-slc6p_openstack-operators(bd972fba-0692-45af-b28c-db4929fe150a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:20:21 crc kubenswrapper[4830]: E0131 09:20:21.859111 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-slc6p" podUID="bd972fba-0692-45af-b28c-db4929fe150a" Jan 31 09:20:22 crc kubenswrapper[4830]: E0131 09:20:22.421580 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-slc6p" podUID="bd972fba-0692-45af-b28c-db4929fe150a" Jan 31 09:20:23 crc kubenswrapper[4830]: E0131 09:20:23.251624 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382" Jan 31 09:20:23 crc kubenswrapper[4830]: E0131 09:20:23.251958 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mwb6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d9697b7f4-d8xvw_openstack-operators(3f5623d3-168a-4bca-9154-ecb4c81b5b3b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:20:23 crc 
kubenswrapper[4830]: E0131 09:20:23.253167 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw" podUID="3f5623d3-168a-4bca-9154-ecb4c81b5b3b" Jan 31 09:20:23 crc kubenswrapper[4830]: E0131 09:20:23.431512 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw" podUID="3f5623d3-168a-4bca-9154-ecb4c81b5b3b" Jan 31 09:20:24 crc kubenswrapper[4830]: E0131 09:20:24.867378 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898" Jan 31 09:20:24 crc kubenswrapper[4830]: E0131 09:20:24.869143 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p8x25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
cinder-operator-controller-manager-8d874c8fc-cpwlp_openstack-operators(47718a89-dc4c-4f5d-bb58-aec265aa68bf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:20:24 crc kubenswrapper[4830]: E0131 09:20:24.870504 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cpwlp" podUID="47718a89-dc4c-4f5d-bb58-aec265aa68bf" Jan 31 09:20:25 crc kubenswrapper[4830]: E0131 09:20:25.371375 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241" Jan 31 09:20:25 crc kubenswrapper[4830]: E0131 09:20:25.371634 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h8hvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-czm79_openstack-operators(68f255f0-5951-47f2-979e-af80607453e8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:20:25 crc kubenswrapper[4830]: E0131 
09:20:25.372851 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-czm79" podUID="68f255f0-5951-47f2-979e-af80607453e8" Jan 31 09:20:25 crc kubenswrapper[4830]: E0131 09:20:25.450946 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cpwlp" podUID="47718a89-dc4c-4f5d-bb58-aec265aa68bf" Jan 31 09:20:25 crc kubenswrapper[4830]: E0131 09:20:25.451248 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-czm79" podUID="68f255f0-5951-47f2-979e-af80607453e8" Jan 31 09:20:26 crc kubenswrapper[4830]: E0131 09:20:26.123620 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4" Jan 31 09:20:26 crc kubenswrapper[4830]: E0131 09:20:26.123851 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8q284,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-8886f4c47-hcpk8_openstack-operators(17f5c61d-5997-482b-961a-0339cfe6c15c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:20:26 crc kubenswrapper[4830]: E0131 09:20:26.125734 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hcpk8" podUID="17f5c61d-5997-482b-961a-0339cfe6c15c" Jan 31 09:20:26 crc kubenswrapper[4830]: E0131 09:20:26.457906 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4\\\"\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hcpk8" podUID="17f5c61d-5997-482b-961a-0339cfe6c15c" Jan 31 09:20:26 crc kubenswrapper[4830]: E0131 09:20:26.982772 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382" Jan 31 09:20:26 crc kubenswrapper[4830]: E0131 09:20:26.983020 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ds9lj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-gktql_openstack-operators(21448bf1-0318-4469-baff-d35cf905337b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:20:26 crc kubenswrapper[4830]: E0131 09:20:26.984340 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gktql" podUID="21448bf1-0318-4469-baff-d35cf905337b" Jan 31 09:20:27 crc kubenswrapper[4830]: E0131 09:20:27.478210 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gktql" podUID="21448bf1-0318-4469-baff-d35cf905337b" Jan 31 09:20:29 crc kubenswrapper[4830]: E0131 09:20:29.869987 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be" Jan 31 09:20:29 crc kubenswrapper[4830]: E0131 09:20:29.871051 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d5ng5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-ld2fb_openstack-operators(f101dda8-ba4c-42c2-a8e3-9a5e53c2ec8a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:20:29 crc kubenswrapper[4830]: E0131 09:20:29.872360 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-ld2fb" podUID="f101dda8-ba4c-42c2-a8e3-9a5e53c2ec8a" Jan 31 09:20:30 crc kubenswrapper[4830]: E0131 09:20:30.502665 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-ld2fb" podUID="f101dda8-ba4c-42c2-a8e3-9a5e53c2ec8a" Jan 31 09:20:30 crc kubenswrapper[4830]: E0131 09:20:30.690767 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.106:5001/openstack-k8s-operators/telemetry-operator:594db30f48077cd941a696ba338492d7ea9c80d8" Jan 31 09:20:30 crc kubenswrapper[4830]: E0131 09:20:30.690862 4830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.106:5001/openstack-k8s-operators/telemetry-operator:594db30f48077cd941a696ba338492d7ea9c80d8" Jan 31 09:20:30 crc kubenswrapper[4830]: E0131 09:20:30.691092 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.106:5001/openstack-k8s-operators/telemetry-operator:594db30f48077cd941a696ba338492d7ea9c80d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zd2vt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-57fbdcd888-cp9fj_openstack-operators(2365408f-7d7a-482c-87c0-0452fa330e4e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:20:30 crc kubenswrapper[4830]: E0131 09:20:30.692308 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-57fbdcd888-cp9fj" podUID="2365408f-7d7a-482c-87c0-0452fa330e4e" Jan 31 09:20:31 crc kubenswrapper[4830]: E0131 09:20:31.508222 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.106:5001/openstack-k8s-operators/telemetry-operator:594db30f48077cd941a696ba338492d7ea9c80d8\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-57fbdcd888-cp9fj" podUID="2365408f-7d7a-482c-87c0-0452fa330e4e" Jan 31 09:20:33 crc kubenswrapper[4830]: E0131 09:20:33.222252 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6" Jan 31 09:20:33 crc kubenswrapper[4830]: E0131 09:20:33.223016 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mhzvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-sjf7r_openstack-operators(617226b5-2b2c-4f6c-902d-9784c8a283de): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:20:33 crc kubenswrapper[4830]: E0131 09:20:33.224210 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sjf7r" podUID="617226b5-2b2c-4f6c-902d-9784c8a283de" Jan 31 09:20:33 crc kubenswrapper[4830]: E0131 09:20:33.527568 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sjf7r" podUID="617226b5-2b2c-4f6c-902d-9784c8a283de" Jan 31 09:20:33 crc kubenswrapper[4830]: E0131 09:20:33.831963 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10" Jan 31 09:20:33 crc kubenswrapper[4830]: E0131 09:20:33.832203 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qtzgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69d6db494d-8wnqw_openstack-operators(dafe4db4-4a74-4cb2-8e7f-496cfa1a1c5e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:20:33 crc kubenswrapper[4830]: E0131 09:20:33.833450 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-8wnqw" podUID="dafe4db4-4a74-4cb2-8e7f-496cfa1a1c5e" Jan 31 09:20:34 crc kubenswrapper[4830]: E0131 09:20:34.535880 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-8wnqw" 
podUID="dafe4db4-4a74-4cb2-8e7f-496cfa1a1c5e" Jan 31 09:20:38 crc kubenswrapper[4830]: E0131 09:20:38.193365 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Jan 31 09:20:38 crc kubenswrapper[4830]: E0131 09:20:38.194830 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-56xtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-sbhfn_openstack-operators(0e056a0c-ee06-43aa-bf36-35f202f76b17): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:20:38 crc kubenswrapper[4830]: E0131 09:20:38.196096 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sbhfn" podUID="0e056a0c-ee06-43aa-bf36-35f202f76b17" Jan 31 09:20:38 crc kubenswrapper[4830]: E0131 09:20:38.570881 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sbhfn" podUID="0e056a0c-ee06-43aa-bf36-35f202f76b17" Jan 31 09:20:38 crc kubenswrapper[4830]: E0131 09:20:38.823124 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Jan 31 09:20:38 crc kubenswrapper[4830]: E0131 09:20:38.823356 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ls9jf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-d9xtg_openstack-operators(4d28fd37-b97c-447a-9165-d90d11fd4698): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:20:38 crc kubenswrapper[4830]: E0131 09:20:38.825445 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-d9xtg" 
podUID="4d28fd37-b97c-447a-9165-d90d11fd4698" Jan 31 09:20:39 crc kubenswrapper[4830]: E0131 09:20:39.581616 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-d9xtg" podUID="4d28fd37-b97c-447a-9165-d90d11fd4698" Jan 31 09:20:39 crc kubenswrapper[4830]: E0131 09:20:39.686199 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b" Jan 31 09:20:39 crc kubenswrapper[4830]: E0131 09:20:39.686540 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2mcnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-62c8t_openstack-operators(d4a8ef63-6ba0-4bb4-93b5-dc9fc1134bb5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:20:39 crc kubenswrapper[4830]: E0131 09:20:39.687884 4830 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-62c8t" podUID="d4a8ef63-6ba0-4bb4-93b5-dc9fc1134bb5" Jan 31 09:20:40 crc kubenswrapper[4830]: E0131 09:20:40.398678 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Jan 31 09:20:40 crc kubenswrapper[4830]: E0131 09:20:40.399373 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8vp4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-kgrns_openstack-operators(758269b2-16c6-4f5a-8f9f-875659eede84): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:20:40 crc kubenswrapper[4830]: E0131 09:20:40.400621 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kgrns" 
podUID="758269b2-16c6-4f5a-8f9f-875659eede84" Jan 31 09:20:40 crc kubenswrapper[4830]: E0131 09:20:40.593394 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kgrns" podUID="758269b2-16c6-4f5a-8f9f-875659eede84" Jan 31 09:20:41 crc kubenswrapper[4830]: E0131 09:20:41.044255 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Jan 31 09:20:41 crc kubenswrapper[4830]: E0131 09:20:41.044499 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xpsl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-rkvx7_openstack-operators(e681f66d-3695-4b59-9ef1-6f9bbf007ed2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:20:41 crc kubenswrapper[4830]: E0131 09:20:41.045692 4830 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rkvx7" podUID="e681f66d-3695-4b59-9ef1-6f9bbf007ed2" Jan 31 09:20:41 crc kubenswrapper[4830]: E0131 09:20:41.450559 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 31 09:20:41 crc kubenswrapper[4830]: E0131 09:20:41.450810 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-htt9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-slhpt_openstack-operators(abf5a919-4697-4468-b9e4-8a4617e3a5ca): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:20:41 crc kubenswrapper[4830]: E0131 09:20:41.452019 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-slhpt" podUID="abf5a919-4697-4468-b9e4-8a4617e3a5ca" Jan 31 09:20:41 crc kubenswrapper[4830]: E0131 09:20:41.743527 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-slhpt" podUID="abf5a919-4697-4468-b9e4-8a4617e3a5ca" Jan 31 09:20:41 crc kubenswrapper[4830]: E0131 09:20:41.743877 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rkvx7" podUID="e681f66d-3695-4b59-9ef1-6f9bbf007ed2" Jan 31 09:20:42 crc kubenswrapper[4830]: I0131 09:20:42.057273 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-vvv24"] Jan 31 09:20:42 crc kubenswrapper[4830]: I0131 09:20:42.116881 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5"] Jan 31 09:20:42 crc kubenswrapper[4830]: W0131 09:20:42.161396 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce245704_5b88_4544_ae21_bcb30ff5d0d0.slice/crio-ff8f4439528f5334a2b237023847969757dd84cc9fec3f1f5aa6ab53b00c30e6 WatchSource:0}: Error finding container ff8f4439528f5334a2b237023847969757dd84cc9fec3f1f5aa6ab53b00c30e6: Status 404 returned error can't find the container with id ff8f4439528f5334a2b237023847969757dd84cc9fec3f1f5aa6ab53b00c30e6 Jan 31 09:20:42 crc kubenswrapper[4830]: I0131 09:20:42.395057 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm"] Jan 31 09:20:42 crc kubenswrapper[4830]: W0131 09:20:42.443572 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod250c9f1b_d78c_488e_b28e_6c2b783edd9b.slice/crio-47c16e2d20a61d61cb1a9ead97671036206057809ca3a0dc9c9dbac40372324a WatchSource:0}: Error finding container 47c16e2d20a61d61cb1a9ead97671036206057809ca3a0dc9c9dbac40372324a: Status 404 returned error can't find the container with id 47c16e2d20a61d61cb1a9ead97671036206057809ca3a0dc9c9dbac40372324a Jan 31 09:20:42 crc kubenswrapper[4830]: I0131 09:20:42.651768 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4tqzd" event={"ID":"1891b74f-fe71-4020-98a3-5796e2a67ea2","Type":"ContainerStarted","Data":"32f6281283ec15b9184365b426762c2ae5925724835732331d2fc9a0f9708e67"} Jan 31 09:20:42 crc kubenswrapper[4830]: I0131 09:20:42.670740 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hcpk8" event={"ID":"17f5c61d-5997-482b-961a-0339cfe6c15c","Type":"ContainerStarted","Data":"2f447e9e8e1d2b9881c62198dadc1a51a2bbca48ff8e66f4cc960c584a4b9838"} Jan 31 09:20:42 crc kubenswrapper[4830]: I0131 09:20:42.688557 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-slc6p" event={"ID":"bd972fba-0692-45af-b28c-db4929fe150a","Type":"ContainerStarted","Data":"e853bb2ecb118b1cc3318dc0554cf415d016c80dcd3c771fdde1705ef75ce376"} Jan 31 09:20:42 crc kubenswrapper[4830]: I0131 09:20:42.718756 4830 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-kwwkw" event={"ID":"1488b4ea-ba49-423e-a995-917dc9cbb9e2","Type":"ContainerStarted","Data":"afaa1296eb4738fc7bb8cdb8903ff98d095088d82900c8c536c58aaf3b17823a"} Jan 31 09:20:42 crc kubenswrapper[4830]: I0131 09:20:42.721130 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-kwwkw" Jan 31 09:20:42 crc kubenswrapper[4830]: I0131 09:20:42.723540 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2l42c" event={"ID":"388d9bc4-698e-4dea-8029-aa32433cf734","Type":"ContainerStarted","Data":"9aed26c324093444bc9ccd23a18084abbc4df93e2ecc7eea93af5f5bb2391ba2"} Jan 31 09:20:42 crc kubenswrapper[4830]: I0131 09:20:42.723772 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2l42c" Jan 31 09:20:42 crc kubenswrapper[4830]: I0131 09:20:42.726327 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gbjts" event={"ID":"7ff06918-8b3c-48cb-bd11-1254b9bbc276","Type":"ContainerStarted","Data":"89ec2b66a9c23b1b8111b0104245522ef4da7c1cd91dbf9cb504251fe8054957"} Jan 31 09:20:42 crc kubenswrapper[4830]: I0131 09:20:42.727386 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gbjts" Jan 31 09:20:42 crc kubenswrapper[4830]: I0131 09:20:42.731704 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw" event={"ID":"3f5623d3-168a-4bca-9154-ecb4c81b5b3b","Type":"ContainerStarted","Data":"9bced4f3ec27f0428b862ec47565b2a44f8905a5c62eaeb9a3c727f9bf0a6d84"} Jan 31 09:20:42 crc kubenswrapper[4830]: I0131 09:20:42.732457 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw" Jan 31 09:20:42 crc kubenswrapper[4830]: I0131 09:20:42.737454 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" event={"ID":"ce245704-5b88-4544-ae21-bcb30ff5d0d0","Type":"ContainerStarted","Data":"ff8f4439528f5334a2b237023847969757dd84cc9fec3f1f5aa6ab53b00c30e6"} Jan 31 09:20:42 crc kubenswrapper[4830]: I0131 09:20:42.742662 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" event={"ID":"250c9f1b-d78c-488e-b28e-6c2b783edd9b","Type":"ContainerStarted","Data":"47c16e2d20a61d61cb1a9ead97671036206057809ca3a0dc9c9dbac40372324a"} Jan 31 09:20:42 crc kubenswrapper[4830]: I0131 09:20:42.752065 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-czm79" event={"ID":"68f255f0-5951-47f2-979e-af80607453e8","Type":"ContainerStarted","Data":"b2feb0aeb46343e5f4c408422d5788609c74ff97771f008e26d4476e2b4b51ca"} Jan 31 09:20:42 crc kubenswrapper[4830]: I0131 09:20:42.753218 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-czm79" Jan 31 09:20:42 crc kubenswrapper[4830]: I0131 09:20:42.769135 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24" event={"ID":"0b519925-01de-4cf0-8ff8-0f97137dd3d9","Type":"ContainerStarted","Data":"ae65744ddae6e4a56f46c353ebc53557f7dd82b9c0444524fd481da47b890ead"} Jan 31 09:20:42 crc kubenswrapper[4830]: I0131 09:20:42.769510 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-kwwkw" podStartSLOduration=3.654196775 podStartE2EDuration="40.769484692s" podCreationTimestamp="2026-01-31 09:20:02 +0000 UTC" firstStartedPulling="2026-01-31 09:20:03.886101851 +0000 UTC m=+1148.379464293" lastFinishedPulling="2026-01-31 09:20:41.001389768 +0000 UTC m=+1185.494752210" observedRunningTime="2026-01-31 09:20:42.759294639 +0000 UTC m=+1187.252657081" watchObservedRunningTime="2026-01-31 09:20:42.769484692 +0000 UTC m=+1187.262847134" Jan 31 09:20:42 crc kubenswrapper[4830]: I0131 09:20:42.829495 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2l42c" podStartSLOduration=5.6639141219999996 podStartE2EDuration="40.829475063s" podCreationTimestamp="2026-01-31 09:20:02 +0000 UTC" firstStartedPulling="2026-01-31 09:20:05.834707115 +0000 UTC m=+1150.328069557" lastFinishedPulling="2026-01-31 09:20:41.000268036 +0000 UTC m=+1185.493630498" observedRunningTime="2026-01-31 09:20:42.801670015 +0000 UTC m=+1187.295032457" watchObservedRunningTime="2026-01-31 09:20:42.829475063 +0000 UTC m=+1187.322837495" Jan 31 09:20:42 crc kubenswrapper[4830]: I0131 09:20:42.835083 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gbjts" podStartSLOduration=4.895960336 podStartE2EDuration="40.835064403s" podCreationTimestamp="2026-01-31 09:20:02 +0000 UTC" firstStartedPulling="2026-01-31 09:20:05.4877639 +0000 UTC m=+1149.981126342" lastFinishedPulling="2026-01-31 09:20:41.426867967 +0000 UTC m=+1185.920230409" observedRunningTime="2026-01-31 09:20:42.827366793 +0000 UTC m=+1187.320729235" watchObservedRunningTime="2026-01-31 09:20:42.835064403 +0000 UTC m=+1187.328426845" Jan 31 09:20:42 crc kubenswrapper[4830]: I0131 09:20:42.857233 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw" podStartSLOduration=3.634129018 podStartE2EDuration="40.857211439s" podCreationTimestamp="2026-01-31 09:20:02 +0000 UTC" firstStartedPulling="2026-01-31 09:20:04.335867446 +0000 UTC m=+1148.829229888" lastFinishedPulling="2026-01-31 09:20:41.558949857 +0000 UTC m=+1186.052312309" observedRunningTime="2026-01-31 09:20:42.85517307 +0000 UTC m=+1187.348535522" watchObservedRunningTime="2026-01-31 09:20:42.857211439 +0000 UTC m=+1187.350573891" Jan 31 09:20:42 crc kubenswrapper[4830]: I0131 09:20:42.898810 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-czm79" podStartSLOduration=4.8974143980000004 podStartE2EDuration="40.898788032s" podCreationTimestamp="2026-01-31 09:20:02 +0000 UTC" firstStartedPulling="2026-01-31 09:20:05.927504458 +0000 UTC m=+1150.420866900" lastFinishedPulling="2026-01-31 09:20:41.928878092 +0000 UTC m=+1186.422240534" observedRunningTime="2026-01-31 09:20:42.897448154 +0000 UTC m=+1187.390810596" watchObservedRunningTime="2026-01-31 09:20:42.898788032 +0000 UTC m=+1187.392150464" Jan 31 
Jan 31 09:20:43 crc kubenswrapper[4830]: I0131 09:20:43.786677 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cpwlp" event={"ID":"47718a89-dc4c-4f5d-bb58-aec265aa68bf","Type":"ContainerStarted","Data":"68d32f98fc69855e761a0992edb05093b5ca47972eeb484d8f1fcb9ba7a65281"}
Jan 31 09:20:43 crc kubenswrapper[4830]: I0131 09:20:43.787428 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cpwlp"
Jan 31 09:20:43 crc kubenswrapper[4830]: I0131 09:20:43.792379 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gktql" event={"ID":"21448bf1-0318-4469-baff-d35cf905337b","Type":"ContainerStarted","Data":"3a2972c830d9e445ddfef93bb30a691291c4a5bdb6638100400c28adb2419129"}
Jan 31 09:20:43 crc kubenswrapper[4830]: I0131 09:20:43.793139 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gktql"
Jan 31 09:20:43 crc kubenswrapper[4830]: I0131 09:20:43.795534 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" event={"ID":"ce245704-5b88-4544-ae21-bcb30ff5d0d0","Type":"ContainerStarted","Data":"1573771da40f7a5e2ea9a984487d99a8fb68fa359215d29c723caf1854e64eb5"}
Jan 31 09:20:43 crc kubenswrapper[4830]: I0131 09:20:43.795566 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-slc6p"
Jan 31 09:20:43 crc kubenswrapper[4830]: I0131 09:20:43.795580 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5"
Jan 31 09:20:43 crc kubenswrapper[4830]: I0131 09:20:43.797049 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hcpk8"
Jan 31 09:20:43 crc kubenswrapper[4830]: I0131 09:20:43.827370 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cpwlp" podStartSLOduration=4.350582817 podStartE2EDuration="41.827348257s" podCreationTimestamp="2026-01-31 09:20:02 +0000 UTC" firstStartedPulling="2026-01-31 09:20:04.526125296 +0000 UTC m=+1149.019487738" lastFinishedPulling="2026-01-31 09:20:42.002890736 +0000 UTC m=+1186.496253178" observedRunningTime="2026-01-31 09:20:43.819893273 +0000 UTC m=+1188.313255715" watchObservedRunningTime="2026-01-31 09:20:43.827348257 +0000 UTC m=+1188.320710699"
Jan 31 09:20:43 crc kubenswrapper[4830]: I0131 09:20:43.848437 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hcpk8" podStartSLOduration=4.367588925 podStartE2EDuration="41.848413351s" podCreationTimestamp="2026-01-31 09:20:02 +0000 UTC" firstStartedPulling="2026-01-31 09:20:04.513327079 +0000 UTC m=+1149.006689521" lastFinishedPulling="2026-01-31 09:20:41.994151505 +0000 UTC m=+1186.487513947" observedRunningTime="2026-01-31 09:20:43.843707656 +0000 UTC m=+1188.337070098" watchObservedRunningTime="2026-01-31 09:20:43.848413351 +0000 UTC m=+1188.341775793"
Jan 31 09:20:43 crc kubenswrapper[4830]: I0131 09:20:43.906628 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gktql" podStartSLOduration=5.466305202 podStartE2EDuration="41.906607131s" podCreationTimestamp="2026-01-31 09:20:02 +0000 UTC" firstStartedPulling="2026-01-31 09:20:05.502649607 +0000 UTC m=+1149.996012069" lastFinishedPulling="2026-01-31 09:20:41.942951556 +0000 UTC m=+1186.436313998" observedRunningTime="2026-01-31 09:20:43.905669754 +0000 UTC m=+1188.399032196" watchObservedRunningTime="2026-01-31 09:20:43.906607131 +0000 UTC m=+1188.399969573"
Jan 31 09:20:43 crc kubenswrapper[4830]: I0131 09:20:43.909834 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4tqzd" podStartSLOduration=4.6637767740000005 podStartE2EDuration="41.909824943s" podCreationTimestamp="2026-01-31 09:20:02 +0000 UTC" firstStartedPulling="2026-01-31 09:20:04.674955727 +0000 UTC m=+1149.168318169" lastFinishedPulling="2026-01-31 09:20:41.921003896 +0000 UTC m=+1186.414366338" observedRunningTime="2026-01-31 09:20:43.87554937 +0000 UTC m=+1188.368911812" watchObservedRunningTime="2026-01-31 09:20:43.909824943 +0000 UTC m=+1188.403187385"
Jan 31 09:20:43 crc kubenswrapper[4830]: I0131 09:20:43.980845 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" podStartSLOduration=41.9808201 podStartE2EDuration="41.9808201s" podCreationTimestamp="2026-01-31 09:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:20:43.950042477 +0000 UTC m=+1188.443404939" watchObservedRunningTime="2026-01-31 09:20:43.9808201 +0000 UTC m=+1188.474182542"
Jan 31 09:20:43 crc kubenswrapper[4830]: I0131 09:20:43.987038 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-slc6p" podStartSLOduration=5.500947246 podStartE2EDuration="41.987018698s" podCreationTimestamp="2026-01-31 09:20:02 +0000 UTC" firstStartedPulling="2026-01-31 09:20:05.217252558 +0000 UTC m=+1149.710615000" lastFinishedPulling="2026-01-31 09:20:41.70332401 +0000 UTC m=+1186.196686452" observedRunningTime="2026-01-31 09:20:43.973523991 +0000 UTC m=+1188.466886433" watchObservedRunningTime="2026-01-31 09:20:43.987018698 +0000 UTC m=+1188.480381150"
Jan 31 09:20:47 crc kubenswrapper[4830]: I0131 09:20:47.852481 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" event={"ID":"250c9f1b-d78c-488e-b28e-6c2b783edd9b","Type":"ContainerStarted","Data":"9380900d47bdf1ab694b731927e0ab1da64712898c40df124879121a9d41869c"}
Jan 31 09:20:47 crc kubenswrapper[4830]: I0131 09:20:47.853230 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm"
Jan 31 09:20:47 crc kubenswrapper[4830]: I0131 09:20:47.855038 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-57fbdcd888-cp9fj" event={"ID":"2365408f-7d7a-482c-87c0-0452fa330e4e","Type":"ContainerStarted","Data":"abb2c1b731c43f14024aabad44cf95d567ea9e4cad24a1c57e4407813432a4ca"}
Jan 31 09:20:47 crc kubenswrapper[4830]: I0131 09:20:47.855289 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-57fbdcd888-cp9fj"
Jan 31 09:20:47 crc kubenswrapper[4830]: I0131 09:20:47.856667 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-8wnqw" event={"ID":"dafe4db4-4a74-4cb2-8e7f-496cfa1a1c5e","Type":"ContainerStarted","Data":"3ae5c9b4ac5a10678f439dff3ae6e06bfbe5d93ad0ce25eb38aee919c63f307e"}
Jan 31 09:20:47 crc kubenswrapper[4830]: I0131 09:20:47.856899 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-8wnqw"
Jan 31 09:20:47 crc kubenswrapper[4830]: I0131 09:20:47.859048 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24" event={"ID":"0b519925-01de-4cf0-8ff8-0f97137dd3d9","Type":"ContainerStarted","Data":"4c2f897ea16a1cc3657f64111f575b90c232f50a3df47592314404672443cf5d"}
Jan 31 09:20:47 crc kubenswrapper[4830]: I0131 09:20:47.859135 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24"
Jan 31 09:20:47 crc kubenswrapper[4830]: I0131 09:20:47.861246 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-ld2fb" event={"ID":"f101dda8-ba4c-42c2-a8e3-9a5e53c2ec8a","Type":"ContainerStarted","Data":"65826d84cc5288bdc372aae11461481e47526fb47b67a9c7df52eb2655067fa2"}
Jan 31 09:20:47 crc kubenswrapper[4830]: I0131 09:20:47.861580 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-ld2fb"
Jan 31 09:20:47 crc kubenswrapper[4830]: I0131 09:20:47.863117 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sjf7r" event={"ID":"617226b5-2b2c-4f6c-902d-9784c8a283de","Type":"ContainerStarted","Data":"0289dbf30a8ce61b07550a764ab7cb66d247c613e7a69f3d19210c3aa3700369"}
Jan 31 09:20:47 crc kubenswrapper[4830]: I0131 09:20:47.863438 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sjf7r"
Jan 31 09:20:47 crc kubenswrapper[4830]: I0131 09:20:47.898512 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" podStartSLOduration=41.619383369 podStartE2EDuration="45.898476635s" podCreationTimestamp="2026-01-31 09:20:02 +0000 UTC" firstStartedPulling="2026-01-31 09:20:42.437176496 +0000 UTC m=+1186.930538938" lastFinishedPulling="2026-01-31 09:20:46.716269762 +0000 UTC m=+1191.209632204" observedRunningTime="2026-01-31 09:20:47.892092682 +0000 UTC m=+1192.385455124" watchObservedRunningTime="2026-01-31 09:20:47.898476635 +0000 UTC m=+1192.391839077"
Jan 31 09:20:47 crc kubenswrapper[4830]: I0131 09:20:47.919420 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sjf7r" podStartSLOduration=4.360678277 podStartE2EDuration="45.919392505s" podCreationTimestamp="2026-01-31 09:20:02 +0000 UTC" firstStartedPulling="2026-01-31 09:20:05.158548133 +0000 UTC m=+1149.651910575" lastFinishedPulling="2026-01-31 09:20:46.717262361 +0000 UTC m=+1191.210624803" observedRunningTime="2026-01-31 09:20:47.917081989 +0000 UTC m=+1192.410444431" watchObservedRunningTime="2026-01-31 09:20:47.919392505 +0000 UTC m=+1192.412754947"
Jan 31 09:20:47 crc kubenswrapper[4830]: I0131 09:20:47.939591 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-57fbdcd888-cp9fj" podStartSLOduration=5.202291007 podStartE2EDuration="45.939565494s" podCreationTimestamp="2026-01-31 09:20:02 +0000 UTC" firstStartedPulling="2026-01-31 09:20:05.980024705 +0000 UTC m=+1150.473387147" lastFinishedPulling="2026-01-31 09:20:46.717299202 +0000 UTC m=+1191.210661634" observedRunningTime="2026-01-31 09:20:47.937783443 +0000 UTC m=+1192.431145885" watchObservedRunningTime="2026-01-31 09:20:47.939565494 +0000 UTC m=+1192.432927936"
Jan 31 09:20:47 crc kubenswrapper[4830]: I0131 09:20:47.962412 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24" podStartSLOduration=41.336507843 podStartE2EDuration="45.962391329s" podCreationTimestamp="2026-01-31 09:20:02 +0000 UTC" firstStartedPulling="2026-01-31 09:20:42.089280214 +0000 UTC m=+1186.582642656" lastFinishedPulling="2026-01-31 09:20:46.7151637 +0000 UTC m=+1191.208526142" observedRunningTime="2026-01-31 09:20:47.956659225 +0000 UTC m=+1192.450021667" watchObservedRunningTime="2026-01-31 09:20:47.962391329 +0000 UTC m=+1192.455753771"
Jan 31 09:20:47 crc kubenswrapper[4830]: I0131 09:20:47.998516 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-8wnqw" podStartSLOduration=3.794927404 podStartE2EDuration="45.998494695s" podCreationTimestamp="2026-01-31 09:20:02 +0000 UTC" firstStartedPulling="2026-01-31 09:20:04.512803894 +0000 UTC m=+1149.006166336" lastFinishedPulling="2026-01-31 09:20:46.716371195 +0000 UTC m=+1191.209733627" observedRunningTime="2026-01-31 09:20:47.993249495 +0000 UTC m=+1192.486611937" watchObservedRunningTime="2026-01-31 09:20:47.998494695 +0000 UTC m=+1192.491857137"
Jan 31 09:20:48 crc kubenswrapper[4830]: I0131 09:20:48.018786 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-ld2fb" podStartSLOduration=4.91485785 podStartE2EDuration="46.018765177s" podCreationTimestamp="2026-01-31 09:20:02 +0000 UTC" firstStartedPulling="2026-01-31 09:20:05.487156412 +0000 UTC m=+1149.980518864" lastFinishedPulling="2026-01-31 09:20:46.591063749 +0000 UTC m=+1191.084426191" observedRunningTime="2026-01-31 09:20:48.011951121 +0000 UTC m=+1192.505313563" watchObservedRunningTime="2026-01-31 09:20:48.018765177 +0000 UTC m=+1192.512127619"
Jan 31 09:20:50 crc kubenswrapper[4830]: I0131 09:20:50.028135 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5"
Jan 31 09:20:50 crc kubenswrapper[4830]: I0131 09:20:50.902024 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sbhfn" event={"ID":"0e056a0c-ee06-43aa-bf36-35f202f76b17","Type":"ContainerStarted","Data":"91d08f1de51730084b1986735e1d5dc1493f1a52c4debe21ac091444f242fff1"}
Jan 31 09:20:50 crc kubenswrapper[4830]: I0131 09:20:50.902830 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sbhfn"
Jan 31 09:20:50 crc kubenswrapper[4830]: I0131 09:20:50.924310 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sbhfn" podStartSLOduration=3.466312905 podStartE2EDuration="48.924286309s" podCreationTimestamp="2026-01-31 09:20:02 +0000 UTC" firstStartedPulling="2026-01-31 09:20:05.197536242 +0000 UTC m=+1149.690898684" lastFinishedPulling="2026-01-31 09:20:50.655509646 +0000 UTC m=+1195.148872088" observedRunningTime="2026-01-31 09:20:50.919931154 +0000 UTC m=+1195.413293586" watchObservedRunningTime="2026-01-31 09:20:50.924286309 +0000 UTC m=+1195.417648771"
Jan 31 09:20:52 crc kubenswrapper[4830]: I0131 09:20:52.471020 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-kwwkw"
Jan 31 09:20:52 crc kubenswrapper[4830]: I0131 09:20:52.622785 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw"
Jan 31 09:20:52 crc kubenswrapper[4830]: I0131 09:20:52.687074 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hcpk8"
Jan 31 09:20:52 crc kubenswrapper[4830]: I0131 09:20:52.776442 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-8wnqw"
Jan 31 09:20:52 crc kubenswrapper[4830]: I0131 09:20:52.811891 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cpwlp"
Jan 31 09:20:52 crc kubenswrapper[4830]: I0131 09:20:52.931792 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kgrns" event={"ID":"758269b2-16c6-4f5a-8f9f-875659eede84","Type":"ContainerStarted","Data":"92efecdd91982ec1cbb17dd9b011166406c9ca06b9ac553f35213473bc7469d9"}
Jan 31 09:20:52 crc kubenswrapper[4830]: I0131 09:20:52.932100 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kgrns"
Jan 31 09:20:52 crc kubenswrapper[4830]: I0131 09:20:52.953829 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kgrns" podStartSLOduration=4.042410356 podStartE2EDuration="50.953803355s" podCreationTimestamp="2026-01-31 09:20:02 +0000 UTC" firstStartedPulling="2026-01-31 09:20:05.197567783 +0000 UTC m=+1149.690930215" lastFinishedPulling="2026-01-31 09:20:52.108960772 +0000 UTC m=+1196.602323214" observedRunningTime="2026-01-31 09:20:52.947411502 +0000 UTC m=+1197.440773954" watchObservedRunningTime="2026-01-31 09:20:52.953803355 +0000 UTC m=+1197.447165797"
Jan 31 09:20:53 crc kubenswrapper[4830]: I0131 09:20:53.073142 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-slc6p"
Jan 31 09:20:53 crc kubenswrapper[4830]: I0131 09:20:53.129024 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4tqzd"
Jan 31 09:20:53 crc kubenswrapper[4830]: I0131 09:20:53.134369 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4tqzd"
Jan 31 09:20:53 crc kubenswrapper[4830]: E0131 09:20:53.256254 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-62c8t" podUID="d4a8ef63-6ba0-4bb4-93b5-dc9fc1134bb5"
Jan 31 09:20:53 crc kubenswrapper[4830]: I0131 09:20:53.433814 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sjf7r"
Jan 31 09:20:53 crc kubenswrapper[4830]: I0131 09:20:53.667452 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-ld2fb"
Jan 31 09:20:53 crc kubenswrapper[4830]: I0131 09:20:53.839241 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gbjts"
Jan 31 09:20:53 crc kubenswrapper[4830]: I0131 09:20:53.999158 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gktql"
Jan 31 09:20:54 crc kubenswrapper[4830]: I0131 09:20:54.010906 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2l42c"
Jan 31 09:20:54 crc kubenswrapper[4830]: I0131 09:20:54.064346 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-57fbdcd888-cp9fj"
Jan 31 09:20:54 crc kubenswrapper[4830]: I0131 09:20:54.394555 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-czm79"
Jan 31 09:20:54 crc kubenswrapper[4830]: I0131 09:20:54.953276 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-d9xtg" event={"ID":"4d28fd37-b97c-447a-9165-d90d11fd4698","Type":"ContainerStarted","Data":"902ecfc4e561e30299ea9903ea913ed25bc7ccebc30137d211b272c3dc40b959"}
Jan 31 09:20:54 crc kubenswrapper[4830]: I0131 09:20:54.953587 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-d9xtg"
Jan 31 09:20:54 crc kubenswrapper[4830]: I0131 09:20:54.975748 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-d9xtg" podStartSLOduration=3.596299304 podStartE2EDuration="52.975707182s" podCreationTimestamp="2026-01-31 09:20:02 +0000 UTC" firstStartedPulling="2026-01-31 09:20:04.652961015 +0000 UTC m=+1149.146323457" lastFinishedPulling="2026-01-31 09:20:54.032368893 +0000 UTC m=+1198.525731335" observedRunningTime="2026-01-31 09:20:54.973316253 +0000 UTC m=+1199.466678695" watchObservedRunningTime="2026-01-31 09:20:54.975707182 +0000 UTC m=+1199.469069624"
event={"ID":"e681f66d-3695-4b59-9ef1-6f9bbf007ed2","Type":"ContainerStarted","Data":"a24cbbe1029d268b630dfbbca1c70f7eca44a618c677f697fff7609479ebec72"} Jan 31 09:20:55 crc kubenswrapper[4830]: I0131 09:20:55.965435 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rkvx7" Jan 31 09:20:55 crc kubenswrapper[4830]: I0131 09:20:55.989975 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rkvx7" podStartSLOduration=3.384394524 podStartE2EDuration="53.989953815s" podCreationTimestamp="2026-01-31 09:20:02 +0000 UTC" firstStartedPulling="2026-01-31 09:20:05.095639258 +0000 UTC m=+1149.589001700" lastFinishedPulling="2026-01-31 09:20:55.701198549 +0000 UTC m=+1200.194560991" observedRunningTime="2026-01-31 09:20:55.981741819 +0000 UTC m=+1200.475104261" watchObservedRunningTime="2026-01-31 09:20:55.989953815 +0000 UTC m=+1200.483316257" Jan 31 09:20:57 crc kubenswrapper[4830]: I0131 09:20:57.985405 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-slhpt" event={"ID":"abf5a919-4697-4468-b9e4-8a4617e3a5ca","Type":"ContainerStarted","Data":"d706c18d76cfc22b1c391b0d4078ef2adb310b34bb5f7688d64455a13ee69324"} Jan 31 09:20:58 crc kubenswrapper[4830]: I0131 09:20:58.033138 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-slhpt" podStartSLOduration=4.34004898 podStartE2EDuration="55.033114612s" podCreationTimestamp="2026-01-31 09:20:03 +0000 UTC" firstStartedPulling="2026-01-31 09:20:05.992009509 +0000 UTC m=+1150.485371951" lastFinishedPulling="2026-01-31 09:20:56.685075151 +0000 UTC m=+1201.178437583" observedRunningTime="2026-01-31 09:20:58.023143946 +0000 UTC m=+1202.516506388" watchObservedRunningTime="2026-01-31 09:20:58.033114612 +0000 UTC m=+1202.526477054" Jan 31 09:20:58 crc kubenswrapper[4830]: I0131 09:20:58.614427 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24" Jan 31 09:20:59 crc kubenswrapper[4830]: I0131 09:20:59.766952 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" Jan 31 09:21:02 crc kubenswrapper[4830]: I0131 09:21:02.940505 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-d9xtg" Jan 31 09:21:03 crc kubenswrapper[4830]: I0131 09:21:03.091656 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kgrns" Jan 31 09:21:03 crc kubenswrapper[4830]: I0131 09:21:03.306944 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sbhfn" Jan 31 09:21:03 crc kubenswrapper[4830]: I0131 09:21:03.669755 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rkvx7" Jan 31 09:21:05 crc kubenswrapper[4830]: I0131 09:21:05.043754 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-62c8t" 
event={"ID":"d4a8ef63-6ba0-4bb4-93b5-dc9fc1134bb5","Type":"ContainerStarted","Data":"98e0a8b7fb2e6c039e8db7e5839a5c873e660e51c34840c0e12203255868e673"} Jan 31 09:21:05 crc kubenswrapper[4830]: I0131 09:21:05.044473 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-62c8t" Jan 31 09:21:05 crc kubenswrapper[4830]: I0131 09:21:05.070197 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-62c8t" podStartSLOduration=4.347718306 podStartE2EDuration="1m3.070168565s" podCreationTimestamp="2026-01-31 09:20:02 +0000 UTC" firstStartedPulling="2026-01-31 09:20:05.995932522 +0000 UTC m=+1150.489294964" lastFinishedPulling="2026-01-31 09:21:04.718382781 +0000 UTC m=+1209.211745223" observedRunningTime="2026-01-31 09:21:05.059052196 +0000 UTC m=+1209.552414638" watchObservedRunningTime="2026-01-31 09:21:05.070168565 +0000 UTC m=+1209.563531007" Jan 31 09:21:14 crc kubenswrapper[4830]: I0131 09:21:14.426061 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-62c8t" Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.002344 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6wrv2"] Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.005531 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6wrv2" Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.015300 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.016142 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.016745 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-rqdmp" Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.017033 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.045556 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6wrv2"] Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.094735 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g55g6"] Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.096505 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g55g6" Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.110590 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.120656 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g55g6"] Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.197778 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bc277c3-23c6-4b23-90de-63e622971c44-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g55g6\" (UID: \"5bc277c3-23c6-4b23-90de-63e622971c44\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g55g6" Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.197849 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpdgm\" (UniqueName: \"kubernetes.io/projected/94ab9436-8a9d-4ad9-b2c2-676351a006d7-kube-api-access-kpdgm\") pod \"dnsmasq-dns-675f4bcbfc-6wrv2\" (UID: \"94ab9436-8a9d-4ad9-b2c2-676351a006d7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6wrv2" Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.197900 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94ab9436-8a9d-4ad9-b2c2-676351a006d7-config\") pod \"dnsmasq-dns-675f4bcbfc-6wrv2\" (UID: \"94ab9436-8a9d-4ad9-b2c2-676351a006d7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6wrv2" Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.197939 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksj9p\" (UniqueName: \"kubernetes.io/projected/5bc277c3-23c6-4b23-90de-63e622971c44-kube-api-access-ksj9p\") pod \"dnsmasq-dns-78dd6ddcc-g55g6\" (UID: \"5bc277c3-23c6-4b23-90de-63e622971c44\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g55g6" Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.198066 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc277c3-23c6-4b23-90de-63e622971c44-config\") pod \"dnsmasq-dns-78dd6ddcc-g55g6\" (UID: \"5bc277c3-23c6-4b23-90de-63e622971c44\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g55g6" Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.300820 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bc277c3-23c6-4b23-90de-63e622971c44-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g55g6\" (UID: \"5bc277c3-23c6-4b23-90de-63e622971c44\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g55g6" Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.300892 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpdgm\" (UniqueName: \"kubernetes.io/projected/94ab9436-8a9d-4ad9-b2c2-676351a006d7-kube-api-access-kpdgm\") pod \"dnsmasq-dns-675f4bcbfc-6wrv2\" (UID: \"94ab9436-8a9d-4ad9-b2c2-676351a006d7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6wrv2" Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.300923 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94ab9436-8a9d-4ad9-b2c2-676351a006d7-config\") pod \"dnsmasq-dns-675f4bcbfc-6wrv2\" (UID: \"94ab9436-8a9d-4ad9-b2c2-676351a006d7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6wrv2" 
Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.300946 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksj9p\" (UniqueName: \"kubernetes.io/projected/5bc277c3-23c6-4b23-90de-63e622971c44-kube-api-access-ksj9p\") pod \"dnsmasq-dns-78dd6ddcc-g55g6\" (UID: \"5bc277c3-23c6-4b23-90de-63e622971c44\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g55g6" Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.301012 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc277c3-23c6-4b23-90de-63e622971c44-config\") pod \"dnsmasq-dns-78dd6ddcc-g55g6\" (UID: \"5bc277c3-23c6-4b23-90de-63e622971c44\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g55g6" Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.302242 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bc277c3-23c6-4b23-90de-63e622971c44-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g55g6\" (UID: \"5bc277c3-23c6-4b23-90de-63e622971c44\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g55g6" Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.302259 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc277c3-23c6-4b23-90de-63e622971c44-config\") pod \"dnsmasq-dns-78dd6ddcc-g55g6\" (UID: \"5bc277c3-23c6-4b23-90de-63e622971c44\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g55g6" Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.302357 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94ab9436-8a9d-4ad9-b2c2-676351a006d7-config\") pod \"dnsmasq-dns-675f4bcbfc-6wrv2\" (UID: \"94ab9436-8a9d-4ad9-b2c2-676351a006d7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6wrv2" Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.327110 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksj9p\" (UniqueName: \"kubernetes.io/projected/5bc277c3-23c6-4b23-90de-63e622971c44-kube-api-access-ksj9p\") pod \"dnsmasq-dns-78dd6ddcc-g55g6\" (UID: \"5bc277c3-23c6-4b23-90de-63e622971c44\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g55g6" Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.327555 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpdgm\" (UniqueName: \"kubernetes.io/projected/94ab9436-8a9d-4ad9-b2c2-676351a006d7-kube-api-access-kpdgm\") pod \"dnsmasq-dns-675f4bcbfc-6wrv2\" (UID: \"94ab9436-8a9d-4ad9-b2c2-676351a006d7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6wrv2" Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.348378 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6wrv2" Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.433456 4830 util.go:30] "No sandbox for pod can be found. 
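The three-step progression above (VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded) is the kubelet's volume reconciler materializing each dnsmasq pod's volumes: two ConfigMap-backed volumes named config and dns-svc, plus the auto-injected projected service-account token (the kube-api-access-* volumes). In k8s.io/api/core/v1 terms, the declared part of that pod spec would look roughly like this (a sketch; the backing ConfigMap names are inferred from the "Caches populated" lines above, not taken from the operator's actual manifest):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// Sketch of the two declared volumes the kubelet is mounting above. The
// kube-api-access-* volume is injected by the API server as a projected
// token volume (service-account token, kube-root-ca.crt, namespace) and
// is not declared in the pod spec.
func dnsmasqVolumes() []corev1.Volume {
	return []corev1.Volume{
		{
			Name: "config",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					// Assumed mapping to the "dns" ConfigMap cached above.
					LocalObjectReference: corev1.LocalObjectReference{Name: "dns"},
				},
			},
		},
		{
			Name: "dns-svc",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "dns-svc"},
				},
			},
		},
	}
}

func main() {
	for _, v := range dnsmasqVolumes() {
		fmt.Println(v.Name)
	}
}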
Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.433456 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g55g6"
Jan 31 09:21:31 crc kubenswrapper[4830]: I0131 09:21:31.887766 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6wrv2"]
Jan 31 09:21:32 crc kubenswrapper[4830]: W0131 09:21:32.031411 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bc277c3_23c6_4b23_90de_63e622971c44.slice/crio-bc8ee11c19343bfd12fbbacced16e8bb8dd4ad0f70cd16b4dcd69b0827dc3b74 WatchSource:0}: Error finding container bc8ee11c19343bfd12fbbacced16e8bb8dd4ad0f70cd16b4dcd69b0827dc3b74: Status 404 returned error can't find the container with id bc8ee11c19343bfd12fbbacced16e8bb8dd4ad0f70cd16b4dcd69b0827dc3b74
Jan 31 09:21:32 crc kubenswrapper[4830]: I0131 09:21:32.035480 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g55g6"]
Jan 31 09:21:32 crc kubenswrapper[4830]: I0131 09:21:32.594075 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-6wrv2" event={"ID":"94ab9436-8a9d-4ad9-b2c2-676351a006d7","Type":"ContainerStarted","Data":"d180f3b6444092f6b002974cc29a825f5102be0c222502b68e1813410d910ef3"}
Jan 31 09:21:32 crc kubenswrapper[4830]: I0131 09:21:32.600283 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-g55g6" event={"ID":"5bc277c3-23c6-4b23-90de-63e622971c44","Type":"ContainerStarted","Data":"bc8ee11c19343bfd12fbbacced16e8bb8dd4ad0f70cd16b4dcd69b0827dc3b74"}
Jan 31 09:21:33 crc kubenswrapper[4830]: I0131 09:21:33.747130 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6wrv2"]
Jan 31 09:21:33 crc kubenswrapper[4830]: I0131 09:21:33.818788 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7f4p6"]
Jan 31 09:21:33 crc kubenswrapper[4830]: I0131 09:21:33.820589 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7f4p6"
Jan 31 09:21:33 crc kubenswrapper[4830]: I0131 09:21:33.852800 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7f4p6"]
Jan 31 09:21:33 crc kubenswrapper[4830]: I0131 09:21:33.970036 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c18c27da-a436-41fe-b4c9-bb0187e10694-dns-svc\") pod \"dnsmasq-dns-666b6646f7-7f4p6\" (UID: \"c18c27da-a436-41fe-b4c9-bb0187e10694\") " pod="openstack/dnsmasq-dns-666b6646f7-7f4p6"
Jan 31 09:21:33 crc kubenswrapper[4830]: I0131 09:21:33.970208 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c18c27da-a436-41fe-b4c9-bb0187e10694-config\") pod \"dnsmasq-dns-666b6646f7-7f4p6\" (UID: \"c18c27da-a436-41fe-b4c9-bb0187e10694\") " pod="openstack/dnsmasq-dns-666b6646f7-7f4p6"
Jan 31 09:21:33 crc kubenswrapper[4830]: I0131 09:21:33.970248 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvrkk\" (UniqueName: \"kubernetes.io/projected/c18c27da-a436-41fe-b4c9-bb0187e10694-kube-api-access-lvrkk\") pod \"dnsmasq-dns-666b6646f7-7f4p6\" (UID: \"c18c27da-a436-41fe-b4c9-bb0187e10694\") " pod="openstack/dnsmasq-dns-666b6646f7-7f4p6"
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.073144 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c18c27da-a436-41fe-b4c9-bb0187e10694-config\") pod \"dnsmasq-dns-666b6646f7-7f4p6\" (UID: \"c18c27da-a436-41fe-b4c9-bb0187e10694\") " pod="openstack/dnsmasq-dns-666b6646f7-7f4p6"
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.073230 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvrkk\" (UniqueName: \"kubernetes.io/projected/c18c27da-a436-41fe-b4c9-bb0187e10694-kube-api-access-lvrkk\") pod \"dnsmasq-dns-666b6646f7-7f4p6\" (UID: \"c18c27da-a436-41fe-b4c9-bb0187e10694\") " pod="openstack/dnsmasq-dns-666b6646f7-7f4p6"
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.073288 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c18c27da-a436-41fe-b4c9-bb0187e10694-dns-svc\") pod \"dnsmasq-dns-666b6646f7-7f4p6\" (UID: \"c18c27da-a436-41fe-b4c9-bb0187e10694\") " pod="openstack/dnsmasq-dns-666b6646f7-7f4p6"
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.074844 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c18c27da-a436-41fe-b4c9-bb0187e10694-dns-svc\") pod \"dnsmasq-dns-666b6646f7-7f4p6\" (UID: \"c18c27da-a436-41fe-b4c9-bb0187e10694\") " pod="openstack/dnsmasq-dns-666b6646f7-7f4p6"
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.074849 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c18c27da-a436-41fe-b4c9-bb0187e10694-config\") pod \"dnsmasq-dns-666b6646f7-7f4p6\" (UID: \"c18c27da-a436-41fe-b4c9-bb0187e10694\") " pod="openstack/dnsmasq-dns-666b6646f7-7f4p6"
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.113686 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvrkk\" (UniqueName: \"kubernetes.io/projected/c18c27da-a436-41fe-b4c9-bb0187e10694-kube-api-access-lvrkk\") pod \"dnsmasq-dns-666b6646f7-7f4p6\" (UID: \"c18c27da-a436-41fe-b4c9-bb0187e10694\") " pod="openstack/dnsmasq-dns-666b6646f7-7f4p6"
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.165564 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7f4p6"
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.326666 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g55g6"]
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.382470 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rntrf"]
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.385625 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rntrf"
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.441167 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rntrf"]
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.499427 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee43d170-0675-460c-88e1-5e19a0db0e37-config\") pod \"dnsmasq-dns-57d769cc4f-rntrf\" (UID: \"ee43d170-0675-460c-88e1-5e19a0db0e37\") " pod="openstack/dnsmasq-dns-57d769cc4f-rntrf"
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.499793 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee43d170-0675-460c-88e1-5e19a0db0e37-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rntrf\" (UID: \"ee43d170-0675-460c-88e1-5e19a0db0e37\") " pod="openstack/dnsmasq-dns-57d769cc4f-rntrf"
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.499872 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45nhf\" (UniqueName: \"kubernetes.io/projected/ee43d170-0675-460c-88e1-5e19a0db0e37-kube-api-access-45nhf\") pod \"dnsmasq-dns-57d769cc4f-rntrf\" (UID: \"ee43d170-0675-460c-88e1-5e19a0db0e37\") " pod="openstack/dnsmasq-dns-57d769cc4f-rntrf"
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.609028 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee43d170-0675-460c-88e1-5e19a0db0e37-config\") pod \"dnsmasq-dns-57d769cc4f-rntrf\" (UID: \"ee43d170-0675-460c-88e1-5e19a0db0e37\") " pod="openstack/dnsmasq-dns-57d769cc4f-rntrf"
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.609147 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee43d170-0675-460c-88e1-5e19a0db0e37-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rntrf\" (UID: \"ee43d170-0675-460c-88e1-5e19a0db0e37\") " pod="openstack/dnsmasq-dns-57d769cc4f-rntrf"
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.609183 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45nhf\" (UniqueName: \"kubernetes.io/projected/ee43d170-0675-460c-88e1-5e19a0db0e37-kube-api-access-45nhf\") pod \"dnsmasq-dns-57d769cc4f-rntrf\" (UID: \"ee43d170-0675-460c-88e1-5e19a0db0e37\") " pod="openstack/dnsmasq-dns-57d769cc4f-rntrf"
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.610423 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee43d170-0675-460c-88e1-5e19a0db0e37-config\") pod \"dnsmasq-dns-57d769cc4f-rntrf\" (UID: \"ee43d170-0675-460c-88e1-5e19a0db0e37\") " pod="openstack/dnsmasq-dns-57d769cc4f-rntrf"
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.610521 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee43d170-0675-460c-88e1-5e19a0db0e37-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rntrf\" (UID: \"ee43d170-0675-460c-88e1-5e19a0db0e37\") " pod="openstack/dnsmasq-dns-57d769cc4f-rntrf"
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.631853 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45nhf\" (UniqueName: \"kubernetes.io/projected/ee43d170-0675-460c-88e1-5e19a0db0e37-kube-api-access-45nhf\") pod \"dnsmasq-dns-57d769cc4f-rntrf\" (UID: \"ee43d170-0675-460c-88e1-5e19a0db0e37\") " pod="openstack/dnsmasq-dns-57d769cc4f-rntrf"
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.784431 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rntrf"
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.991348 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.993522 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.996947 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.998562 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.998786 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.998845 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-2xrsf"
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.998981 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.999410 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 31 09:21:34 crc kubenswrapper[4830]: I0131 09:21:34.999857 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.028433 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.061560 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"]
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.066843 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.121712 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 31 09:21:35 crc kubenswrapper[4830]: W0131 09:21:35.167879 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc18c27da_a436_41fe_b4c9_bb0187e10694.slice/crio-2d3cf6f15dceb79f2aad4ab98e70bd9560d8ff62c9dcebdd69bba2b1ff1542e2 WatchSource:0}: Error finding container 2d3cf6f15dceb79f2aad4ab98e70bd9560d8ff62c9dcebdd69bba2b1ff1542e2: Status 404 returned error can't find the container with id 2d3cf6f15dceb79f2aad4ab98e70bd9560d8ff62c9dcebdd69bba2b1ff1542e2
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.234202 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/759f3f02-a9de-4e01-97f9-a97424c592a6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.234353 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/759f3f02-a9de-4e01-97f9-a97424c592a6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.234421 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/759f3f02-a9de-4e01-97f9-a97424c592a6-config-data\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.234514 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/759f3f02-a9de-4e01-97f9-a97424c592a6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.234812 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/759f3f02-a9de-4e01-97f9-a97424c592a6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.234893 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bdfc\" (UniqueName: \"kubernetes.io/projected/759f3f02-a9de-4e01-97f9-a97424c592a6-kube-api-access-2bdfc\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.234928 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/759f3f02-a9de-4e01-97f9-a97424c592a6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.234945 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/759f3f02-a9de-4e01-97f9-a97424c592a6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.234990 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/759f3f02-a9de-4e01-97f9-a97424c592a6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.235027 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/759f3f02-a9de-4e01-97f9-a97424c592a6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.235050 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8d8f504e-714c-4ccf-bedc-e403cf20e25c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d8f504e-714c-4ccf-bedc-e403cf20e25c\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.241995 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.257902 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.279434 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.312256 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7f4p6"]
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.337318 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f60eed79-badf-4909-869b-edbfdfb774ac-pod-info\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.337425 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8e40a106-74cd-45ea-a936-c34daaf9ce6e-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.337446 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8e40a106-74cd-45ea-a936-c34daaf9ce6e-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.337484 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/759f3f02-a9de-4e01-97f9-a97424c592a6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.337511 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clnl2\" (UniqueName: \"kubernetes.io/projected/f60eed79-badf-4909-869b-edbfdfb774ac-kube-api-access-clnl2\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.337563 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/759f3f02-a9de-4e01-97f9-a97424c592a6-config-data\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.337607 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/759f3f02-a9de-4e01-97f9-a97424c592a6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.337658 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6f485614-09b6-423c-b642-4f3bd84a028a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f485614-09b6-423c-b642-4f3bd84a028a\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.337685 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8e40a106-74cd-45ea-a936-c34daaf9ce6e-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.337708 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f60eed79-badf-4909-869b-edbfdfb774ac-server-conf\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.337788 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f60eed79-badf-4909-869b-edbfdfb774ac-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.337834 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8e40a106-74cd-45ea-a936-c34daaf9ce6e-server-conf\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.337858 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8e40a106-74cd-45ea-a936-c34daaf9ce6e-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.337909 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f60eed79-badf-4909-869b-edbfdfb774ac-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.337944 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f60eed79-badf-4909-869b-edbfdfb774ac-config-data\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.337977 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d037c5b4-6d32-48fc-a02a-dab15480e5ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d037c5b4-6d32-48fc-a02a-dab15480e5ed\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.338006 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f60eed79-badf-4909-869b-edbfdfb774ac-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.338033 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8e40a106-74cd-45ea-a936-c34daaf9ce6e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.338057 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8e40a106-74cd-45ea-a936-c34daaf9ce6e-pod-info\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.338084 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e40a106-74cd-45ea-a936-c34daaf9ce6e-config-data\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.338110 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f60eed79-badf-4909-869b-edbfdfb774ac-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.338137 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/759f3f02-a9de-4e01-97f9-a97424c592a6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.338170 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bdfc\" (UniqueName: \"kubernetes.io/projected/759f3f02-a9de-4e01-97f9-a97424c592a6-kube-api-access-2bdfc\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.338195 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f60eed79-badf-4909-869b-edbfdfb774ac-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.338219 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f60eed79-badf-4909-869b-edbfdfb774ac-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.338249 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/759f3f02-a9de-4e01-97f9-a97424c592a6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.338275 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/759f3f02-a9de-4e01-97f9-a97424c592a6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.338300 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8e40a106-74cd-45ea-a936-c34daaf9ce6e-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.338325 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7jqp\" (UniqueName: \"kubernetes.io/projected/8e40a106-74cd-45ea-a936-c34daaf9ce6e-kube-api-access-t7jqp\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.338359 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/759f3f02-a9de-4e01-97f9-a97424c592a6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.338390 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/759f3f02-a9de-4e01-97f9-a97424c592a6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.338421 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8d8f504e-714c-4ccf-bedc-e403cf20e25c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d8f504e-714c-4ccf-bedc-e403cf20e25c\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.338490 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/759f3f02-a9de-4e01-97f9-a97424c592a6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.342457 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/759f3f02-a9de-4e01-97f9-a97424c592a6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.342774 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/759f3f02-a9de-4e01-97f9-a97424c592a6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.352259 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/759f3f02-a9de-4e01-97f9-a97424c592a6-config-data\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.352817 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/759f3f02-a9de-4e01-97f9-a97424c592a6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.363471 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/759f3f02-a9de-4e01-97f9-a97424c592a6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.368091 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/759f3f02-a9de-4e01-97f9-a97424c592a6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.381177 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/759f3f02-a9de-4e01-97f9-a97424c592a6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.383956 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/759f3f02-a9de-4e01-97f9-a97424c592a6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.390284 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/759f3f02-a9de-4e01-97f9-a97424c592a6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.391339 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.391412 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8d8f504e-714c-4ccf-bedc-e403cf20e25c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d8f504e-714c-4ccf-bedc-e403cf20e25c\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e0be0427d62afa185ccf79532f26c2e598786171061941a7d0cad4a7d243c930/globalmount\"" pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.397009 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bdfc\" (UniqueName: \"kubernetes.io/projected/759f3f02-a9de-4e01-97f9-a97424c592a6-kube-api-access-2bdfc\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.440487 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f60eed79-badf-4909-869b-edbfdfb774ac-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.440585 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f60eed79-badf-4909-869b-edbfdfb774ac-config-data\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.440636 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d037c5b4-6d32-48fc-a02a-dab15480e5ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d037c5b4-6d32-48fc-a02a-dab15480e5ed\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.440673 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f60eed79-badf-4909-869b-edbfdfb774ac-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.440704 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8e40a106-74cd-45ea-a936-c34daaf9ce6e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2"
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.440756 4830
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8e40a106-74cd-45ea-a936-c34daaf9ce6e-pod-info\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.440795 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e40a106-74cd-45ea-a936-c34daaf9ce6e-config-data\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.440828 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f60eed79-badf-4909-869b-edbfdfb774ac-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.440865 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f60eed79-badf-4909-869b-edbfdfb774ac-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.440890 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f60eed79-badf-4909-869b-edbfdfb774ac-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.440923 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8e40a106-74cd-45ea-a936-c34daaf9ce6e-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.440954 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7jqp\" (UniqueName: \"kubernetes.io/projected/8e40a106-74cd-45ea-a936-c34daaf9ce6e-kube-api-access-t7jqp\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.441072 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f60eed79-badf-4909-869b-edbfdfb774ac-pod-info\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.441116 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8e40a106-74cd-45ea-a936-c34daaf9ce6e-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.441140 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8e40a106-74cd-45ea-a936-c34daaf9ce6e-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: 
\"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.441199 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clnl2\" (UniqueName: \"kubernetes.io/projected/f60eed79-badf-4909-869b-edbfdfb774ac-kube-api-access-clnl2\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.441302 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6f485614-09b6-423c-b642-4f3bd84a028a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f485614-09b6-423c-b642-4f3bd84a028a\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.441331 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8e40a106-74cd-45ea-a936-c34daaf9ce6e-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.441360 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f60eed79-badf-4909-869b-edbfdfb774ac-server-conf\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.441421 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f60eed79-badf-4909-869b-edbfdfb774ac-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.441485 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8e40a106-74cd-45ea-a936-c34daaf9ce6e-server-conf\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.441521 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8e40a106-74cd-45ea-a936-c34daaf9ce6e-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.443596 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8e40a106-74cd-45ea-a936-c34daaf9ce6e-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.444121 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f60eed79-badf-4909-869b-edbfdfb774ac-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.444839 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/f60eed79-badf-4909-869b-edbfdfb774ac-config-data\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.445211 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8e40a106-74cd-45ea-a936-c34daaf9ce6e-pod-info\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.445552 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f60eed79-badf-4909-869b-edbfdfb774ac-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.446810 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8e40a106-74cd-45ea-a936-c34daaf9ce6e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.448131 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e40a106-74cd-45ea-a936-c34daaf9ce6e-config-data\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.449623 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f60eed79-badf-4909-869b-edbfdfb774ac-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.450769 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8e40a106-74cd-45ea-a936-c34daaf9ce6e-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.450869 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f60eed79-badf-4909-869b-edbfdfb774ac-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.454000 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8e40a106-74cd-45ea-a936-c34daaf9ce6e-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.455528 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f60eed79-badf-4909-869b-edbfdfb774ac-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.457246 4830 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8e40a106-74cd-45ea-a936-c34daaf9ce6e-server-conf\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.449453 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.458249 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d037c5b4-6d32-48fc-a02a-dab15480e5ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d037c5b4-6d32-48fc-a02a-dab15480e5ed\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/60cb0cccd96a2f4d629493269d21de332d9dc63192e998c13cd23571d9652a7c/globalmount\"" pod="openstack/rabbitmq-server-1" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.460707 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f60eed79-badf-4909-869b-edbfdfb774ac-server-conf\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.465010 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f60eed79-badf-4909-869b-edbfdfb774ac-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.467484 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8e40a106-74cd-45ea-a936-c34daaf9ce6e-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.468264 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.468303 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6f485614-09b6-423c-b642-4f3bd84a028a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f485614-09b6-423c-b642-4f3bd84a028a\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/083852915e54f892af909192feeccd2dd3f692a89baba0605ab9244421fa6fc3/globalmount\"" pod="openstack/rabbitmq-server-2" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.480752 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f60eed79-badf-4909-869b-edbfdfb774ac-pod-info\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.493313 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8e40a106-74cd-45ea-a936-c34daaf9ce6e-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.496686 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7jqp\" (UniqueName: \"kubernetes.io/projected/8e40a106-74cd-45ea-a936-c34daaf9ce6e-kube-api-access-t7jqp\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.502944 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clnl2\" (UniqueName: \"kubernetes.io/projected/f60eed79-badf-4909-869b-edbfdfb774ac-kube-api-access-clnl2\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.514708 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8d8f504e-714c-4ccf-bedc-e403cf20e25c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d8f504e-714c-4ccf-bedc-e403cf20e25c\") pod \"rabbitmq-server-0\" (UID: \"759f3f02-a9de-4e01-97f9-a97424c592a6\") " pod="openstack/rabbitmq-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.526343 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.553402 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.556006 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.557517 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.557839 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.557982 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-kqg76" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.558533 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.558657 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.559121 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.560074 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.563044 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d037c5b4-6d32-48fc-a02a-dab15480e5ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d037c5b4-6d32-48fc-a02a-dab15480e5ed\") pod \"rabbitmq-server-1\" (UID: \"f60eed79-badf-4909-869b-edbfdfb774ac\") " pod="openstack/rabbitmq-server-1" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.567254 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rntrf"] Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.568983 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6f485614-09b6-423c-b642-4f3bd84a028a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f485614-09b6-423c-b642-4f3bd84a028a\") pod \"rabbitmq-server-2\" (UID: \"8e40a106-74cd-45ea-a936-c34daaf9ce6e\") " pod="openstack/rabbitmq-server-2" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.589918 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.651593 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/18af810d-9de4-4822-86d2-bb7e8a8a449b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.651686 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/18af810d-9de4-4822-86d2-bb7e8a8a449b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.651776 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/18af810d-9de4-4822-86d2-bb7e8a8a449b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.651846 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18af810d-9de4-4822-86d2-bb7e8a8a449b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.652196 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.654385 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-46eee7bd-8293-4ccd-8ae2-716ad9fd8039\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46eee7bd-8293-4ccd-8ae2-716ad9fd8039\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.654528 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/18af810d-9de4-4822-86d2-bb7e8a8a449b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.654584 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/18af810d-9de4-4822-86d2-bb7e8a8a449b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.654617 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/18af810d-9de4-4822-86d2-bb7e8a8a449b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.654643 4830 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/18af810d-9de4-4822-86d2-bb7e8a8a449b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.654733 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2w7k\" (UniqueName: \"kubernetes.io/projected/18af810d-9de4-4822-86d2-bb7e8a8a449b-kube-api-access-p2w7k\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.654853 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/18af810d-9de4-4822-86d2-bb7e8a8a449b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.685286 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7f4p6" event={"ID":"c18c27da-a436-41fe-b4c9-bb0187e10694","Type":"ContainerStarted","Data":"2d3cf6f15dceb79f2aad4ab98e70bd9560d8ff62c9dcebdd69bba2b1ff1542e2"} Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.693310 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rntrf" event={"ID":"ee43d170-0675-460c-88e1-5e19a0db0e37","Type":"ContainerStarted","Data":"c93621416fff84b74c56b1dcb53a4c301a954e5a950c665d5457014ea279e468"} Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.772249 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/18af810d-9de4-4822-86d2-bb7e8a8a449b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.772333 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18af810d-9de4-4822-86d2-bb7e8a8a449b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.772370 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-46eee7bd-8293-4ccd-8ae2-716ad9fd8039\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46eee7bd-8293-4ccd-8ae2-716ad9fd8039\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.772413 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/18af810d-9de4-4822-86d2-bb7e8a8a449b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.772449 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/18af810d-9de4-4822-86d2-bb7e8a8a449b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.772473 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/18af810d-9de4-4822-86d2-bb7e8a8a449b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.772513 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/18af810d-9de4-4822-86d2-bb7e8a8a449b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.772551 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2w7k\" (UniqueName: \"kubernetes.io/projected/18af810d-9de4-4822-86d2-bb7e8a8a449b-kube-api-access-p2w7k\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.772600 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/18af810d-9de4-4822-86d2-bb7e8a8a449b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.772735 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/18af810d-9de4-4822-86d2-bb7e8a8a449b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.772757 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/18af810d-9de4-4822-86d2-bb7e8a8a449b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.774079 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/18af810d-9de4-4822-86d2-bb7e8a8a449b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.774484 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/18af810d-9de4-4822-86d2-bb7e8a8a449b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.774806 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/18af810d-9de4-4822-86d2-bb7e8a8a449b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.775127 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18af810d-9de4-4822-86d2-bb7e8a8a449b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.775798 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/18af810d-9de4-4822-86d2-bb7e8a8a449b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.777971 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/18af810d-9de4-4822-86d2-bb7e8a8a449b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.779097 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/18af810d-9de4-4822-86d2-bb7e8a8a449b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.780357 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/18af810d-9de4-4822-86d2-bb7e8a8a449b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.783450 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/18af810d-9de4-4822-86d2-bb7e8a8a449b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.799685 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.799767 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-46eee7bd-8293-4ccd-8ae2-716ad9fd8039\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46eee7bd-8293-4ccd-8ae2-716ad9fd8039\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/40d9c20e0fa8978e0eed904adc2a30fbad9b0eabe83eb0834e2e4c5212f639ff/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.816640 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2w7k\" (UniqueName: \"kubernetes.io/projected/18af810d-9de4-4822-86d2-bb7e8a8a449b-kube-api-access-p2w7k\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.865491 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 31 09:21:35 crc kubenswrapper[4830]: I0131 09:21:35.936109 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-46eee7bd-8293-4ccd-8ae2-716ad9fd8039\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46eee7bd-8293-4ccd-8ae2-716ad9fd8039\") pod \"rabbitmq-cell1-server-0\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.224525 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.351765 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 31 09:21:36 crc kubenswrapper[4830]: W0131 09:21:36.358600 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf60eed79_badf_4909_869b_edbfdfb774ac.slice/crio-9fa0fba36bd5965d593d0aaf3908890f607f78406a9ffc77e1a0826708999699 WatchSource:0}: Error finding container 9fa0fba36bd5965d593d0aaf3908890f607f78406a9ffc77e1a0826708999699: Status 404 returned error can't find the container with id 9fa0fba36bd5965d593d0aaf3908890f607f78406a9ffc77e1a0826708999699 Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.574297 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.610585 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.613110 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.622001 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.626180 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-67p2l" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.627413 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.629708 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.630272 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.632935 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 31 09:21:36 crc kubenswrapper[4830]: W0131 09:21:36.695226 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e40a106_74cd_45ea_a936_c34daaf9ce6e.slice/crio-1536681b25fe5e19112bbdfdf5b67459abf5832127f90b51a4fdb0aa963c2523 WatchSource:0}: Error finding container 1536681b25fe5e19112bbdfdf5b67459abf5832127f90b51a4fdb0aa963c2523: Status 404 returned error can't find the container with id 1536681b25fe5e19112bbdfdf5b67459abf5832127f90b51a4fdb0aa963c2523 Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.698790 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.717923 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2ca5d2f1-673e-4173-848a-8d32d33b8bcc-kolla-config\") pod \"openstack-galera-0\" (UID: \"2ca5d2f1-673e-4173-848a-8d32d33b8bcc\") " pod="openstack/openstack-galera-0" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.718804 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ca5d2f1-673e-4173-848a-8d32d33b8bcc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2ca5d2f1-673e-4173-848a-8d32d33b8bcc\") " pod="openstack/openstack-galera-0" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.719029 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"f60eed79-badf-4909-869b-edbfdfb774ac","Type":"ContainerStarted","Data":"9fa0fba36bd5965d593d0aaf3908890f607f78406a9ffc77e1a0826708999699"} Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.719678 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmswf\" (UniqueName: \"kubernetes.io/projected/2ca5d2f1-673e-4173-848a-8d32d33b8bcc-kube-api-access-bmswf\") pod \"openstack-galera-0\" (UID: \"2ca5d2f1-673e-4173-848a-8d32d33b8bcc\") " pod="openstack/openstack-galera-0" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.719797 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ca5d2f1-673e-4173-848a-8d32d33b8bcc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"2ca5d2f1-673e-4173-848a-8d32d33b8bcc\") " pod="openstack/openstack-galera-0" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.719891 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0352c7a1-3ec3-4161-9b3e-8e48014cd389\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0352c7a1-3ec3-4161-9b3e-8e48014cd389\") pod \"openstack-galera-0\" (UID: \"2ca5d2f1-673e-4173-848a-8d32d33b8bcc\") " pod="openstack/openstack-galera-0" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.720122 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2ca5d2f1-673e-4173-848a-8d32d33b8bcc-config-data-default\") pod \"openstack-galera-0\" (UID: \"2ca5d2f1-673e-4173-848a-8d32d33b8bcc\") " pod="openstack/openstack-galera-0" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.720177 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2ca5d2f1-673e-4173-848a-8d32d33b8bcc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2ca5d2f1-673e-4173-848a-8d32d33b8bcc\") " pod="openstack/openstack-galera-0" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.721195 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"759f3f02-a9de-4e01-97f9-a97424c592a6","Type":"ContainerStarted","Data":"8bbec4230b13b5b0c22f4a4a352a288949f5625dc743058954802dab27044c28"} Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.724213 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ca5d2f1-673e-4173-848a-8d32d33b8bcc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2ca5d2f1-673e-4173-848a-8d32d33b8bcc\") " pod="openstack/openstack-galera-0" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.826233 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmswf\" (UniqueName: \"kubernetes.io/projected/2ca5d2f1-673e-4173-848a-8d32d33b8bcc-kube-api-access-bmswf\") pod \"openstack-galera-0\" (UID: \"2ca5d2f1-673e-4173-848a-8d32d33b8bcc\") " pod="openstack/openstack-galera-0" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.826321 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ca5d2f1-673e-4173-848a-8d32d33b8bcc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2ca5d2f1-673e-4173-848a-8d32d33b8bcc\") " pod="openstack/openstack-galera-0" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.826375 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0352c7a1-3ec3-4161-9b3e-8e48014cd389\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0352c7a1-3ec3-4161-9b3e-8e48014cd389\") pod \"openstack-galera-0\" (UID: \"2ca5d2f1-673e-4173-848a-8d32d33b8bcc\") " pod="openstack/openstack-galera-0" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.826482 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2ca5d2f1-673e-4173-848a-8d32d33b8bcc-config-data-default\") pod \"openstack-galera-0\" (UID: \"2ca5d2f1-673e-4173-848a-8d32d33b8bcc\") " pod="openstack/openstack-galera-0" 
Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.826523 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2ca5d2f1-673e-4173-848a-8d32d33b8bcc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2ca5d2f1-673e-4173-848a-8d32d33b8bcc\") " pod="openstack/openstack-galera-0" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.828309 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2ca5d2f1-673e-4173-848a-8d32d33b8bcc-config-data-default\") pod \"openstack-galera-0\" (UID: \"2ca5d2f1-673e-4173-848a-8d32d33b8bcc\") " pod="openstack/openstack-galera-0" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.829213 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2ca5d2f1-673e-4173-848a-8d32d33b8bcc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2ca5d2f1-673e-4173-848a-8d32d33b8bcc\") " pod="openstack/openstack-galera-0" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.829316 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ca5d2f1-673e-4173-848a-8d32d33b8bcc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2ca5d2f1-673e-4173-848a-8d32d33b8bcc\") " pod="openstack/openstack-galera-0" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.829471 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2ca5d2f1-673e-4173-848a-8d32d33b8bcc-kolla-config\") pod \"openstack-galera-0\" (UID: \"2ca5d2f1-673e-4173-848a-8d32d33b8bcc\") " pod="openstack/openstack-galera-0" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.829517 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ca5d2f1-673e-4173-848a-8d32d33b8bcc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2ca5d2f1-673e-4173-848a-8d32d33b8bcc\") " pod="openstack/openstack-galera-0" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.832954 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ca5d2f1-673e-4173-848a-8d32d33b8bcc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2ca5d2f1-673e-4173-848a-8d32d33b8bcc\") " pod="openstack/openstack-galera-0" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.833986 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2ca5d2f1-673e-4173-848a-8d32d33b8bcc-kolla-config\") pod \"openstack-galera-0\" (UID: \"2ca5d2f1-673e-4173-848a-8d32d33b8bcc\") " pod="openstack/openstack-galera-0" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.838975 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ca5d2f1-673e-4173-848a-8d32d33b8bcc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2ca5d2f1-673e-4173-848a-8d32d33b8bcc\") " pod="openstack/openstack-galera-0" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.840765 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2ca5d2f1-673e-4173-848a-8d32d33b8bcc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2ca5d2f1-673e-4173-848a-8d32d33b8bcc\") " pod="openstack/openstack-galera-0" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.853421 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmswf\" (UniqueName: \"kubernetes.io/projected/2ca5d2f1-673e-4173-848a-8d32d33b8bcc-kube-api-access-bmswf\") pod \"openstack-galera-0\" (UID: \"2ca5d2f1-673e-4173-848a-8d32d33b8bcc\") " pod="openstack/openstack-galera-0" Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.935403 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 31 09:21:36 crc kubenswrapper[4830]: I0131 09:21:36.935477 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0352c7a1-3ec3-4161-9b3e-8e48014cd389\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0352c7a1-3ec3-4161-9b3e-8e48014cd389\") pod \"openstack-galera-0\" (UID: \"2ca5d2f1-673e-4173-848a-8d32d33b8bcc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f43c4d4beff04008dc12f804d12260bbfee7958bd31251d861726b9e4dfa1754/globalmount\"" pod="openstack/openstack-galera-0" Jan 31 09:21:37 crc kubenswrapper[4830]: I0131 09:21:37.188244 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 09:21:37 crc kubenswrapper[4830]: I0131 09:21:37.361644 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0352c7a1-3ec3-4161-9b3e-8e48014cd389\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0352c7a1-3ec3-4161-9b3e-8e48014cd389\") pod \"openstack-galera-0\" (UID: \"2ca5d2f1-673e-4173-848a-8d32d33b8bcc\") " pod="openstack/openstack-galera-0" Jan 31 09:21:37 crc kubenswrapper[4830]: I0131 09:21:37.568186 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 31 09:21:37 crc kubenswrapper[4830]: I0131 09:21:37.766848 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"18af810d-9de4-4822-86d2-bb7e8a8a449b","Type":"ContainerStarted","Data":"fd552b526edb7808d77049658f8fe34756e8f14d369a3b2e8790070a45de1166"} Jan 31 09:21:37 crc kubenswrapper[4830]: I0131 09:21:37.771478 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"8e40a106-74cd-45ea-a936-c34daaf9ce6e","Type":"ContainerStarted","Data":"1536681b25fe5e19112bbdfdf5b67459abf5832127f90b51a4fdb0aa963c2523"} Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.326953 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.330968 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.343353 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.343642 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.343788 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-gr8z4" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.343884 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.344171 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.427857 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h526j\" (UniqueName: \"kubernetes.io/projected/f37f41b4-3b56-45f9-a368-0f772bcf3002-kube-api-access-h526j\") pod \"openstack-cell1-galera-0\" (UID: \"f37f41b4-3b56-45f9-a368-0f772bcf3002\") " pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.427968 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f37f41b4-3b56-45f9-a368-0f772bcf3002-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f37f41b4-3b56-45f9-a368-0f772bcf3002\") " pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.428049 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f37f41b4-3b56-45f9-a368-0f772bcf3002-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f37f41b4-3b56-45f9-a368-0f772bcf3002\") " pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.428081 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f37f41b4-3b56-45f9-a368-0f772bcf3002-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f37f41b4-3b56-45f9-a368-0f772bcf3002\") " pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.428125 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f37f41b4-3b56-45f9-a368-0f772bcf3002-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f37f41b4-3b56-45f9-a368-0f772bcf3002\") " pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.428161 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f37f41b4-3b56-45f9-a368-0f772bcf3002-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f37f41b4-3b56-45f9-a368-0f772bcf3002\") " pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.429898 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-3cef8912-5c5c-4656-987d-fe4dc9f1045a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cef8912-5c5c-4656-987d-fe4dc9f1045a\") pod \"openstack-cell1-galera-0\" (UID: \"f37f41b4-3b56-45f9-a368-0f772bcf3002\") " pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.429982 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f37f41b4-3b56-45f9-a368-0f772bcf3002-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f37f41b4-3b56-45f9-a368-0f772bcf3002\") " pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.462294 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.464043 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.482250 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.482520 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.482684 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-vtd62" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.525890 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.535782 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3cef8912-5c5c-4656-987d-fe4dc9f1045a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cef8912-5c5c-4656-987d-fe4dc9f1045a\") pod \"openstack-cell1-galera-0\" (UID: \"f37f41b4-3b56-45f9-a368-0f772bcf3002\") " pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.535936 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f37f41b4-3b56-45f9-a368-0f772bcf3002-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f37f41b4-3b56-45f9-a368-0f772bcf3002\") " pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.536069 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h526j\" (UniqueName: \"kubernetes.io/projected/f37f41b4-3b56-45f9-a368-0f772bcf3002-kube-api-access-h526j\") pod \"openstack-cell1-galera-0\" (UID: \"f37f41b4-3b56-45f9-a368-0f772bcf3002\") " pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.536236 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f37f41b4-3b56-45f9-a368-0f772bcf3002-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f37f41b4-3b56-45f9-a368-0f772bcf3002\") " pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.536529 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f37f41b4-3b56-45f9-a368-0f772bcf3002-kolla-config\") pod \"openstack-cell1-galera-0\" 
(UID: \"f37f41b4-3b56-45f9-a368-0f772bcf3002\") " pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.536574 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f37f41b4-3b56-45f9-a368-0f772bcf3002-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f37f41b4-3b56-45f9-a368-0f772bcf3002\") " pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.536688 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f37f41b4-3b56-45f9-a368-0f772bcf3002-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f37f41b4-3b56-45f9-a368-0f772bcf3002\") " pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.536777 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f37f41b4-3b56-45f9-a368-0f772bcf3002-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f37f41b4-3b56-45f9-a368-0f772bcf3002\") " pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.538670 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f37f41b4-3b56-45f9-a368-0f772bcf3002-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f37f41b4-3b56-45f9-a368-0f772bcf3002\") " pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.546679 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f37f41b4-3b56-45f9-a368-0f772bcf3002-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f37f41b4-3b56-45f9-a368-0f772bcf3002\") " pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.547261 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f37f41b4-3b56-45f9-a368-0f772bcf3002-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f37f41b4-3b56-45f9-a368-0f772bcf3002\") " pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.549049 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.549116 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3cef8912-5c5c-4656-987d-fe4dc9f1045a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cef8912-5c5c-4656-987d-fe4dc9f1045a\") pod \"openstack-cell1-galera-0\" (UID: \"f37f41b4-3b56-45f9-a368-0f772bcf3002\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0051805e6aa9dcd8db85ce09eec78e7b691ae521068e45d01255d35a684489d8/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.592801 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f37f41b4-3b56-45f9-a368-0f772bcf3002-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f37f41b4-3b56-45f9-a368-0f772bcf3002\") " pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.594046 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f37f41b4-3b56-45f9-a368-0f772bcf3002-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f37f41b4-3b56-45f9-a368-0f772bcf3002\") " pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.595766 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f37f41b4-3b56-45f9-a368-0f772bcf3002-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f37f41b4-3b56-45f9-a368-0f772bcf3002\") " pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.644007 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h526j\" (UniqueName: \"kubernetes.io/projected/f37f41b4-3b56-45f9-a368-0f772bcf3002-kube-api-access-h526j\") pod \"openstack-cell1-galera-0\" (UID: \"f37f41b4-3b56-45f9-a368-0f772bcf3002\") " pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.648074 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b3c26555-4046-499e-96c9-5a83b8322d8e-kolla-config\") pod \"memcached-0\" (UID: \"b3c26555-4046-499e-96c9-5a83b8322d8e\") " pod="openstack/memcached-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.648671 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3c26555-4046-499e-96c9-5a83b8322d8e-config-data\") pod \"memcached-0\" (UID: \"b3c26555-4046-499e-96c9-5a83b8322d8e\") " pod="openstack/memcached-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.648796 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c26555-4046-499e-96c9-5a83b8322d8e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b3c26555-4046-499e-96c9-5a83b8322d8e\") " pod="openstack/memcached-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.648979 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c26555-4046-499e-96c9-5a83b8322d8e-combined-ca-bundle\") pod \"memcached-0\" (UID: 
\"b3c26555-4046-499e-96c9-5a83b8322d8e\") " pod="openstack/memcached-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.649078 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcmcs\" (UniqueName: \"kubernetes.io/projected/b3c26555-4046-499e-96c9-5a83b8322d8e-kube-api-access-zcmcs\") pod \"memcached-0\" (UID: \"b3c26555-4046-499e-96c9-5a83b8322d8e\") " pod="openstack/memcached-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.742907 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3cef8912-5c5c-4656-987d-fe4dc9f1045a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cef8912-5c5c-4656-987d-fe4dc9f1045a\") pod \"openstack-cell1-galera-0\" (UID: \"f37f41b4-3b56-45f9-a368-0f772bcf3002\") " pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.767588 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c26555-4046-499e-96c9-5a83b8322d8e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b3c26555-4046-499e-96c9-5a83b8322d8e\") " pod="openstack/memcached-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.767659 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcmcs\" (UniqueName: \"kubernetes.io/projected/b3c26555-4046-499e-96c9-5a83b8322d8e-kube-api-access-zcmcs\") pod \"memcached-0\" (UID: \"b3c26555-4046-499e-96c9-5a83b8322d8e\") " pod="openstack/memcached-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.773797 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b3c26555-4046-499e-96c9-5a83b8322d8e-kolla-config\") pod \"memcached-0\" (UID: \"b3c26555-4046-499e-96c9-5a83b8322d8e\") " pod="openstack/memcached-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.774002 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3c26555-4046-499e-96c9-5a83b8322d8e-config-data\") pod \"memcached-0\" (UID: \"b3c26555-4046-499e-96c9-5a83b8322d8e\") " pod="openstack/memcached-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.774064 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c26555-4046-499e-96c9-5a83b8322d8e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b3c26555-4046-499e-96c9-5a83b8322d8e\") " pod="openstack/memcached-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.774927 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b3c26555-4046-499e-96c9-5a83b8322d8e-kolla-config\") pod \"memcached-0\" (UID: \"b3c26555-4046-499e-96c9-5a83b8322d8e\") " pod="openstack/memcached-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.779215 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3c26555-4046-499e-96c9-5a83b8322d8e-config-data\") pod \"memcached-0\" (UID: \"b3c26555-4046-499e-96c9-5a83b8322d8e\") " pod="openstack/memcached-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.799459 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b3c26555-4046-499e-96c9-5a83b8322d8e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b3c26555-4046-499e-96c9-5a83b8322d8e\") " pod="openstack/memcached-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.814770 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcmcs\" (UniqueName: \"kubernetes.io/projected/b3c26555-4046-499e-96c9-5a83b8322d8e-kube-api-access-zcmcs\") pod \"memcached-0\" (UID: \"b3c26555-4046-499e-96c9-5a83b8322d8e\") " pod="openstack/memcached-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.843674 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c26555-4046-499e-96c9-5a83b8322d8e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b3c26555-4046-499e-96c9-5a83b8322d8e\") " pod="openstack/memcached-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.849533 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 31 09:21:38 crc kubenswrapper[4830]: I0131 09:21:38.930676 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 31 09:21:39 crc kubenswrapper[4830]: I0131 09:21:39.033971 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 31 09:21:41 crc kubenswrapper[4830]: I0131 09:21:39.985996 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2ca5d2f1-673e-4173-848a-8d32d33b8bcc","Type":"ContainerStarted","Data":"e5ddda32f8d900a37e0352ddb3ccf52b576643ed5d2d7ed873ba8f1584672daa"} Jan 31 09:21:41 crc kubenswrapper[4830]: I0131 09:21:40.784454 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 09:21:41 crc kubenswrapper[4830]: I0131 09:21:40.786488 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 09:21:41 crc kubenswrapper[4830]: I0131 09:21:40.805814 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-jwt28" Jan 31 09:21:41 crc kubenswrapper[4830]: I0131 09:21:40.835439 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 09:21:41 crc kubenswrapper[4830]: I0131 09:21:40.875359 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx6cz\" (UniqueName: \"kubernetes.io/projected/5359b6c7-375f-4424-bb43-f4b2a4d40329-kube-api-access-rx6cz\") pod \"kube-state-metrics-0\" (UID: \"5359b6c7-375f-4424-bb43-f4b2a4d40329\") " pod="openstack/kube-state-metrics-0" Jan 31 09:21:41 crc kubenswrapper[4830]: I0131 09:21:40.982180 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx6cz\" (UniqueName: \"kubernetes.io/projected/5359b6c7-375f-4424-bb43-f4b2a4d40329-kube-api-access-rx6cz\") pod \"kube-state-metrics-0\" (UID: \"5359b6c7-375f-4424-bb43-f4b2a4d40329\") " pod="openstack/kube-state-metrics-0" Jan 31 09:21:41 crc kubenswrapper[4830]: I0131 09:21:41.036078 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx6cz\" (UniqueName: \"kubernetes.io/projected/5359b6c7-375f-4424-bb43-f4b2a4d40329-kube-api-access-rx6cz\") pod \"kube-state-metrics-0\" (UID: \"5359b6c7-375f-4424-bb43-f4b2a4d40329\") " pod="openstack/kube-state-metrics-0" Jan 31 09:21:41 crc kubenswrapper[4830]: I0131 09:21:41.151255 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 09:21:41 crc kubenswrapper[4830]: I0131 09:21:41.619043 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-swjf6"] Jan 31 09:21:41 crc kubenswrapper[4830]: I0131 09:21:41.636315 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-swjf6" Jan 31 09:21:41 crc kubenswrapper[4830]: I0131 09:21:41.646661 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-4kcbd" Jan 31 09:21:41 crc kubenswrapper[4830]: I0131 09:21:41.646933 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Jan 31 09:21:41 crc kubenswrapper[4830]: I0131 09:21:41.804926 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 31 09:21:41 crc kubenswrapper[4830]: I0131 09:21:41.815436 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgwfr\" (UniqueName: \"kubernetes.io/projected/51e241ad-2d92-41fb-a218-1a14cd40534d-kube-api-access-hgwfr\") pod \"observability-ui-dashboards-66cbf594b5-swjf6\" (UID: \"51e241ad-2d92-41fb-a218-1a14cd40534d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-swjf6" Jan 31 09:21:41 crc kubenswrapper[4830]: I0131 09:21:41.815813 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51e241ad-2d92-41fb-a218-1a14cd40534d-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-swjf6\" (UID: \"51e241ad-2d92-41fb-a218-1a14cd40534d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-swjf6" Jan 31 09:21:41 crc kubenswrapper[4830]: I0131 09:21:41.919182 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgwfr\" (UniqueName: \"kubernetes.io/projected/51e241ad-2d92-41fb-a218-1a14cd40534d-kube-api-access-hgwfr\") pod \"observability-ui-dashboards-66cbf594b5-swjf6\" (UID: \"51e241ad-2d92-41fb-a218-1a14cd40534d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-swjf6" Jan 31 09:21:41 crc kubenswrapper[4830]: I0131 09:21:41.919533 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51e241ad-2d92-41fb-a218-1a14cd40534d-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-swjf6\" (UID: \"51e241ad-2d92-41fb-a218-1a14cd40534d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-swjf6" Jan 31 09:21:41 crc kubenswrapper[4830]: E0131 09:21:41.920478 4830 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Jan 31 09:21:41 crc kubenswrapper[4830]: E0131 09:21:41.920540 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51e241ad-2d92-41fb-a218-1a14cd40534d-serving-cert podName:51e241ad-2d92-41fb-a218-1a14cd40534d nodeName:}" failed. No retries permitted until 2026-01-31 09:21:42.420518287 +0000 UTC m=+1246.913880729 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/51e241ad-2d92-41fb-a218-1a14cd40534d-serving-cert") pod "observability-ui-dashboards-66cbf594b5-swjf6" (UID: "51e241ad-2d92-41fb-a218-1a14cd40534d") : secret "observability-ui-dashboards" not found Jan 31 09:21:41 crc kubenswrapper[4830]: I0131 09:21:41.920692 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-swjf6"] Jan 31 09:21:41 crc kubenswrapper[4830]: I0131 09:21:41.967174 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgwfr\" (UniqueName: \"kubernetes.io/projected/51e241ad-2d92-41fb-a218-1a14cd40534d-kube-api-access-hgwfr\") pod \"observability-ui-dashboards-66cbf594b5-swjf6\" (UID: \"51e241ad-2d92-41fb-a218-1a14cd40534d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-swjf6" Jan 31 09:21:41 crc kubenswrapper[4830]: I0131 09:21:41.967266 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.059343 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-bbcf59d54-qmgsn"] Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.061507 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.074813 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.078517 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.092353 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f37f41b4-3b56-45f9-a368-0f772bcf3002","Type":"ContainerStarted","Data":"66cd7b63c3391cc393d1938f0e39f4c7ea8779ef5b7ade2781ea9146f9ae8cc4"} Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.094109 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-tcqxf" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.094376 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.094583 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.094709 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.094164 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.094058 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.095019 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.099568 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bbcf59d54-qmgsn"] Jan 31 
09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.100011 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b3c26555-4046-499e-96c9-5a83b8322d8e","Type":"ContainerStarted","Data":"3652034715d21d9f1917376f585589ca7ee0e89eb52be81010194771d0945f84"} Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.111156 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.130979 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.245496 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/68109d40-9af0-4c37-bf02-7b4744dbab5f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.245589 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/68109d40-9af0-4c37-bf02-7b4744dbab5f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.245634 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/68109d40-9af0-4c37-bf02-7b4744dbab5f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.245696 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7635d675-22a8-4009-89b3-dfdef75167b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7635d675-22a8-4009-89b3-dfdef75167b6\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.245752 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/68109d40-9af0-4c37-bf02-7b4744dbab5f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.245793 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/68109d40-9af0-4c37-bf02-7b4744dbab5f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.245871 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/afe486bd-6c62-42d6-ac04-9c2bb21204d7-console-serving-cert\") pod \"console-bbcf59d54-qmgsn\" (UID: 
\"afe486bd-6c62-42d6-ac04-9c2bb21204d7\") " pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.245893 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/afe486bd-6c62-42d6-ac04-9c2bb21204d7-console-oauth-config\") pod \"console-bbcf59d54-qmgsn\" (UID: \"afe486bd-6c62-42d6-ac04-9c2bb21204d7\") " pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.245914 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afe486bd-6c62-42d6-ac04-9c2bb21204d7-trusted-ca-bundle\") pod \"console-bbcf59d54-qmgsn\" (UID: \"afe486bd-6c62-42d6-ac04-9c2bb21204d7\") " pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.245942 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s6xw\" (UniqueName: \"kubernetes.io/projected/afe486bd-6c62-42d6-ac04-9c2bb21204d7-kube-api-access-4s6xw\") pod \"console-bbcf59d54-qmgsn\" (UID: \"afe486bd-6c62-42d6-ac04-9c2bb21204d7\") " pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.245985 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/afe486bd-6c62-42d6-ac04-9c2bb21204d7-console-config\") pod \"console-bbcf59d54-qmgsn\" (UID: \"afe486bd-6c62-42d6-ac04-9c2bb21204d7\") " pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.246020 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/68109d40-9af0-4c37-bf02-7b4744dbab5f-config\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.246052 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/afe486bd-6c62-42d6-ac04-9c2bb21204d7-oauth-serving-cert\") pod \"console-bbcf59d54-qmgsn\" (UID: \"afe486bd-6c62-42d6-ac04-9c2bb21204d7\") " pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.246095 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rk7p\" (UniqueName: \"kubernetes.io/projected/68109d40-9af0-4c37-bf02-7b4744dbab5f-kube-api-access-5rk7p\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.246167 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/68109d40-9af0-4c37-bf02-7b4744dbab5f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.246199 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/68109d40-9af0-4c37-bf02-7b4744dbab5f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.246225 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/afe486bd-6c62-42d6-ac04-9c2bb21204d7-service-ca\") pod \"console-bbcf59d54-qmgsn\" (UID: \"afe486bd-6c62-42d6-ac04-9c2bb21204d7\") " pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.349809 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/68109d40-9af0-4c37-bf02-7b4744dbab5f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.349894 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/68109d40-9af0-4c37-bf02-7b4744dbab5f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.349966 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/afe486bd-6c62-42d6-ac04-9c2bb21204d7-console-serving-cert\") pod \"console-bbcf59d54-qmgsn\" (UID: \"afe486bd-6c62-42d6-ac04-9c2bb21204d7\") " pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.349989 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/afe486bd-6c62-42d6-ac04-9c2bb21204d7-console-oauth-config\") pod \"console-bbcf59d54-qmgsn\" (UID: \"afe486bd-6c62-42d6-ac04-9c2bb21204d7\") " pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.350007 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afe486bd-6c62-42d6-ac04-9c2bb21204d7-trusted-ca-bundle\") pod \"console-bbcf59d54-qmgsn\" (UID: \"afe486bd-6c62-42d6-ac04-9c2bb21204d7\") " pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.350037 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s6xw\" (UniqueName: \"kubernetes.io/projected/afe486bd-6c62-42d6-ac04-9c2bb21204d7-kube-api-access-4s6xw\") pod \"console-bbcf59d54-qmgsn\" (UID: \"afe486bd-6c62-42d6-ac04-9c2bb21204d7\") " pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.350085 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/afe486bd-6c62-42d6-ac04-9c2bb21204d7-console-config\") pod \"console-bbcf59d54-qmgsn\" (UID: \"afe486bd-6c62-42d6-ac04-9c2bb21204d7\") " pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:42 crc 
kubenswrapper[4830]: I0131 09:21:42.350125 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/68109d40-9af0-4c37-bf02-7b4744dbab5f-config\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.350157 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/afe486bd-6c62-42d6-ac04-9c2bb21204d7-oauth-serving-cert\") pod \"console-bbcf59d54-qmgsn\" (UID: \"afe486bd-6c62-42d6-ac04-9c2bb21204d7\") " pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.350182 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rk7p\" (UniqueName: \"kubernetes.io/projected/68109d40-9af0-4c37-bf02-7b4744dbab5f-kube-api-access-5rk7p\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.350232 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/68109d40-9af0-4c37-bf02-7b4744dbab5f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.350255 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/68109d40-9af0-4c37-bf02-7b4744dbab5f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.350277 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/afe486bd-6c62-42d6-ac04-9c2bb21204d7-service-ca\") pod \"console-bbcf59d54-qmgsn\" (UID: \"afe486bd-6c62-42d6-ac04-9c2bb21204d7\") " pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.350312 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/68109d40-9af0-4c37-bf02-7b4744dbab5f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.350349 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/68109d40-9af0-4c37-bf02-7b4744dbab5f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.350392 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/68109d40-9af0-4c37-bf02-7b4744dbab5f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc 
kubenswrapper[4830]: I0131 09:21:42.350437 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7635d675-22a8-4009-89b3-dfdef75167b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7635d675-22a8-4009-89b3-dfdef75167b6\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.351171 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/68109d40-9af0-4c37-bf02-7b4744dbab5f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.352015 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/afe486bd-6c62-42d6-ac04-9c2bb21204d7-console-config\") pod \"console-bbcf59d54-qmgsn\" (UID: \"afe486bd-6c62-42d6-ac04-9c2bb21204d7\") " pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.353870 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afe486bd-6c62-42d6-ac04-9c2bb21204d7-trusted-ca-bundle\") pod \"console-bbcf59d54-qmgsn\" (UID: \"afe486bd-6c62-42d6-ac04-9c2bb21204d7\") " pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.355213 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/afe486bd-6c62-42d6-ac04-9c2bb21204d7-service-ca\") pod \"console-bbcf59d54-qmgsn\" (UID: \"afe486bd-6c62-42d6-ac04-9c2bb21204d7\") " pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.358973 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/68109d40-9af0-4c37-bf02-7b4744dbab5f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.373599 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/afe486bd-6c62-42d6-ac04-9c2bb21204d7-console-serving-cert\") pod \"console-bbcf59d54-qmgsn\" (UID: \"afe486bd-6c62-42d6-ac04-9c2bb21204d7\") " pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.374168 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/68109d40-9af0-4c37-bf02-7b4744dbab5f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.376121 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/68109d40-9af0-4c37-bf02-7b4744dbab5f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: 
\"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.378091 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/afe486bd-6c62-42d6-ac04-9c2bb21204d7-oauth-serving-cert\") pod \"console-bbcf59d54-qmgsn\" (UID: \"afe486bd-6c62-42d6-ac04-9c2bb21204d7\") " pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.383815 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/68109d40-9af0-4c37-bf02-7b4744dbab5f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.385074 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/68109d40-9af0-4c37-bf02-7b4744dbab5f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.385379 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/afe486bd-6c62-42d6-ac04-9c2bb21204d7-console-oauth-config\") pod \"console-bbcf59d54-qmgsn\" (UID: \"afe486bd-6c62-42d6-ac04-9c2bb21204d7\") " pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.398692 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/68109d40-9af0-4c37-bf02-7b4744dbab5f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.404439 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/68109d40-9af0-4c37-bf02-7b4744dbab5f-config\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.419150 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s6xw\" (UniqueName: \"kubernetes.io/projected/afe486bd-6c62-42d6-ac04-9c2bb21204d7-kube-api-access-4s6xw\") pod \"console-bbcf59d54-qmgsn\" (UID: \"afe486bd-6c62-42d6-ac04-9c2bb21204d7\") " pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.426262 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.431769 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rk7p\" (UniqueName: \"kubernetes.io/projected/68109d40-9af0-4c37-bf02-7b4744dbab5f-kube-api-access-5rk7p\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.462978 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51e241ad-2d92-41fb-a218-1a14cd40534d-serving-cert\") pod 
\"observability-ui-dashboards-66cbf594b5-swjf6\" (UID: \"51e241ad-2d92-41fb-a218-1a14cd40534d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-swjf6" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.500671 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51e241ad-2d92-41fb-a218-1a14cd40534d-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-swjf6\" (UID: \"51e241ad-2d92-41fb-a218-1a14cd40534d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-swjf6" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.501084 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.541516 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.541570 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7635d675-22a8-4009-89b3-dfdef75167b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7635d675-22a8-4009-89b3-dfdef75167b6\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5aaf80fa6ac263624dc34aeab406fa0928a0afca3643198b3250e21367e491fb/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.735779 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7635d675-22a8-4009-89b3-dfdef75167b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7635d675-22a8-4009-89b3-dfdef75167b6\") pod \"prometheus-metric-storage-0\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.746528 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-swjf6" Jan 31 09:21:42 crc kubenswrapper[4830]: I0131 09:21:42.822227 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 31 09:21:43 crc kubenswrapper[4830]: I0131 09:21:43.223665 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5359b6c7-375f-4424-bb43-f4b2a4d40329","Type":"ContainerStarted","Data":"37639203ab4b8d83607b483fc8dabad84364def21225bc6ef913ae771aaddddd"} Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.073331 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ps27t"] Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.076668 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ps27t" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.081386 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.081616 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-49dt9" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.081900 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.086796 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ps27t"] Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.118239 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73-combined-ca-bundle\") pod \"ovn-controller-ps27t\" (UID: \"dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73\") " pod="openstack/ovn-controller-ps27t" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.118342 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73-var-run\") pod \"ovn-controller-ps27t\" (UID: \"dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73\") " pod="openstack/ovn-controller-ps27t" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.118392 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73-scripts\") pod \"ovn-controller-ps27t\" (UID: \"dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73\") " pod="openstack/ovn-controller-ps27t" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.118435 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73-var-log-ovn\") pod \"ovn-controller-ps27t\" (UID: \"dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73\") " pod="openstack/ovn-controller-ps27t" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.118465 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73-ovn-controller-tls-certs\") pod \"ovn-controller-ps27t\" (UID: \"dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73\") " pod="openstack/ovn-controller-ps27t" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.118516 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x2z7\" (UniqueName: \"kubernetes.io/projected/dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73-kube-api-access-4x2z7\") pod \"ovn-controller-ps27t\" (UID: \"dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73\") " pod="openstack/ovn-controller-ps27t" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.118648 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73-var-run-ovn\") pod \"ovn-controller-ps27t\" (UID: \"dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73\") " pod="openstack/ovn-controller-ps27t" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.121851 4830 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-gk8dv"] Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.130236 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-gk8dv" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.145375 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-gk8dv"] Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.234599 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73-var-run-ovn\") pod \"ovn-controller-ps27t\" (UID: \"dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73\") " pod="openstack/ovn-controller-ps27t" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.235262 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1-etc-ovs\") pod \"ovn-controller-ovs-gk8dv\" (UID: \"e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1\") " pod="openstack/ovn-controller-ovs-gk8dv" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.235322 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73-combined-ca-bundle\") pod \"ovn-controller-ps27t\" (UID: \"dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73\") " pod="openstack/ovn-controller-ps27t" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.235442 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73-var-run\") pod \"ovn-controller-ps27t\" (UID: \"dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73\") " pod="openstack/ovn-controller-ps27t" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.235556 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73-scripts\") pod \"ovn-controller-ps27t\" (UID: \"dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73\") " pod="openstack/ovn-controller-ps27t" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.235661 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73-var-log-ovn\") pod \"ovn-controller-ps27t\" (UID: \"dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73\") " pod="openstack/ovn-controller-ps27t" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.236227 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73-var-run-ovn\") pod \"ovn-controller-ps27t\" (UID: \"dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73\") " pod="openstack/ovn-controller-ps27t" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.235713 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73-ovn-controller-tls-certs\") pod \"ovn-controller-ps27t\" (UID: \"dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73\") " pod="openstack/ovn-controller-ps27t" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.236612 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4x2z7\" (UniqueName: \"kubernetes.io/projected/dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73-kube-api-access-4x2z7\") pod \"ovn-controller-ps27t\" (UID: \"dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73\") " pod="openstack/ovn-controller-ps27t" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.236685 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1-var-log\") pod \"ovn-controller-ovs-gk8dv\" (UID: \"e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1\") " pod="openstack/ovn-controller-ovs-gk8dv" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.236786 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72wxr\" (UniqueName: \"kubernetes.io/projected/e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1-kube-api-access-72wxr\") pod \"ovn-controller-ovs-gk8dv\" (UID: \"e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1\") " pod="openstack/ovn-controller-ovs-gk8dv" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.236856 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1-var-lib\") pod \"ovn-controller-ovs-gk8dv\" (UID: \"e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1\") " pod="openstack/ovn-controller-ovs-gk8dv" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.236986 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1-var-run\") pod \"ovn-controller-ovs-gk8dv\" (UID: \"e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1\") " pod="openstack/ovn-controller-ovs-gk8dv" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.237016 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1-scripts\") pod \"ovn-controller-ovs-gk8dv\" (UID: \"e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1\") " pod="openstack/ovn-controller-ovs-gk8dv" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.238258 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73-var-run\") pod \"ovn-controller-ps27t\" (UID: \"dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73\") " pod="openstack/ovn-controller-ps27t" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.238527 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-swjf6"] Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.244230 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73-var-log-ovn\") pod \"ovn-controller-ps27t\" (UID: \"dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73\") " pod="openstack/ovn-controller-ps27t" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.247258 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73-scripts\") pod \"ovn-controller-ps27t\" (UID: \"dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73\") " pod="openstack/ovn-controller-ps27t" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.266515 4830 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73-combined-ca-bundle\") pod \"ovn-controller-ps27t\" (UID: \"dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73\") " pod="openstack/ovn-controller-ps27t" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.269935 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73-ovn-controller-tls-certs\") pod \"ovn-controller-ps27t\" (UID: \"dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73\") " pod="openstack/ovn-controller-ps27t" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.282514 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x2z7\" (UniqueName: \"kubernetes.io/projected/dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73-kube-api-access-4x2z7\") pod \"ovn-controller-ps27t\" (UID: \"dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73\") " pod="openstack/ovn-controller-ps27t" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.339211 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1-etc-ovs\") pod \"ovn-controller-ovs-gk8dv\" (UID: \"e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1\") " pod="openstack/ovn-controller-ovs-gk8dv" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.339519 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1-etc-ovs\") pod \"ovn-controller-ovs-gk8dv\" (UID: \"e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1\") " pod="openstack/ovn-controller-ovs-gk8dv" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.342350 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1-var-log\") pod \"ovn-controller-ovs-gk8dv\" (UID: \"e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1\") " pod="openstack/ovn-controller-ovs-gk8dv" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.342470 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72wxr\" (UniqueName: \"kubernetes.io/projected/e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1-kube-api-access-72wxr\") pod \"ovn-controller-ovs-gk8dv\" (UID: \"e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1\") " pod="openstack/ovn-controller-ovs-gk8dv" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.342545 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1-var-lib\") pod \"ovn-controller-ovs-gk8dv\" (UID: \"e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1\") " pod="openstack/ovn-controller-ovs-gk8dv" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.342697 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1-var-run\") pod \"ovn-controller-ovs-gk8dv\" (UID: \"e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1\") " pod="openstack/ovn-controller-ovs-gk8dv" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.342742 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1-scripts\") pod \"ovn-controller-ovs-gk8dv\" (UID: 
\"e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1\") " pod="openstack/ovn-controller-ovs-gk8dv" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.344048 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1-var-log\") pod \"ovn-controller-ovs-gk8dv\" (UID: \"e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1\") " pod="openstack/ovn-controller-ovs-gk8dv" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.344200 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1-var-run\") pod \"ovn-controller-ovs-gk8dv\" (UID: \"e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1\") " pod="openstack/ovn-controller-ovs-gk8dv" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.344401 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1-var-lib\") pod \"ovn-controller-ovs-gk8dv\" (UID: \"e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1\") " pod="openstack/ovn-controller-ovs-gk8dv" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.348256 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.352942 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.353012 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.356052 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1-scripts\") pod \"ovn-controller-ovs-gk8dv\" (UID: \"e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1\") " pod="openstack/ovn-controller-ovs-gk8dv" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.383183 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72wxr\" (UniqueName: \"kubernetes.io/projected/e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1-kube-api-access-72wxr\") pod \"ovn-controller-ovs-gk8dv\" (UID: \"e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1\") " pod="openstack/ovn-controller-ovs-gk8dv" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.444996 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bbcf59d54-qmgsn"] Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.453983 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ps27t" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.495008 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-gk8dv" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.533379 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.535767 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.541487 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-cd2gs" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.541810 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.542004 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.542278 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.542444 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.545657 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.650713 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r6rj\" (UniqueName: \"kubernetes.io/projected/6f46adde-a4fc-42fc-aa3b-de8154dbc99c-kube-api-access-4r6rj\") pod \"ovsdbserver-nb-0\" (UID: \"6f46adde-a4fc-42fc-aa3b-de8154dbc99c\") " pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.650862 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f46adde-a4fc-42fc-aa3b-de8154dbc99c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6f46adde-a4fc-42fc-aa3b-de8154dbc99c\") " pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.650916 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f46adde-a4fc-42fc-aa3b-de8154dbc99c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6f46adde-a4fc-42fc-aa3b-de8154dbc99c\") " pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.650964 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f46adde-a4fc-42fc-aa3b-de8154dbc99c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6f46adde-a4fc-42fc-aa3b-de8154dbc99c\") " pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.651003 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6f46adde-a4fc-42fc-aa3b-de8154dbc99c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6f46adde-a4fc-42fc-aa3b-de8154dbc99c\") " pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.651045 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/6f46adde-a4fc-42fc-aa3b-de8154dbc99c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6f46adde-a4fc-42fc-aa3b-de8154dbc99c\") " pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.651095 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-50d5fa3b-478e-46b1-9098-a76ca0286e88\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-50d5fa3b-478e-46b1-9098-a76ca0286e88\") pod \"ovsdbserver-nb-0\" (UID: \"6f46adde-a4fc-42fc-aa3b-de8154dbc99c\") " pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.651144 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f46adde-a4fc-42fc-aa3b-de8154dbc99c-config\") pod \"ovsdbserver-nb-0\" (UID: \"6f46adde-a4fc-42fc-aa3b-de8154dbc99c\") " pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.754297 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f46adde-a4fc-42fc-aa3b-de8154dbc99c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6f46adde-a4fc-42fc-aa3b-de8154dbc99c\") " pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.754355 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6f46adde-a4fc-42fc-aa3b-de8154dbc99c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6f46adde-a4fc-42fc-aa3b-de8154dbc99c\") " pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.754390 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f46adde-a4fc-42fc-aa3b-de8154dbc99c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6f46adde-a4fc-42fc-aa3b-de8154dbc99c\") " pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.754430 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-50d5fa3b-478e-46b1-9098-a76ca0286e88\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-50d5fa3b-478e-46b1-9098-a76ca0286e88\") pod \"ovsdbserver-nb-0\" (UID: \"6f46adde-a4fc-42fc-aa3b-de8154dbc99c\") " pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.754466 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f46adde-a4fc-42fc-aa3b-de8154dbc99c-config\") pod \"ovsdbserver-nb-0\" (UID: \"6f46adde-a4fc-42fc-aa3b-de8154dbc99c\") " pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.754565 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r6rj\" (UniqueName: \"kubernetes.io/projected/6f46adde-a4fc-42fc-aa3b-de8154dbc99c-kube-api-access-4r6rj\") pod \"ovsdbserver-nb-0\" (UID: \"6f46adde-a4fc-42fc-aa3b-de8154dbc99c\") " pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.754606 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f46adde-a4fc-42fc-aa3b-de8154dbc99c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6f46adde-a4fc-42fc-aa3b-de8154dbc99c\") " 
pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.754636 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f46adde-a4fc-42fc-aa3b-de8154dbc99c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6f46adde-a4fc-42fc-aa3b-de8154dbc99c\") " pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.755847 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f46adde-a4fc-42fc-aa3b-de8154dbc99c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6f46adde-a4fc-42fc-aa3b-de8154dbc99c\") " pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.757374 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6f46adde-a4fc-42fc-aa3b-de8154dbc99c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6f46adde-a4fc-42fc-aa3b-de8154dbc99c\") " pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.759004 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f46adde-a4fc-42fc-aa3b-de8154dbc99c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6f46adde-a4fc-42fc-aa3b-de8154dbc99c\") " pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.764778 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f46adde-a4fc-42fc-aa3b-de8154dbc99c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6f46adde-a4fc-42fc-aa3b-de8154dbc99c\") " pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.767301 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.767340 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-50d5fa3b-478e-46b1-9098-a76ca0286e88\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-50d5fa3b-478e-46b1-9098-a76ca0286e88\") pod \"ovsdbserver-nb-0\" (UID: \"6f46adde-a4fc-42fc-aa3b-de8154dbc99c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/997f2d49507e0e19b01b129c94678888eaa57c55658cf6efbe94dbea8ce88b94/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.778320 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f46adde-a4fc-42fc-aa3b-de8154dbc99c-config\") pod \"ovsdbserver-nb-0\" (UID: \"6f46adde-a4fc-42fc-aa3b-de8154dbc99c\") " pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.779365 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f46adde-a4fc-42fc-aa3b-de8154dbc99c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6f46adde-a4fc-42fc-aa3b-de8154dbc99c\") " pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.799044 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r6rj\" (UniqueName: \"kubernetes.io/projected/6f46adde-a4fc-42fc-aa3b-de8154dbc99c-kube-api-access-4r6rj\") pod \"ovsdbserver-nb-0\" (UID: \"6f46adde-a4fc-42fc-aa3b-de8154dbc99c\") " pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.827137 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-50d5fa3b-478e-46b1-9098-a76ca0286e88\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-50d5fa3b-478e-46b1-9098-a76ca0286e88\") pod \"ovsdbserver-nb-0\" (UID: \"6f46adde-a4fc-42fc-aa3b-de8154dbc99c\") " pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:44 crc kubenswrapper[4830]: I0131 09:21:44.871891 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 31 09:21:47 crc kubenswrapper[4830]: I0131 09:21:47.963957 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 09:21:47 crc kubenswrapper[4830]: I0131 09:21:47.967330 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:47 crc kubenswrapper[4830]: I0131 09:21:47.972665 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 31 09:21:47 crc kubenswrapper[4830]: I0131 09:21:47.972892 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-h5lth" Jan 31 09:21:47 crc kubenswrapper[4830]: I0131 09:21:47.973101 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 31 09:21:47 crc kubenswrapper[4830]: I0131 09:21:47.973246 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 31 09:21:47 crc kubenswrapper[4830]: I0131 09:21:47.979834 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.166275 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e47f665d-2a2a-464a-b6a3-e255f1440eda-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e47f665d-2a2a-464a-b6a3-e255f1440eda\") " pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.166384 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e47f665d-2a2a-464a-b6a3-e255f1440eda-config\") pod \"ovsdbserver-sb-0\" (UID: \"e47f665d-2a2a-464a-b6a3-e255f1440eda\") " pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.166508 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fcb4789e-6574-4d5b-a931-62174c66ac9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fcb4789e-6574-4d5b-a931-62174c66ac9f\") pod \"ovsdbserver-sb-0\" (UID: \"e47f665d-2a2a-464a-b6a3-e255f1440eda\") " pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.166534 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xc4s\" (UniqueName: \"kubernetes.io/projected/e47f665d-2a2a-464a-b6a3-e255f1440eda-kube-api-access-6xc4s\") pod \"ovsdbserver-sb-0\" (UID: \"e47f665d-2a2a-464a-b6a3-e255f1440eda\") " pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.166605 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e47f665d-2a2a-464a-b6a3-e255f1440eda-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e47f665d-2a2a-464a-b6a3-e255f1440eda\") " pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.166628 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e47f665d-2a2a-464a-b6a3-e255f1440eda-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e47f665d-2a2a-464a-b6a3-e255f1440eda\") " pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.166702 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e47f665d-2a2a-464a-b6a3-e255f1440eda-scripts\") pod 
\"ovsdbserver-sb-0\" (UID: \"e47f665d-2a2a-464a-b6a3-e255f1440eda\") " pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.166759 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e47f665d-2a2a-464a-b6a3-e255f1440eda-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e47f665d-2a2a-464a-b6a3-e255f1440eda\") " pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.274157 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e47f665d-2a2a-464a-b6a3-e255f1440eda-config\") pod \"ovsdbserver-sb-0\" (UID: \"e47f665d-2a2a-464a-b6a3-e255f1440eda\") " pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.274212 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fcb4789e-6574-4d5b-a931-62174c66ac9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fcb4789e-6574-4d5b-a931-62174c66ac9f\") pod \"ovsdbserver-sb-0\" (UID: \"e47f665d-2a2a-464a-b6a3-e255f1440eda\") " pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.274232 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xc4s\" (UniqueName: \"kubernetes.io/projected/e47f665d-2a2a-464a-b6a3-e255f1440eda-kube-api-access-6xc4s\") pod \"ovsdbserver-sb-0\" (UID: \"e47f665d-2a2a-464a-b6a3-e255f1440eda\") " pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.274303 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e47f665d-2a2a-464a-b6a3-e255f1440eda-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e47f665d-2a2a-464a-b6a3-e255f1440eda\") " pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.274335 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e47f665d-2a2a-464a-b6a3-e255f1440eda-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e47f665d-2a2a-464a-b6a3-e255f1440eda\") " pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.274375 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e47f665d-2a2a-464a-b6a3-e255f1440eda-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e47f665d-2a2a-464a-b6a3-e255f1440eda\") " pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.274400 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e47f665d-2a2a-464a-b6a3-e255f1440eda-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e47f665d-2a2a-464a-b6a3-e255f1440eda\") " pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.274470 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e47f665d-2a2a-464a-b6a3-e255f1440eda-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e47f665d-2a2a-464a-b6a3-e255f1440eda\") " pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.278403 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e47f665d-2a2a-464a-b6a3-e255f1440eda-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e47f665d-2a2a-464a-b6a3-e255f1440eda\") " pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.279708 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e47f665d-2a2a-464a-b6a3-e255f1440eda-config\") pod \"ovsdbserver-sb-0\" (UID: \"e47f665d-2a2a-464a-b6a3-e255f1440eda\") " pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.328902 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e47f665d-2a2a-464a-b6a3-e255f1440eda-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e47f665d-2a2a-464a-b6a3-e255f1440eda\") " pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.329502 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e47f665d-2a2a-464a-b6a3-e255f1440eda-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e47f665d-2a2a-464a-b6a3-e255f1440eda\") " pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.334774 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e47f665d-2a2a-464a-b6a3-e255f1440eda-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e47f665d-2a2a-464a-b6a3-e255f1440eda\") " pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.340159 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e47f665d-2a2a-464a-b6a3-e255f1440eda-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e47f665d-2a2a-464a-b6a3-e255f1440eda\") " pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.350780 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xc4s\" (UniqueName: \"kubernetes.io/projected/e47f665d-2a2a-464a-b6a3-e255f1440eda-kube-api-access-6xc4s\") pod \"ovsdbserver-sb-0\" (UID: \"e47f665d-2a2a-464a-b6a3-e255f1440eda\") " pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.472889 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.473298 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fcb4789e-6574-4d5b-a931-62174c66ac9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fcb4789e-6574-4d5b-a931-62174c66ac9f\") pod \"ovsdbserver-sb-0\" (UID: \"e47f665d-2a2a-464a-b6a3-e255f1440eda\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1827efd08f7f1b48255a2f9e224ed1ac838699b1308164645c9b643157841c44/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:48 crc kubenswrapper[4830]: W0131 09:21:48.478034 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafe486bd_6c62_42d6_ac04_9c2bb21204d7.slice/crio-f549ba0312ce22b0db24a435bd7862b98c516b5893cc908f2421fb7be512bce6 WatchSource:0}: Error finding container f549ba0312ce22b0db24a435bd7862b98c516b5893cc908f2421fb7be512bce6: Status 404 returned error can't find the container with id f549ba0312ce22b0db24a435bd7862b98c516b5893cc908f2421fb7be512bce6 Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.478128 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"68109d40-9af0-4c37-bf02-7b4744dbab5f","Type":"ContainerStarted","Data":"6b8928a60366130aa9d4a34de626cddf20c7c7a4f5e9dd68404d2294d1a938d4"} Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.499082 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-swjf6" event={"ID":"51e241ad-2d92-41fb-a218-1a14cd40534d","Type":"ContainerStarted","Data":"2ca0056471aab2c47c377785786b0871f908c73f503a1362ac43343c0531ab00"} Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.536454 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fcb4789e-6574-4d5b-a931-62174c66ac9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fcb4789e-6574-4d5b-a931-62174c66ac9f\") pod \"ovsdbserver-sb-0\" (UID: \"e47f665d-2a2a-464a-b6a3-e255f1440eda\") " pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:48 crc kubenswrapper[4830]: I0131 09:21:48.786685 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 31 09:21:49 crc kubenswrapper[4830]: I0131 09:21:49.435174 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ps27t"] Jan 31 09:21:49 crc kubenswrapper[4830]: I0131 09:21:49.520709 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bbcf59d54-qmgsn" event={"ID":"afe486bd-6c62-42d6-ac04-9c2bb21204d7","Type":"ContainerStarted","Data":"1a17af186cd49559857c4ee4b13ab37df2f7b3afdf6c5f13f5fe7127854f599d"} Jan 31 09:21:49 crc kubenswrapper[4830]: I0131 09:21:49.520778 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bbcf59d54-qmgsn" event={"ID":"afe486bd-6c62-42d6-ac04-9c2bb21204d7","Type":"ContainerStarted","Data":"f549ba0312ce22b0db24a435bd7862b98c516b5893cc908f2421fb7be512bce6"} Jan 31 09:21:49 crc kubenswrapper[4830]: I0131 09:21:49.552475 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-bbcf59d54-qmgsn" podStartSLOduration=8.552454525 podStartE2EDuration="8.552454525s" podCreationTimestamp="2026-01-31 09:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:21:49.549414018 +0000 UTC m=+1254.042776460" watchObservedRunningTime="2026-01-31 09:21:49.552454525 +0000 UTC m=+1254.045816967" Jan 31 09:21:50 crc kubenswrapper[4830]: I0131 09:21:50.054203 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-gk8dv"] Jan 31 09:21:50 crc kubenswrapper[4830]: I0131 09:21:50.670156 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 09:21:52 crc kubenswrapper[4830]: I0131 09:21:52.502568 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:52 crc kubenswrapper[4830]: I0131 09:21:52.503159 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:52 crc kubenswrapper[4830]: I0131 09:21:52.508450 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:52 crc kubenswrapper[4830]: I0131 09:21:52.570191 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 09:21:52 crc kubenswrapper[4830]: I0131 09:21:52.638632 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-757d775c7-jlwx2"] Jan 31 09:21:57 crc kubenswrapper[4830]: W0131 09:21:57.233116 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f46adde_a4fc_42fc_aa3b_de8154dbc99c.slice/crio-dd0e63d4cd687cd839300966cd28ae5304242d553f4576fe500e3e895362b5d8 WatchSource:0}: Error finding container dd0e63d4cd687cd839300966cd28ae5304242d553f4576fe500e3e895362b5d8: Status 404 returned error can't find the container with id dd0e63d4cd687cd839300966cd28ae5304242d553f4576fe500e3e895362b5d8 Jan 31 09:21:57 crc kubenswrapper[4830]: I0131 09:21:57.614513 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gk8dv" event={"ID":"e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1","Type":"ContainerStarted","Data":"064d501cd99b698124d2a07f97987f47be952c84d1ebb8a314228987ffa67799"} Jan 31 09:21:57 crc kubenswrapper[4830]: I0131 09:21:57.616561 
4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ps27t" event={"ID":"dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73","Type":"ContainerStarted","Data":"61c54112527c6e5cf470d1880721174a51a82928a34830934a46f00f3c2be7c4"} Jan 31 09:21:57 crc kubenswrapper[4830]: I0131 09:21:57.623426 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6f46adde-a4fc-42fc-aa3b-de8154dbc99c","Type":"ContainerStarted","Data":"dd0e63d4cd687cd839300966cd28ae5304242d553f4576fe500e3e895362b5d8"} Jan 31 09:22:03 crc kubenswrapper[4830]: E0131 09:22:03.309347 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 31 09:22:03 crc kubenswrapper[4830]: E0131 09:22:03.310172 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmswf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(2ca5d2f1-673e-4173-848a-8d32d33b8bcc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:22:03 crc kubenswrapper[4830]: E0131 09:22:03.311368 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="2ca5d2f1-673e-4173-848a-8d32d33b8bcc" Jan 31 09:22:03 crc 
kubenswrapper[4830]: E0131 09:22:03.683496 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="2ca5d2f1-673e-4173-848a-8d32d33b8bcc" Jan 31 09:22:04 crc kubenswrapper[4830]: E0131 09:22:04.855477 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 31 09:22:04 crc kubenswrapper[4830]: E0131 09:22:04.856222 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-clnl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-1_openstack(f60eed79-badf-4909-869b-edbfdfb774ac): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Jan 31 09:22:04 crc kubenswrapper[4830]: E0131 09:22:04.857636 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-1" podUID="f60eed79-badf-4909-869b-edbfdfb774ac" Jan 31 09:22:04 crc kubenswrapper[4830]: E0131 09:22:04.875117 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 31 09:22:04 crc kubenswrapper[4830]: E0131 09:22:04.875404 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2w7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(18af810d-9de4-4822-86d2-bb7e8a8a449b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Jan 31 09:22:04 crc kubenswrapper[4830]: E0131 09:22:04.876966 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="18af810d-9de4-4822-86d2-bb7e8a8a449b" Jan 31 09:22:04 crc kubenswrapper[4830]: E0131 09:22:04.949591 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 31 09:22:04 crc kubenswrapper[4830]: E0131 09:22:04.950344 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t7jqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-2_openstack(8e40a106-74cd-45ea-a936-c34daaf9ce6e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Jan 31 09:22:04 crc kubenswrapper[4830]: E0131 09:22:04.952092 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-2" podUID="8e40a106-74cd-45ea-a936-c34daaf9ce6e" Jan 31 09:22:05 crc kubenswrapper[4830]: E0131 09:22:05.002261 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 31 09:22:05 crc kubenswrapper[4830]: E0131 09:22:05.002462 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h526j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(f37f41b4-3b56-45f9-a368-0f772bcf3002): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:22:05 crc kubenswrapper[4830]: E0131 09:22:05.003606 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="f37f41b4-3b56-45f9-a368-0f772bcf3002" Jan 31 09:22:05 crc kubenswrapper[4830]: I0131 09:22:05.336838 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 09:22:05 crc kubenswrapper[4830]: 
E0131 09:22:05.707482 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="f37f41b4-3b56-45f9-a368-0f772bcf3002" Jan 31 09:22:05 crc kubenswrapper[4830]: E0131 09:22:05.708171 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-2" podUID="8e40a106-74cd-45ea-a936-c34daaf9ce6e" Jan 31 09:22:05 crc kubenswrapper[4830]: E0131 09:22:05.708219 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="18af810d-9de4-4822-86d2-bb7e8a8a449b" Jan 31 09:22:05 crc kubenswrapper[4830]: E0131 09:22:05.708530 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-1" podUID="f60eed79-badf-4909-869b-edbfdfb774ac" Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.674372 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-dqnl9"] Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.676952 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-dqnl9" Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.683943 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dqnl9"] Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.684137 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.809120 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c7f2be11-cbc3-426b-8d36-55d2bec20af6-ovs-rundir\") pod \"ovn-controller-metrics-dqnl9\" (UID: \"c7f2be11-cbc3-426b-8d36-55d2bec20af6\") " pod="openstack/ovn-controller-metrics-dqnl9" Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.809177 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7f2be11-cbc3-426b-8d36-55d2bec20af6-config\") pod \"ovn-controller-metrics-dqnl9\" (UID: \"c7f2be11-cbc3-426b-8d36-55d2bec20af6\") " pod="openstack/ovn-controller-metrics-dqnl9" Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.809314 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmfz2\" (UniqueName: \"kubernetes.io/projected/c7f2be11-cbc3-426b-8d36-55d2bec20af6-kube-api-access-xmfz2\") pod \"ovn-controller-metrics-dqnl9\" (UID: \"c7f2be11-cbc3-426b-8d36-55d2bec20af6\") " pod="openstack/ovn-controller-metrics-dqnl9" Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.809391 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7f2be11-cbc3-426b-8d36-55d2bec20af6-combined-ca-bundle\") pod \"ovn-controller-metrics-dqnl9\" (UID: \"c7f2be11-cbc3-426b-8d36-55d2bec20af6\") " pod="openstack/ovn-controller-metrics-dqnl9" Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.809420 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7f2be11-cbc3-426b-8d36-55d2bec20af6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dqnl9\" (UID: \"c7f2be11-cbc3-426b-8d36-55d2bec20af6\") " pod="openstack/ovn-controller-metrics-dqnl9" Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.809473 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c7f2be11-cbc3-426b-8d36-55d2bec20af6-ovn-rundir\") pod \"ovn-controller-metrics-dqnl9\" (UID: \"c7f2be11-cbc3-426b-8d36-55d2bec20af6\") " pod="openstack/ovn-controller-metrics-dqnl9" Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.855586 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7f4p6"] Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.912330 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c7f2be11-cbc3-426b-8d36-55d2bec20af6-ovn-rundir\") pod \"ovn-controller-metrics-dqnl9\" (UID: \"c7f2be11-cbc3-426b-8d36-55d2bec20af6\") " pod="openstack/ovn-controller-metrics-dqnl9" Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.912447 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c7f2be11-cbc3-426b-8d36-55d2bec20af6-ovs-rundir\") pod \"ovn-controller-metrics-dqnl9\" (UID: \"c7f2be11-cbc3-426b-8d36-55d2bec20af6\") " pod="openstack/ovn-controller-metrics-dqnl9" Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.912478 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7f2be11-cbc3-426b-8d36-55d2bec20af6-config\") pod \"ovn-controller-metrics-dqnl9\" (UID: \"c7f2be11-cbc3-426b-8d36-55d2bec20af6\") " pod="openstack/ovn-controller-metrics-dqnl9" Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.912564 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmfz2\" (UniqueName: \"kubernetes.io/projected/c7f2be11-cbc3-426b-8d36-55d2bec20af6-kube-api-access-xmfz2\") pod \"ovn-controller-metrics-dqnl9\" (UID: \"c7f2be11-cbc3-426b-8d36-55d2bec20af6\") " pod="openstack/ovn-controller-metrics-dqnl9" Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.912611 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7f2be11-cbc3-426b-8d36-55d2bec20af6-combined-ca-bundle\") pod \"ovn-controller-metrics-dqnl9\" (UID: \"c7f2be11-cbc3-426b-8d36-55d2bec20af6\") " pod="openstack/ovn-controller-metrics-dqnl9" Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.912633 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7f2be11-cbc3-426b-8d36-55d2bec20af6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dqnl9\" (UID: \"c7f2be11-cbc3-426b-8d36-55d2bec20af6\") " pod="openstack/ovn-controller-metrics-dqnl9" Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.912830 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c7f2be11-cbc3-426b-8d36-55d2bec20af6-ovn-rundir\") pod \"ovn-controller-metrics-dqnl9\" (UID: \"c7f2be11-cbc3-426b-8d36-55d2bec20af6\") " pod="openstack/ovn-controller-metrics-dqnl9" Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.913042 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c7f2be11-cbc3-426b-8d36-55d2bec20af6-ovs-rundir\") pod \"ovn-controller-metrics-dqnl9\" (UID: \"c7f2be11-cbc3-426b-8d36-55d2bec20af6\") " pod="openstack/ovn-controller-metrics-dqnl9" Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.914098 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7f2be11-cbc3-426b-8d36-55d2bec20af6-config\") pod \"ovn-controller-metrics-dqnl9\" (UID: \"c7f2be11-cbc3-426b-8d36-55d2bec20af6\") " pod="openstack/ovn-controller-metrics-dqnl9" Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.929702 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7f2be11-cbc3-426b-8d36-55d2bec20af6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dqnl9\" (UID: \"c7f2be11-cbc3-426b-8d36-55d2bec20af6\") " pod="openstack/ovn-controller-metrics-dqnl9" Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.950959 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmfz2\" (UniqueName: 
\"kubernetes.io/projected/c7f2be11-cbc3-426b-8d36-55d2bec20af6-kube-api-access-xmfz2\") pod \"ovn-controller-metrics-dqnl9\" (UID: \"c7f2be11-cbc3-426b-8d36-55d2bec20af6\") " pod="openstack/ovn-controller-metrics-dqnl9" Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.951138 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7f2be11-cbc3-426b-8d36-55d2bec20af6-combined-ca-bundle\") pod \"ovn-controller-metrics-dqnl9\" (UID: \"c7f2be11-cbc3-426b-8d36-55d2bec20af6\") " pod="openstack/ovn-controller-metrics-dqnl9" Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.954843 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-ll6c8"] Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.967803 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-ll6c8" Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.979093 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 31 09:22:07 crc kubenswrapper[4830]: I0131 09:22:07.983211 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-ll6c8"] Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.015596 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-dqnl9" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.117289 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feb7542a-b048-4323-a00f-9cdba1b8713f-config\") pod \"dnsmasq-dns-7fd796d7df-ll6c8\" (UID: \"feb7542a-b048-4323-a00f-9cdba1b8713f\") " pod="openstack/dnsmasq-dns-7fd796d7df-ll6c8" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.117409 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/feb7542a-b048-4323-a00f-9cdba1b8713f-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-ll6c8\" (UID: \"feb7542a-b048-4323-a00f-9cdba1b8713f\") " pod="openstack/dnsmasq-dns-7fd796d7df-ll6c8" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.117467 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q29jq\" (UniqueName: \"kubernetes.io/projected/feb7542a-b048-4323-a00f-9cdba1b8713f-kube-api-access-q29jq\") pod \"dnsmasq-dns-7fd796d7df-ll6c8\" (UID: \"feb7542a-b048-4323-a00f-9cdba1b8713f\") " pod="openstack/dnsmasq-dns-7fd796d7df-ll6c8" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.117491 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/feb7542a-b048-4323-a00f-9cdba1b8713f-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-ll6c8\" (UID: \"feb7542a-b048-4323-a00f-9cdba1b8713f\") " pod="openstack/dnsmasq-dns-7fd796d7df-ll6c8" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.153456 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rntrf"] Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.200189 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-ck5gn"] Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.202646 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.206479 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.219365 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-ck5gn"] Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.220530 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feb7542a-b048-4323-a00f-9cdba1b8713f-config\") pod \"dnsmasq-dns-7fd796d7df-ll6c8\" (UID: \"feb7542a-b048-4323-a00f-9cdba1b8713f\") " pod="openstack/dnsmasq-dns-7fd796d7df-ll6c8" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.220674 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/feb7542a-b048-4323-a00f-9cdba1b8713f-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-ll6c8\" (UID: \"feb7542a-b048-4323-a00f-9cdba1b8713f\") " pod="openstack/dnsmasq-dns-7fd796d7df-ll6c8" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.220706 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q29jq\" (UniqueName: \"kubernetes.io/projected/feb7542a-b048-4323-a00f-9cdba1b8713f-kube-api-access-q29jq\") pod \"dnsmasq-dns-7fd796d7df-ll6c8\" (UID: \"feb7542a-b048-4323-a00f-9cdba1b8713f\") " pod="openstack/dnsmasq-dns-7fd796d7df-ll6c8" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.220748 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/feb7542a-b048-4323-a00f-9cdba1b8713f-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-ll6c8\" (UID: \"feb7542a-b048-4323-a00f-9cdba1b8713f\") " pod="openstack/dnsmasq-dns-7fd796d7df-ll6c8" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.221948 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/feb7542a-b048-4323-a00f-9cdba1b8713f-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-ll6c8\" (UID: \"feb7542a-b048-4323-a00f-9cdba1b8713f\") " pod="openstack/dnsmasq-dns-7fd796d7df-ll6c8" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.222842 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feb7542a-b048-4323-a00f-9cdba1b8713f-config\") pod \"dnsmasq-dns-7fd796d7df-ll6c8\" (UID: \"feb7542a-b048-4323-a00f-9cdba1b8713f\") " pod="openstack/dnsmasq-dns-7fd796d7df-ll6c8" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.233891 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/feb7542a-b048-4323-a00f-9cdba1b8713f-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-ll6c8\" (UID: \"feb7542a-b048-4323-a00f-9cdba1b8713f\") " pod="openstack/dnsmasq-dns-7fd796d7df-ll6c8" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.275741 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q29jq\" (UniqueName: \"kubernetes.io/projected/feb7542a-b048-4323-a00f-9cdba1b8713f-kube-api-access-q29jq\") pod \"dnsmasq-dns-7fd796d7df-ll6c8\" (UID: \"feb7542a-b048-4323-a00f-9cdba1b8713f\") " pod="openstack/dnsmasq-dns-7fd796d7df-ll6c8" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.327579 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjs6z\" (UniqueName: \"kubernetes.io/projected/9644fbf3-9d54-43de-992f-7bccf944c31f-kube-api-access-fjs6z\") pod \"dnsmasq-dns-86db49b7ff-ck5gn\" (UID: \"9644fbf3-9d54-43de-992f-7bccf944c31f\") " pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.328073 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9644fbf3-9d54-43de-992f-7bccf944c31f-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-ck5gn\" (UID: \"9644fbf3-9d54-43de-992f-7bccf944c31f\") " pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.328165 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9644fbf3-9d54-43de-992f-7bccf944c31f-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-ck5gn\" (UID: \"9644fbf3-9d54-43de-992f-7bccf944c31f\") " pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.328412 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9644fbf3-9d54-43de-992f-7bccf944c31f-config\") pod \"dnsmasq-dns-86db49b7ff-ck5gn\" (UID: \"9644fbf3-9d54-43de-992f-7bccf944c31f\") " pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.328456 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9644fbf3-9d54-43de-992f-7bccf944c31f-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-ck5gn\" (UID: \"9644fbf3-9d54-43de-992f-7bccf944c31f\") " pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.376045 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-ll6c8" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.431335 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9644fbf3-9d54-43de-992f-7bccf944c31f-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-ck5gn\" (UID: \"9644fbf3-9d54-43de-992f-7bccf944c31f\") " pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.431891 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9644fbf3-9d54-43de-992f-7bccf944c31f-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-ck5gn\" (UID: \"9644fbf3-9d54-43de-992f-7bccf944c31f\") " pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.432079 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9644fbf3-9d54-43de-992f-7bccf944c31f-config\") pod \"dnsmasq-dns-86db49b7ff-ck5gn\" (UID: \"9644fbf3-9d54-43de-992f-7bccf944c31f\") " pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.432218 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9644fbf3-9d54-43de-992f-7bccf944c31f-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-ck5gn\" (UID: \"9644fbf3-9d54-43de-992f-7bccf944c31f\") " pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.432420 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjs6z\" (UniqueName: \"kubernetes.io/projected/9644fbf3-9d54-43de-992f-7bccf944c31f-kube-api-access-fjs6z\") pod \"dnsmasq-dns-86db49b7ff-ck5gn\" (UID: \"9644fbf3-9d54-43de-992f-7bccf944c31f\") " pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.433065 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9644fbf3-9d54-43de-992f-7bccf944c31f-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-ck5gn\" (UID: \"9644fbf3-9d54-43de-992f-7bccf944c31f\") " pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.433159 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9644fbf3-9d54-43de-992f-7bccf944c31f-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-ck5gn\" (UID: \"9644fbf3-9d54-43de-992f-7bccf944c31f\") " pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.433767 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9644fbf3-9d54-43de-992f-7bccf944c31f-config\") pod \"dnsmasq-dns-86db49b7ff-ck5gn\" (UID: \"9644fbf3-9d54-43de-992f-7bccf944c31f\") " pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.434777 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9644fbf3-9d54-43de-992f-7bccf944c31f-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-ck5gn\" (UID: \"9644fbf3-9d54-43de-992f-7bccf944c31f\") " pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 
09:22:08.458766 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjs6z\" (UniqueName: \"kubernetes.io/projected/9644fbf3-9d54-43de-992f-7bccf944c31f-kube-api-access-fjs6z\") pod \"dnsmasq-dns-86db49b7ff-ck5gn\" (UID: \"9644fbf3-9d54-43de-992f-7bccf944c31f\") " pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" Jan 31 09:22:08 crc kubenswrapper[4830]: I0131 09:22:08.534564 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" Jan 31 09:22:11 crc kubenswrapper[4830]: W0131 09:22:11.027539 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode47f665d_2a2a_464a_b6a3_e255f1440eda.slice/crio-7c04a089877a11a0067f49176bc92f363f20d9a4323fcf24ec0ed3c796451a3c WatchSource:0}: Error finding container 7c04a089877a11a0067f49176bc92f363f20d9a4323fcf24ec0ed3c796451a3c: Status 404 returned error can't find the container with id 7c04a089877a11a0067f49176bc92f363f20d9a4323fcf24ec0ed3c796451a3c Jan 31 09:22:11 crc kubenswrapper[4830]: E0131 09:22:11.477526 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a" Jan 31 09:22:11 crc kubenswrapper[4830]: E0131 09:22:11.477865 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init-config-reloader,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,Command:[/bin/prometheus-config-reloader],Args:[--watch-interval=0 --listen-address=:8081 --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:reloader-init,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:SHARD,Value:0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/prometheus/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:false,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rk7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(68109d40-9af0-4c37-bf02-7b4744dbab5f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 09:22:11 crc kubenswrapper[4830]: E0131 09:22:11.479125 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="68109d40-9af0-4c37-bf02-7b4744dbab5f" Jan 31 09:22:11 crc kubenswrapper[4830]: E0131 09:22:11.560385 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f" Jan 31 09:22:11 crc kubenswrapper[4830]: E0131 09:22:11.560622 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:observability-ui-dashboards,Image:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f,Command:[],Args:[-port=9443 -cert=/var/serving-cert/tls.crt 
-key=/var/serving-cert/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serving-cert,ReadOnly:true,MountPath:/var/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hgwfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod observability-ui-dashboards-66cbf594b5-swjf6_openshift-operators(51e241ad-2d92-41fb-a218-1a14cd40534d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 09:22:11 crc kubenswrapper[4830]: E0131 09:22:11.562811 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"observability-ui-dashboards\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-swjf6" podUID="51e241ad-2d92-41fb-a218-1a14cd40534d" Jan 31 09:22:11 crc kubenswrapper[4830]: E0131 09:22:11.687185 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 31 09:22:11 crc kubenswrapper[4830]: E0131 09:22:11.687497 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2bdfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(759f3f02-a9de-4e01-97f9-a97424c592a6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:22:11 crc kubenswrapper[4830]: E0131 09:22:11.689190 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="759f3f02-a9de-4e01-97f9-a97424c592a6" Jan 31 09:22:11 crc kubenswrapper[4830]: I0131 09:22:11.790094 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e47f665d-2a2a-464a-b6a3-e255f1440eda","Type":"ContainerStarted","Data":"7c04a089877a11a0067f49176bc92f363f20d9a4323fcf24ec0ed3c796451a3c"} Jan 31 09:22:11 crc kubenswrapper[4830]: E0131 09:22:11.791717 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="759f3f02-a9de-4e01-97f9-a97424c592a6" Jan 31 09:22:11 crc kubenswrapper[4830]: E0131 09:22:11.791976 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"observability-ui-dashboards\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f\\\"\"" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-swjf6" podUID="51e241ad-2d92-41fb-a218-1a14cd40534d" Jan 31 09:22:11 crc 
kubenswrapper[4830]: E0131 09:22:11.792313 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="68109d40-9af0-4c37-bf02-7b4744dbab5f" Jan 31 09:22:14 crc kubenswrapper[4830]: I0131 09:22:14.353138 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:22:14 crc kubenswrapper[4830]: I0131 09:22:14.353626 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:22:17 crc kubenswrapper[4830]: I0131 09:22:17.717911 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-757d775c7-jlwx2" podUID="ea4800d4-055a-4c40-8209-81998e951b16" containerName="console" containerID="cri-o://1eeb5600d7926e689472353f8abc0a4f04b6ce4979b11deb5a0fa88d521b6df5" gracePeriod=15 Jan 31 09:22:17 crc kubenswrapper[4830]: I0131 09:22:17.852182 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-757d775c7-jlwx2_ea4800d4-055a-4c40-8209-81998e951b16/console/0.log" Jan 31 09:22:17 crc kubenswrapper[4830]: I0131 09:22:17.852231 4830 generic.go:334] "Generic (PLEG): container finished" podID="ea4800d4-055a-4c40-8209-81998e951b16" containerID="1eeb5600d7926e689472353f8abc0a4f04b6ce4979b11deb5a0fa88d521b6df5" exitCode=2 Jan 31 09:22:17 crc kubenswrapper[4830]: I0131 09:22:17.852266 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-757d775c7-jlwx2" event={"ID":"ea4800d4-055a-4c40-8209-81998e951b16","Type":"ContainerDied","Data":"1eeb5600d7926e689472353f8abc0a4f04b6ce4979b11deb5a0fa88d521b6df5"} Jan 31 09:22:18 crc kubenswrapper[4830]: E0131 09:22:18.163296 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 09:22:18 crc kubenswrapper[4830]: E0131 09:22:18.163519 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-45nhf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-rntrf_openstack(ee43d170-0675-460c-88e1-5e19a0db0e37): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:22:18 crc kubenswrapper[4830]: E0131 09:22:18.164965 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-rntrf" podUID="ee43d170-0675-460c-88e1-5e19a0db0e37" Jan 31 09:22:19 crc kubenswrapper[4830]: I0131 09:22:19.600635 4830 patch_prober.go:28] interesting pod/console-757d775c7-jlwx2 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/health\": dial tcp 10.217.0.67:8443: connect: connection refused" start-of-body= Jan 31 09:22:19 crc kubenswrapper[4830]: I0131 09:22:19.601176 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-757d775c7-jlwx2" podUID="ea4800d4-055a-4c40-8209-81998e951b16" containerName="console" probeResult="failure" output="Get \"https://10.217.0.67:8443/health\": dial tcp 10.217.0.67:8443: connect: connection refused" Jan 31 09:22:20 crc kubenswrapper[4830]: E0131 09:22:20.588234 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 09:22:20 crc kubenswrapper[4830]: E0131 09:22:20.588982 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts 
--keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ksj9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-g55g6_openstack(5bc277c3-23c6-4b23-90de-63e622971c44): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:22:20 crc kubenswrapper[4830]: E0131 09:22:20.590254 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-g55g6" podUID="5bc277c3-23c6-4b23-90de-63e622971c44" Jan 31 09:22:20 crc kubenswrapper[4830]: E0131 09:22:20.674252 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 09:22:20 crc kubenswrapper[4830]: E0131 09:22:20.674452 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lvrkk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-7f4p6_openstack(c18c27da-a436-41fe-b4c9-bb0187e10694): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:22:20 crc kubenswrapper[4830]: E0131 09:22:20.675677 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-7f4p6" podUID="c18c27da-a436-41fe-b4c9-bb0187e10694" Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.173665 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rntrf" Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.312511 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee43d170-0675-460c-88e1-5e19a0db0e37-config\") pod \"ee43d170-0675-460c-88e1-5e19a0db0e37\" (UID: \"ee43d170-0675-460c-88e1-5e19a0db0e37\") " Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.312747 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45nhf\" (UniqueName: \"kubernetes.io/projected/ee43d170-0675-460c-88e1-5e19a0db0e37-kube-api-access-45nhf\") pod \"ee43d170-0675-460c-88e1-5e19a0db0e37\" (UID: \"ee43d170-0675-460c-88e1-5e19a0db0e37\") " Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.312842 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee43d170-0675-460c-88e1-5e19a0db0e37-dns-svc\") pod \"ee43d170-0675-460c-88e1-5e19a0db0e37\" (UID: \"ee43d170-0675-460c-88e1-5e19a0db0e37\") " Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.313517 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee43d170-0675-460c-88e1-5e19a0db0e37-config" (OuterVolumeSpecName: "config") pod "ee43d170-0675-460c-88e1-5e19a0db0e37" (UID: "ee43d170-0675-460c-88e1-5e19a0db0e37"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.313710 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee43d170-0675-460c-88e1-5e19a0db0e37-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ee43d170-0675-460c-88e1-5e19a0db0e37" (UID: "ee43d170-0675-460c-88e1-5e19a0db0e37"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.314358 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee43d170-0675-460c-88e1-5e19a0db0e37-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.314381 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee43d170-0675-460c-88e1-5e19a0db0e37-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.320812 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee43d170-0675-460c-88e1-5e19a0db0e37-kube-api-access-45nhf" (OuterVolumeSpecName: "kube-api-access-45nhf") pod "ee43d170-0675-460c-88e1-5e19a0db0e37" (UID: "ee43d170-0675-460c-88e1-5e19a0db0e37"). InnerVolumeSpecName "kube-api-access-45nhf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.416325 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45nhf\" (UniqueName: \"kubernetes.io/projected/ee43d170-0675-460c-88e1-5e19a0db0e37-kube-api-access-45nhf\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:21 crc kubenswrapper[4830]: E0131 09:22:21.570369 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 09:22:21 crc kubenswrapper[4830]: E0131 09:22:21.570668 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kpdgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-6wrv2_openstack(94ab9436-8a9d-4ad9-b2c2-676351a006d7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:22:21 crc kubenswrapper[4830]: E0131 09:22:21.571807 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-6wrv2" podUID="94ab9436-8a9d-4ad9-b2c2-676351a006d7" Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.778579 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g55g6" Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.793317 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7f4p6" Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.933103 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-g55g6" event={"ID":"5bc277c3-23c6-4b23-90de-63e622971c44","Type":"ContainerDied","Data":"bc8ee11c19343bfd12fbbacced16e8bb8dd4ad0f70cd16b4dcd69b0827dc3b74"} Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.933152 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g55g6" Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.939588 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc277c3-23c6-4b23-90de-63e622971c44-config\") pod \"5bc277c3-23c6-4b23-90de-63e622971c44\" (UID: \"5bc277c3-23c6-4b23-90de-63e622971c44\") " Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.939700 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvrkk\" (UniqueName: \"kubernetes.io/projected/c18c27da-a436-41fe-b4c9-bb0187e10694-kube-api-access-lvrkk\") pod \"c18c27da-a436-41fe-b4c9-bb0187e10694\" (UID: \"c18c27da-a436-41fe-b4c9-bb0187e10694\") " Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.939839 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bc277c3-23c6-4b23-90de-63e622971c44-dns-svc\") pod \"5bc277c3-23c6-4b23-90de-63e622971c44\" (UID: \"5bc277c3-23c6-4b23-90de-63e622971c44\") " Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.940195 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c18c27da-a436-41fe-b4c9-bb0187e10694-dns-svc\") pod \"c18c27da-a436-41fe-b4c9-bb0187e10694\" (UID: \"c18c27da-a436-41fe-b4c9-bb0187e10694\") " Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.940508 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c18c27da-a436-41fe-b4c9-bb0187e10694-config\") pod \"c18c27da-a436-41fe-b4c9-bb0187e10694\" (UID: \"c18c27da-a436-41fe-b4c9-bb0187e10694\") " Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.940631 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksj9p\" (UniqueName: \"kubernetes.io/projected/5bc277c3-23c6-4b23-90de-63e622971c44-kube-api-access-ksj9p\") pod \"5bc277c3-23c6-4b23-90de-63e622971c44\" (UID: \"5bc277c3-23c6-4b23-90de-63e622971c44\") " Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.942153 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7f4p6" event={"ID":"c18c27da-a436-41fe-b4c9-bb0187e10694","Type":"ContainerDied","Data":"2d3cf6f15dceb79f2aad4ab98e70bd9560d8ff62c9dcebdd69bba2b1ff1542e2"} Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.942325 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7f4p6" Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.942494 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bc277c3-23c6-4b23-90de-63e622971c44-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5bc277c3-23c6-4b23-90de-63e622971c44" (UID: "5bc277c3-23c6-4b23-90de-63e622971c44"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.943183 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bc277c3-23c6-4b23-90de-63e622971c44-config" (OuterVolumeSpecName: "config") pod "5bc277c3-23c6-4b23-90de-63e622971c44" (UID: "5bc277c3-23c6-4b23-90de-63e622971c44"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.944143 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc277c3-23c6-4b23-90de-63e622971c44-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.944171 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bc277c3-23c6-4b23-90de-63e622971c44-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.944217 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c18c27da-a436-41fe-b4c9-bb0187e10694-config" (OuterVolumeSpecName: "config") pod "c18c27da-a436-41fe-b4c9-bb0187e10694" (UID: "c18c27da-a436-41fe-b4c9-bb0187e10694"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.944777 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c18c27da-a436-41fe-b4c9-bb0187e10694-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c18c27da-a436-41fe-b4c9-bb0187e10694" (UID: "c18c27da-a436-41fe-b4c9-bb0187e10694"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.946311 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rntrf" Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.946407 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rntrf" event={"ID":"ee43d170-0675-460c-88e1-5e19a0db0e37","Type":"ContainerDied","Data":"c93621416fff84b74c56b1dcb53a4c301a954e5a950c665d5457014ea279e468"} Jan 31 09:22:21 crc kubenswrapper[4830]: I0131 09:22:21.960477 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c18c27da-a436-41fe-b4c9-bb0187e10694-kube-api-access-lvrkk" (OuterVolumeSpecName: "kube-api-access-lvrkk") pod "c18c27da-a436-41fe-b4c9-bb0187e10694" (UID: "c18c27da-a436-41fe-b4c9-bb0187e10694"). InnerVolumeSpecName "kube-api-access-lvrkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.000577 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bc277c3-23c6-4b23-90de-63e622971c44-kube-api-access-ksj9p" (OuterVolumeSpecName: "kube-api-access-ksj9p") pod "5bc277c3-23c6-4b23-90de-63e622971c44" (UID: "5bc277c3-23c6-4b23-90de-63e622971c44"). InnerVolumeSpecName "kube-api-access-ksj9p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.048440 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c18c27da-a436-41fe-b4c9-bb0187e10694-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.048480 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c18c27da-a436-41fe-b4c9-bb0187e10694-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.048490 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksj9p\" (UniqueName: \"kubernetes.io/projected/5bc277c3-23c6-4b23-90de-63e622971c44-kube-api-access-ksj9p\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.048499 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvrkk\" (UniqueName: \"kubernetes.io/projected/c18c27da-a436-41fe-b4c9-bb0187e10694-kube-api-access-lvrkk\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.080347 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rntrf"] Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.097672 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rntrf"] Jan 31 09:22:22 crc kubenswrapper[4830]: E0131 09:22:22.117300 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee43d170_0675_460c_88e1_5e19a0db0e37.slice/crio-c93621416fff84b74c56b1dcb53a4c301a954e5a950c665d5457014ea279e468\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee43d170_0675_460c_88e1_5e19a0db0e37.slice\": RecentStats: unable to find data in memory cache]" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.179322 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-ll6c8"] Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.298353 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee43d170-0675-460c-88e1-5e19a0db0e37" path="/var/lib/kubelet/pods/ee43d170-0675-460c-88e1-5e19a0db0e37/volumes" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.394031 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g55g6"] Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.437555 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g55g6"] Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.449035 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dqnl9"] Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.468613 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7f4p6"] Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.478715 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7f4p6"] Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.489567 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-ck5gn"] Jan 31 09:22:22 crc kubenswrapper[4830]: W0131 09:22:22.582384 4830 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfeb7542a_b048_4323_a00f_9cdba1b8713f.slice/crio-9589ae5e569582cf85030f479f8b9efd588d2e36fc98ed0035e60b3235545d48 WatchSource:0}: Error finding container 9589ae5e569582cf85030f479f8b9efd588d2e36fc98ed0035e60b3235545d48: Status 404 returned error can't find the container with id 9589ae5e569582cf85030f479f8b9efd588d2e36fc98ed0035e60b3235545d48 Jan 31 09:22:22 crc kubenswrapper[4830]: W0131 09:22:22.590109 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7f2be11_cbc3_426b_8d36_55d2bec20af6.slice/crio-f92e92a7f6614b661b6479ab65cba8ccf98fe06cb68d13bbf5553503662c2b98 WatchSource:0}: Error finding container f92e92a7f6614b661b6479ab65cba8ccf98fe06cb68d13bbf5553503662c2b98: Status 404 returned error can't find the container with id f92e92a7f6614b661b6479ab65cba8ccf98fe06cb68d13bbf5553503662c2b98 Jan 31 09:22:22 crc kubenswrapper[4830]: W0131 09:22:22.651843 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9644fbf3_9d54_43de_992f_7bccf944c31f.slice/crio-e30f4d3e016320cfb76856e779a7b17c29b6cc7ad0e9b23135ba943288b6cd30 WatchSource:0}: Error finding container e30f4d3e016320cfb76856e779a7b17c29b6cc7ad0e9b23135ba943288b6cd30: Status 404 returned error can't find the container with id e30f4d3e016320cfb76856e779a7b17c29b6cc7ad0e9b23135ba943288b6cd30 Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.684952 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-757d775c7-jlwx2_ea4800d4-055a-4c40-8209-81998e951b16/console/0.log" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.685042 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-757d775c7-jlwx2" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.693875 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6wrv2" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.772394 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea4800d4-055a-4c40-8209-81998e951b16-console-config\") pod \"ea4800d4-055a-4c40-8209-81998e951b16\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.772477 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpdgm\" (UniqueName: \"kubernetes.io/projected/94ab9436-8a9d-4ad9-b2c2-676351a006d7-kube-api-access-kpdgm\") pod \"94ab9436-8a9d-4ad9-b2c2-676351a006d7\" (UID: \"94ab9436-8a9d-4ad9-b2c2-676351a006d7\") " Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.772542 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea4800d4-055a-4c40-8209-81998e951b16-oauth-serving-cert\") pod \"ea4800d4-055a-4c40-8209-81998e951b16\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.772612 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea4800d4-055a-4c40-8209-81998e951b16-service-ca\") pod \"ea4800d4-055a-4c40-8209-81998e951b16\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.772785 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea4800d4-055a-4c40-8209-81998e951b16-console-oauth-config\") pod \"ea4800d4-055a-4c40-8209-81998e951b16\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.772826 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea4800d4-055a-4c40-8209-81998e951b16-trusted-ca-bundle\") pod \"ea4800d4-055a-4c40-8209-81998e951b16\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.772857 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52v4d\" (UniqueName: \"kubernetes.io/projected/ea4800d4-055a-4c40-8209-81998e951b16-kube-api-access-52v4d\") pod \"ea4800d4-055a-4c40-8209-81998e951b16\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.772904 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea4800d4-055a-4c40-8209-81998e951b16-console-serving-cert\") pod \"ea4800d4-055a-4c40-8209-81998e951b16\" (UID: \"ea4800d4-055a-4c40-8209-81998e951b16\") " Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.772937 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94ab9436-8a9d-4ad9-b2c2-676351a006d7-config\") pod \"94ab9436-8a9d-4ad9-b2c2-676351a006d7\" (UID: \"94ab9436-8a9d-4ad9-b2c2-676351a006d7\") " Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.774374 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea4800d4-055a-4c40-8209-81998e951b16-oauth-serving-cert" (OuterVolumeSpecName: 
"oauth-serving-cert") pod "ea4800d4-055a-4c40-8209-81998e951b16" (UID: "ea4800d4-055a-4c40-8209-81998e951b16"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.774426 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea4800d4-055a-4c40-8209-81998e951b16-service-ca" (OuterVolumeSpecName: "service-ca") pod "ea4800d4-055a-4c40-8209-81998e951b16" (UID: "ea4800d4-055a-4c40-8209-81998e951b16"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.774452 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea4800d4-055a-4c40-8209-81998e951b16-console-config" (OuterVolumeSpecName: "console-config") pod "ea4800d4-055a-4c40-8209-81998e951b16" (UID: "ea4800d4-055a-4c40-8209-81998e951b16"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.774456 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94ab9436-8a9d-4ad9-b2c2-676351a006d7-config" (OuterVolumeSpecName: "config") pod "94ab9436-8a9d-4ad9-b2c2-676351a006d7" (UID: "94ab9436-8a9d-4ad9-b2c2-676351a006d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.776120 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea4800d4-055a-4c40-8209-81998e951b16-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ea4800d4-055a-4c40-8209-81998e951b16" (UID: "ea4800d4-055a-4c40-8209-81998e951b16"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.778172 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94ab9436-8a9d-4ad9-b2c2-676351a006d7-kube-api-access-kpdgm" (OuterVolumeSpecName: "kube-api-access-kpdgm") pod "94ab9436-8a9d-4ad9-b2c2-676351a006d7" (UID: "94ab9436-8a9d-4ad9-b2c2-676351a006d7"). InnerVolumeSpecName "kube-api-access-kpdgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.778875 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea4800d4-055a-4c40-8209-81998e951b16-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ea4800d4-055a-4c40-8209-81998e951b16" (UID: "ea4800d4-055a-4c40-8209-81998e951b16"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.778919 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea4800d4-055a-4c40-8209-81998e951b16-kube-api-access-52v4d" (OuterVolumeSpecName: "kube-api-access-52v4d") pod "ea4800d4-055a-4c40-8209-81998e951b16" (UID: "ea4800d4-055a-4c40-8209-81998e951b16"). InnerVolumeSpecName "kube-api-access-52v4d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.779275 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea4800d4-055a-4c40-8209-81998e951b16-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ea4800d4-055a-4c40-8209-81998e951b16" (UID: "ea4800d4-055a-4c40-8209-81998e951b16"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.877168 4830 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea4800d4-055a-4c40-8209-81998e951b16-console-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.877214 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpdgm\" (UniqueName: \"kubernetes.io/projected/94ab9436-8a9d-4ad9-b2c2-676351a006d7-kube-api-access-kpdgm\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.877226 4830 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea4800d4-055a-4c40-8209-81998e951b16-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.877235 4830 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea4800d4-055a-4c40-8209-81998e951b16-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.877246 4830 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea4800d4-055a-4c40-8209-81998e951b16-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.877254 4830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea4800d4-055a-4c40-8209-81998e951b16-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.877263 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52v4d\" (UniqueName: \"kubernetes.io/projected/ea4800d4-055a-4c40-8209-81998e951b16-kube-api-access-52v4d\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.877272 4830 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea4800d4-055a-4c40-8209-81998e951b16-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.877285 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94ab9436-8a9d-4ad9-b2c2-676351a006d7-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.963903 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-757d775c7-jlwx2_ea4800d4-055a-4c40-8209-81998e951b16/console/0.log" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.964377 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-757d775c7-jlwx2" event={"ID":"ea4800d4-055a-4c40-8209-81998e951b16","Type":"ContainerDied","Data":"2d4488494ca7962993e958682574a7876f0eb8e1d9bace1e9fc9aa935926ab6d"} Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 
09:22:22.964427 4830 scope.go:117] "RemoveContainer" containerID="1eeb5600d7926e689472353f8abc0a4f04b6ce4979b11deb5a0fa88d521b6df5" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.964576 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-757d775c7-jlwx2" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.966451 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" event={"ID":"9644fbf3-9d54-43de-992f-7bccf944c31f","Type":"ContainerStarted","Data":"e30f4d3e016320cfb76856e779a7b17c29b6cc7ad0e9b23135ba943288b6cd30"} Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.967567 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-6wrv2" event={"ID":"94ab9436-8a9d-4ad9-b2c2-676351a006d7","Type":"ContainerDied","Data":"d180f3b6444092f6b002974cc29a825f5102be0c222502b68e1813410d910ef3"} Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.967664 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6wrv2" Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.980465 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dqnl9" event={"ID":"c7f2be11-cbc3-426b-8d36-55d2bec20af6","Type":"ContainerStarted","Data":"f92e92a7f6614b661b6479ab65cba8ccf98fe06cb68d13bbf5553503662c2b98"} Jan 31 09:22:22 crc kubenswrapper[4830]: I0131 09:22:22.982056 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-ll6c8" event={"ID":"feb7542a-b048-4323-a00f-9cdba1b8713f","Type":"ContainerStarted","Data":"9589ae5e569582cf85030f479f8b9efd588d2e36fc98ed0035e60b3235545d48"} Jan 31 09:22:23 crc kubenswrapper[4830]: I0131 09:22:23.019635 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-757d775c7-jlwx2"] Jan 31 09:22:23 crc kubenswrapper[4830]: I0131 09:22:23.030312 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-757d775c7-jlwx2"] Jan 31 09:22:23 crc kubenswrapper[4830]: I0131 09:22:23.080906 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6wrv2"] Jan 31 09:22:23 crc kubenswrapper[4830]: I0131 09:22:23.092109 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6wrv2"] Jan 31 09:22:23 crc kubenswrapper[4830]: E0131 09:22:23.478500 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 31 09:22:23 crc kubenswrapper[4830]: E0131 09:22:23.478592 4830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 31 09:22:23 crc kubenswrapper[4830]: E0131 09:22:23.478941 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods 
--namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rx6cz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(5359b6c7-375f-4424-bb43-f4b2a4d40329): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 09:22:23 crc kubenswrapper[4830]: E0131 09:22:23.480641 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="5359b6c7-375f-4424-bb43-f4b2a4d40329" Jan 31 09:22:24 crc kubenswrapper[4830]: I0131 09:22:24.000883 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b3c26555-4046-499e-96c9-5a83b8322d8e","Type":"ContainerStarted","Data":"cda79f4a9622516f4146308af267fe379193a2580f8c06312dc472f6c88455a8"} Jan 31 09:22:24 crc kubenswrapper[4830]: I0131 09:22:24.001544 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 31 09:22:24 crc kubenswrapper[4830]: I0131 09:22:24.007477 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gk8dv" event={"ID":"e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1","Type":"ContainerStarted","Data":"e7646fc50c7d852ae0b8ee15a5de9535d9fef4346f8303c19a88dc541aaf68ac"} Jan 31 09:22:24 crc kubenswrapper[4830]: I0131 09:22:24.013020 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ps27t" event={"ID":"dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73","Type":"ContainerStarted","Data":"df12a2de8c1bc3cf41b7a97377ae3a22083f5c9d1478f3546a5fdc4f4ab3b779"} Jan 31 09:22:24 crc 
kubenswrapper[4830]: I0131 09:22:24.013394 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ps27t" Jan 31 09:22:24 crc kubenswrapper[4830]: I0131 09:22:24.020517 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6f46adde-a4fc-42fc-aa3b-de8154dbc99c","Type":"ContainerStarted","Data":"3326ff65aa7d5526b063e8d3bcc3d704020aac1a0b530cf7c4e1fcfd0ef1a84b"} Jan 31 09:22:24 crc kubenswrapper[4830]: E0131 09:22:24.022190 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="5359b6c7-375f-4424-bb43-f4b2a4d40329" Jan 31 09:22:24 crc kubenswrapper[4830]: I0131 09:22:24.079172 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=8.145112103 podStartE2EDuration="46.079137025s" podCreationTimestamp="2026-01-31 09:21:38 +0000 UTC" firstStartedPulling="2026-01-31 09:21:41.851688026 +0000 UTC m=+1246.345050468" lastFinishedPulling="2026-01-31 09:22:19.785712948 +0000 UTC m=+1284.279075390" observedRunningTime="2026-01-31 09:22:24.076966763 +0000 UTC m=+1288.570329215" watchObservedRunningTime="2026-01-31 09:22:24.079137025 +0000 UTC m=+1288.572499467" Jan 31 09:22:24 crc kubenswrapper[4830]: I0131 09:22:24.130281 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ps27t" podStartSLOduration=14.749776099 podStartE2EDuration="40.130251541s" podCreationTimestamp="2026-01-31 09:21:44 +0000 UTC" firstStartedPulling="2026-01-31 09:21:57.219518074 +0000 UTC m=+1261.712880516" lastFinishedPulling="2026-01-31 09:22:22.599993516 +0000 UTC m=+1287.093355958" observedRunningTime="2026-01-31 09:22:24.129446178 +0000 UTC m=+1288.622808620" watchObservedRunningTime="2026-01-31 09:22:24.130251541 +0000 UTC m=+1288.623613983" Jan 31 09:22:24 crc kubenswrapper[4830]: I0131 09:22:24.282318 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bc277c3-23c6-4b23-90de-63e622971c44" path="/var/lib/kubelet/pods/5bc277c3-23c6-4b23-90de-63e622971c44/volumes" Jan 31 09:22:24 crc kubenswrapper[4830]: I0131 09:22:24.283585 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94ab9436-8a9d-4ad9-b2c2-676351a006d7" path="/var/lib/kubelet/pods/94ab9436-8a9d-4ad9-b2c2-676351a006d7/volumes" Jan 31 09:22:24 crc kubenswrapper[4830]: I0131 09:22:24.284204 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c18c27da-a436-41fe-b4c9-bb0187e10694" path="/var/lib/kubelet/pods/c18c27da-a436-41fe-b4c9-bb0187e10694/volumes" Jan 31 09:22:24 crc kubenswrapper[4830]: I0131 09:22:24.284925 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea4800d4-055a-4c40-8209-81998e951b16" path="/var/lib/kubelet/pods/ea4800d4-055a-4c40-8209-81998e951b16/volumes" Jan 31 09:22:25 crc kubenswrapper[4830]: I0131 09:22:25.032127 4830 generic.go:334] "Generic (PLEG): container finished" podID="feb7542a-b048-4323-a00f-9cdba1b8713f" containerID="dde1aa6a2cee6935983e3261958c75380b14a6f833287161d3a37a4e5640bb1f" exitCode=0 Jan 31 09:22:25 crc kubenswrapper[4830]: I0131 09:22:25.032297 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-ll6c8" 
event={"ID":"feb7542a-b048-4323-a00f-9cdba1b8713f","Type":"ContainerDied","Data":"dde1aa6a2cee6935983e3261958c75380b14a6f833287161d3a37a4e5640bb1f"} Jan 31 09:22:25 crc kubenswrapper[4830]: I0131 09:22:25.034463 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2ca5d2f1-673e-4173-848a-8d32d33b8bcc","Type":"ContainerStarted","Data":"5a95c143dbe1eea918d6986ca854f7912f381ec8c8a8bca5adc962f6a3ac5aab"} Jan 31 09:22:25 crc kubenswrapper[4830]: I0131 09:22:25.038963 4830 generic.go:334] "Generic (PLEG): container finished" podID="e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1" containerID="e7646fc50c7d852ae0b8ee15a5de9535d9fef4346f8303c19a88dc541aaf68ac" exitCode=0 Jan 31 09:22:25 crc kubenswrapper[4830]: I0131 09:22:25.039232 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gk8dv" event={"ID":"e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1","Type":"ContainerDied","Data":"e7646fc50c7d852ae0b8ee15a5de9535d9fef4346f8303c19a88dc541aaf68ac"} Jan 31 09:22:25 crc kubenswrapper[4830]: I0131 09:22:25.041919 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e47f665d-2a2a-464a-b6a3-e255f1440eda","Type":"ContainerStarted","Data":"d0165f56b35e419592b76d3bacb833330d4ad7115e820cb61c3afca43227aa63"} Jan 31 09:22:25 crc kubenswrapper[4830]: I0131 09:22:25.043682 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f37f41b4-3b56-45f9-a368-0f772bcf3002","Type":"ContainerStarted","Data":"cdb56f991da7c792dafd4bd87c59024a24b3277f0cc29c284bc867ed48845277"} Jan 31 09:22:28 crc kubenswrapper[4830]: I0131 09:22:28.852713 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 31 09:22:30 crc kubenswrapper[4830]: I0131 09:22:30.925116 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-ll6c8"] Jan 31 09:22:30 crc kubenswrapper[4830]: I0131 09:22:30.962066 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-wmdpv"] Jan 31 09:22:30 crc kubenswrapper[4830]: E0131 09:22:30.962623 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4800d4-055a-4c40-8209-81998e951b16" containerName="console" Jan 31 09:22:30 crc kubenswrapper[4830]: I0131 09:22:30.962646 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4800d4-055a-4c40-8209-81998e951b16" containerName="console" Jan 31 09:22:30 crc kubenswrapper[4830]: I0131 09:22:30.962936 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea4800d4-055a-4c40-8209-81998e951b16" containerName="console" Jan 31 09:22:30 crc kubenswrapper[4830]: I0131 09:22:30.964419 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wmdpv" Jan 31 09:22:30 crc kubenswrapper[4830]: I0131 09:22:30.977466 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wmdpv"] Jan 31 09:22:31 crc kubenswrapper[4830]: I0131 09:22:31.023150 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72ksn\" (UniqueName: \"kubernetes.io/projected/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-kube-api-access-72ksn\") pod \"dnsmasq-dns-698758b865-wmdpv\" (UID: \"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d\") " pod="openstack/dnsmasq-dns-698758b865-wmdpv" Jan 31 09:22:31 crc kubenswrapper[4830]: I0131 09:22:31.023239 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-wmdpv\" (UID: \"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d\") " pod="openstack/dnsmasq-dns-698758b865-wmdpv" Jan 31 09:22:31 crc kubenswrapper[4830]: I0131 09:22:31.023350 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-wmdpv\" (UID: \"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d\") " pod="openstack/dnsmasq-dns-698758b865-wmdpv" Jan 31 09:22:31 crc kubenswrapper[4830]: I0131 09:22:31.023382 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-dns-svc\") pod \"dnsmasq-dns-698758b865-wmdpv\" (UID: \"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d\") " pod="openstack/dnsmasq-dns-698758b865-wmdpv" Jan 31 09:22:31 crc kubenswrapper[4830]: I0131 09:22:31.023449 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-config\") pod \"dnsmasq-dns-698758b865-wmdpv\" (UID: \"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d\") " pod="openstack/dnsmasq-dns-698758b865-wmdpv" Jan 31 09:22:31 crc kubenswrapper[4830]: I0131 09:22:31.125444 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72ksn\" (UniqueName: \"kubernetes.io/projected/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-kube-api-access-72ksn\") pod \"dnsmasq-dns-698758b865-wmdpv\" (UID: \"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d\") " pod="openstack/dnsmasq-dns-698758b865-wmdpv" Jan 31 09:22:31 crc kubenswrapper[4830]: I0131 09:22:31.125520 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-wmdpv\" (UID: \"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d\") " pod="openstack/dnsmasq-dns-698758b865-wmdpv" Jan 31 09:22:31 crc kubenswrapper[4830]: I0131 09:22:31.125582 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-wmdpv\" (UID: \"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d\") " pod="openstack/dnsmasq-dns-698758b865-wmdpv" Jan 31 09:22:31 crc kubenswrapper[4830]: I0131 09:22:31.125612 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-dns-svc\") pod \"dnsmasq-dns-698758b865-wmdpv\" (UID: \"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d\") " pod="openstack/dnsmasq-dns-698758b865-wmdpv" Jan 31 09:22:31 crc kubenswrapper[4830]: I0131 09:22:31.125692 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-config\") pod \"dnsmasq-dns-698758b865-wmdpv\" (UID: \"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d\") " pod="openstack/dnsmasq-dns-698758b865-wmdpv" Jan 31 09:22:31 crc kubenswrapper[4830]: I0131 09:22:31.126788 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-wmdpv\" (UID: \"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d\") " pod="openstack/dnsmasq-dns-698758b865-wmdpv" Jan 31 09:22:31 crc kubenswrapper[4830]: I0131 09:22:31.126787 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-wmdpv\" (UID: \"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d\") " pod="openstack/dnsmasq-dns-698758b865-wmdpv" Jan 31 09:22:31 crc kubenswrapper[4830]: I0131 09:22:31.126787 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-config\") pod \"dnsmasq-dns-698758b865-wmdpv\" (UID: \"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d\") " pod="openstack/dnsmasq-dns-698758b865-wmdpv" Jan 31 09:22:31 crc kubenswrapper[4830]: I0131 09:22:31.127376 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-dns-svc\") pod \"dnsmasq-dns-698758b865-wmdpv\" (UID: \"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d\") " pod="openstack/dnsmasq-dns-698758b865-wmdpv" Jan 31 09:22:31 crc kubenswrapper[4830]: I0131 09:22:31.183236 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72ksn\" (UniqueName: \"kubernetes.io/projected/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-kube-api-access-72ksn\") pod \"dnsmasq-dns-698758b865-wmdpv\" (UID: \"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d\") " pod="openstack/dnsmasq-dns-698758b865-wmdpv" Jan 31 09:22:31 crc kubenswrapper[4830]: I0131 09:22:31.298711 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wmdpv" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.177883 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.187657 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.191794 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.192352 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.192597 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.197500 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-zfpkh" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.203874 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.256511 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1023f27a-9c1d-4818-a3f5-94946296ae46-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"1023f27a-9c1d-4818-a3f5-94946296ae46\") " pod="openstack/swift-storage-0" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.256804 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1023f27a-9c1d-4818-a3f5-94946296ae46-lock\") pod \"swift-storage-0\" (UID: \"1023f27a-9c1d-4818-a3f5-94946296ae46\") " pod="openstack/swift-storage-0" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.256904 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln8b8\" (UniqueName: \"kubernetes.io/projected/1023f27a-9c1d-4818-a3f5-94946296ae46-kube-api-access-ln8b8\") pod \"swift-storage-0\" (UID: \"1023f27a-9c1d-4818-a3f5-94946296ae46\") " pod="openstack/swift-storage-0" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.256955 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1023f27a-9c1d-4818-a3f5-94946296ae46-cache\") pod \"swift-storage-0\" (UID: \"1023f27a-9c1d-4818-a3f5-94946296ae46\") " pod="openstack/swift-storage-0" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.257308 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1023f27a-9c1d-4818-a3f5-94946296ae46-etc-swift\") pod \"swift-storage-0\" (UID: \"1023f27a-9c1d-4818-a3f5-94946296ae46\") " pod="openstack/swift-storage-0" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.257523 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1f9a735d-b6fb-4a01-9b2f-2c00f4be5e7d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1f9a735d-b6fb-4a01-9b2f-2c00f4be5e7d\") pod \"swift-storage-0\" (UID: \"1023f27a-9c1d-4818-a3f5-94946296ae46\") " pod="openstack/swift-storage-0" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.360507 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln8b8\" (UniqueName: \"kubernetes.io/projected/1023f27a-9c1d-4818-a3f5-94946296ae46-kube-api-access-ln8b8\") pod \"swift-storage-0\" (UID: \"1023f27a-9c1d-4818-a3f5-94946296ae46\") " pod="openstack/swift-storage-0" 
Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.360596 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1023f27a-9c1d-4818-a3f5-94946296ae46-cache\") pod \"swift-storage-0\" (UID: \"1023f27a-9c1d-4818-a3f5-94946296ae46\") " pod="openstack/swift-storage-0" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.360662 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1023f27a-9c1d-4818-a3f5-94946296ae46-etc-swift\") pod \"swift-storage-0\" (UID: \"1023f27a-9c1d-4818-a3f5-94946296ae46\") " pod="openstack/swift-storage-0" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.360717 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1f9a735d-b6fb-4a01-9b2f-2c00f4be5e7d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1f9a735d-b6fb-4a01-9b2f-2c00f4be5e7d\") pod \"swift-storage-0\" (UID: \"1023f27a-9c1d-4818-a3f5-94946296ae46\") " pod="openstack/swift-storage-0" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.360879 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1023f27a-9c1d-4818-a3f5-94946296ae46-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"1023f27a-9c1d-4818-a3f5-94946296ae46\") " pod="openstack/swift-storage-0" Jan 31 09:22:32 crc kubenswrapper[4830]: E0131 09:22:32.360990 4830 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:22:32 crc kubenswrapper[4830]: E0131 09:22:32.361046 4830 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 31 09:22:32 crc kubenswrapper[4830]: E0131 09:22:32.361161 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1023f27a-9c1d-4818-a3f5-94946296ae46-etc-swift podName:1023f27a-9c1d-4818-a3f5-94946296ae46 nodeName:}" failed. No retries permitted until 2026-01-31 09:22:32.861126213 +0000 UTC m=+1297.354488655 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1023f27a-9c1d-4818-a3f5-94946296ae46-etc-swift") pod "swift-storage-0" (UID: "1023f27a-9c1d-4818-a3f5-94946296ae46") : configmap "swift-ring-files" not found Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.361025 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1023f27a-9c1d-4818-a3f5-94946296ae46-lock\") pod \"swift-storage-0\" (UID: \"1023f27a-9c1d-4818-a3f5-94946296ae46\") " pod="openstack/swift-storage-0" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.361719 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1023f27a-9c1d-4818-a3f5-94946296ae46-lock\") pod \"swift-storage-0\" (UID: \"1023f27a-9c1d-4818-a3f5-94946296ae46\") " pod="openstack/swift-storage-0" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.361945 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1023f27a-9c1d-4818-a3f5-94946296ae46-cache\") pod \"swift-storage-0\" (UID: \"1023f27a-9c1d-4818-a3f5-94946296ae46\") " pod="openstack/swift-storage-0" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.366551 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.366605 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1f9a735d-b6fb-4a01-9b2f-2c00f4be5e7d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1f9a735d-b6fb-4a01-9b2f-2c00f4be5e7d\") pod \"swift-storage-0\" (UID: \"1023f27a-9c1d-4818-a3f5-94946296ae46\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/005705f3b12f32034e6325c76110ad31e2fc01891288fad7266911d5be0e5ed6/globalmount\"" pod="openstack/swift-storage-0" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.371483 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1023f27a-9c1d-4818-a3f5-94946296ae46-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"1023f27a-9c1d-4818-a3f5-94946296ae46\") " pod="openstack/swift-storage-0" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.387538 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln8b8\" (UniqueName: \"kubernetes.io/projected/1023f27a-9c1d-4818-a3f5-94946296ae46-kube-api-access-ln8b8\") pod \"swift-storage-0\" (UID: \"1023f27a-9c1d-4818-a3f5-94946296ae46\") " pod="openstack/swift-storage-0" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.420886 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1f9a735d-b6fb-4a01-9b2f-2c00f4be5e7d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1f9a735d-b6fb-4a01-9b2f-2c00f4be5e7d\") pod \"swift-storage-0\" (UID: \"1023f27a-9c1d-4818-a3f5-94946296ae46\") " pod="openstack/swift-storage-0" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.719637 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-4qmzq"] Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.721559 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4qmzq" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.731312 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.731415 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.731318 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.744835 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4qmzq"] Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.774640 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-dispersionconf\") pod \"swift-ring-rebalance-4qmzq\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " pod="openstack/swift-ring-rebalance-4qmzq" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.774742 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6zv9\" (UniqueName: \"kubernetes.io/projected/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-kube-api-access-b6zv9\") pod \"swift-ring-rebalance-4qmzq\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " pod="openstack/swift-ring-rebalance-4qmzq" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.774782 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-ring-data-devices\") pod \"swift-ring-rebalance-4qmzq\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " pod="openstack/swift-ring-rebalance-4qmzq" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.775320 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-etc-swift\") pod \"swift-ring-rebalance-4qmzq\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " pod="openstack/swift-ring-rebalance-4qmzq" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.775586 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-swiftconf\") pod \"swift-ring-rebalance-4qmzq\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " pod="openstack/swift-ring-rebalance-4qmzq" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.775669 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-combined-ca-bundle\") pod \"swift-ring-rebalance-4qmzq\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " pod="openstack/swift-ring-rebalance-4qmzq" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.775828 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-scripts\") pod \"swift-ring-rebalance-4qmzq\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " pod="openstack/swift-ring-rebalance-4qmzq" Jan 31 
09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.878325 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-etc-swift\") pod \"swift-ring-rebalance-4qmzq\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " pod="openstack/swift-ring-rebalance-4qmzq" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.878403 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1023f27a-9c1d-4818-a3f5-94946296ae46-etc-swift\") pod \"swift-storage-0\" (UID: \"1023f27a-9c1d-4818-a3f5-94946296ae46\") " pod="openstack/swift-storage-0" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.878451 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-swiftconf\") pod \"swift-ring-rebalance-4qmzq\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " pod="openstack/swift-ring-rebalance-4qmzq" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.878490 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-combined-ca-bundle\") pod \"swift-ring-rebalance-4qmzq\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " pod="openstack/swift-ring-rebalance-4qmzq" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.878526 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-scripts\") pod \"swift-ring-rebalance-4qmzq\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " pod="openstack/swift-ring-rebalance-4qmzq" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.878556 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-dispersionconf\") pod \"swift-ring-rebalance-4qmzq\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " pod="openstack/swift-ring-rebalance-4qmzq" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.878583 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6zv9\" (UniqueName: \"kubernetes.io/projected/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-kube-api-access-b6zv9\") pod \"swift-ring-rebalance-4qmzq\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " pod="openstack/swift-ring-rebalance-4qmzq" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.878616 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-ring-data-devices\") pod \"swift-ring-rebalance-4qmzq\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " pod="openstack/swift-ring-rebalance-4qmzq" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.879713 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-ring-data-devices\") pod \"swift-ring-rebalance-4qmzq\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " pod="openstack/swift-ring-rebalance-4qmzq" Jan 31 09:22:32 crc kubenswrapper[4830]: E0131 09:22:32.879766 4830 projected.go:288] Couldn't get configMap openstack/swift-ring-files: 
configmap "swift-ring-files" not found Jan 31 09:22:32 crc kubenswrapper[4830]: E0131 09:22:32.879806 4830 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 31 09:22:32 crc kubenswrapper[4830]: E0131 09:22:32.879899 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1023f27a-9c1d-4818-a3f5-94946296ae46-etc-swift podName:1023f27a-9c1d-4818-a3f5-94946296ae46 nodeName:}" failed. No retries permitted until 2026-01-31 09:22:33.879864312 +0000 UTC m=+1298.373226914 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1023f27a-9c1d-4818-a3f5-94946296ae46-etc-swift") pod "swift-storage-0" (UID: "1023f27a-9c1d-4818-a3f5-94946296ae46") : configmap "swift-ring-files" not found Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.880667 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-scripts\") pod \"swift-ring-rebalance-4qmzq\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " pod="openstack/swift-ring-rebalance-4qmzq" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.881066 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-etc-swift\") pod \"swift-ring-rebalance-4qmzq\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " pod="openstack/swift-ring-rebalance-4qmzq" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.887016 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-dispersionconf\") pod \"swift-ring-rebalance-4qmzq\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " pod="openstack/swift-ring-rebalance-4qmzq" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.890711 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-combined-ca-bundle\") pod \"swift-ring-rebalance-4qmzq\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " pod="openstack/swift-ring-rebalance-4qmzq" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.898432 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6zv9\" (UniqueName: \"kubernetes.io/projected/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-kube-api-access-b6zv9\") pod \"swift-ring-rebalance-4qmzq\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " pod="openstack/swift-ring-rebalance-4qmzq" Jan 31 09:22:32 crc kubenswrapper[4830]: I0131 09:22:32.899043 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-swiftconf\") pod \"swift-ring-rebalance-4qmzq\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " pod="openstack/swift-ring-rebalance-4qmzq" Jan 31 09:22:33 crc kubenswrapper[4830]: I0131 09:22:33.048611 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4qmzq" Jan 31 09:22:33 crc kubenswrapper[4830]: I0131 09:22:33.905135 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1023f27a-9c1d-4818-a3f5-94946296ae46-etc-swift\") pod \"swift-storage-0\" (UID: \"1023f27a-9c1d-4818-a3f5-94946296ae46\") " pod="openstack/swift-storage-0" Jan 31 09:22:33 crc kubenswrapper[4830]: E0131 09:22:33.905395 4830 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:22:33 crc kubenswrapper[4830]: E0131 09:22:33.905753 4830 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 31 09:22:33 crc kubenswrapper[4830]: E0131 09:22:33.905825 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1023f27a-9c1d-4818-a3f5-94946296ae46-etc-swift podName:1023f27a-9c1d-4818-a3f5-94946296ae46 nodeName:}" failed. No retries permitted until 2026-01-31 09:22:35.905803239 +0000 UTC m=+1300.399165681 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1023f27a-9c1d-4818-a3f5-94946296ae46-etc-swift") pod "swift-storage-0" (UID: "1023f27a-9c1d-4818-a3f5-94946296ae46") : configmap "swift-ring-files" not found Jan 31 09:22:35 crc kubenswrapper[4830]: I0131 09:22:35.978593 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1023f27a-9c1d-4818-a3f5-94946296ae46-etc-swift\") pod \"swift-storage-0\" (UID: \"1023f27a-9c1d-4818-a3f5-94946296ae46\") " pod="openstack/swift-storage-0" Jan 31 09:22:35 crc kubenswrapper[4830]: E0131 09:22:35.978975 4830 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:22:35 crc kubenswrapper[4830]: E0131 09:22:35.980302 4830 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 31 09:22:35 crc kubenswrapper[4830]: E0131 09:22:35.980507 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1023f27a-9c1d-4818-a3f5-94946296ae46-etc-swift podName:1023f27a-9c1d-4818-a3f5-94946296ae46 nodeName:}" failed. No retries permitted until 2026-01-31 09:22:39.980472146 +0000 UTC m=+1304.473834608 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1023f27a-9c1d-4818-a3f5-94946296ae46-etc-swift") pod "swift-storage-0" (UID: "1023f27a-9c1d-4818-a3f5-94946296ae46") : configmap "swift-ring-files" not found Jan 31 09:22:38 crc kubenswrapper[4830]: E0131 09:22:38.092361 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" Jan 31 09:22:38 crc kubenswrapper[4830]: E0131 09:22:38.092941 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n66bh668h698h57bh66h5fh68ch78h5ddh8bh675h589h565h5d5h54bhddh5d5h5d4h5b7hfbh9dh645h678hfbh679hb7h5c8h59fh694h688h5d7h65dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovs-rundir,ReadOnly:true,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-rundir,ReadOnly:true,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xmfz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-metrics-dqnl9_openstack(c7f2be11-cbc3-426b-8d36-55d2bec20af6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:22:38 crc kubenswrapper[4830]: E0131 09:22:38.094742 4830 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-metrics-dqnl9" podUID="c7f2be11-cbc3-426b-8d36-55d2bec20af6" Jan 31 09:22:38 crc kubenswrapper[4830]: E0131 09:22:38.099946 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" Jan 31 09:22:38 crc kubenswrapper[4830]: E0131 09:22:38.100175 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n55bh66ch5c8h555hb8h587h59chbh94h67dh66dhbh565h687h674h668h555h5fch5bfh646hf9h667hc4h64fhf8h5f7h5b9h5c8hf8h6dh9dh644q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4r6rj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(6f46adde-a4fc-42fc-aa3b-de8154dbc99c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:22:38 crc kubenswrapper[4830]: E0131 09:22:38.101559 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="6f46adde-a4fc-42fc-aa3b-de8154dbc99c" Jan 31 09:22:38 crc kubenswrapper[4830]: I0131 09:22:38.212874 4830 generic.go:334] "Generic (PLEG): container finished" 
podID="9644fbf3-9d54-43de-992f-7bccf944c31f" containerID="95558a00c0b065522e42a6dc000df9142d743e6d9868ed0f83b3796ce405b1af" exitCode=0 Jan 31 09:22:38 crc kubenswrapper[4830]: I0131 09:22:38.213002 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" event={"ID":"9644fbf3-9d54-43de-992f-7bccf944c31f","Type":"ContainerDied","Data":"95558a00c0b065522e42a6dc000df9142d743e6d9868ed0f83b3796ce405b1af"} Jan 31 09:22:38 crc kubenswrapper[4830]: I0131 09:22:38.224597 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"68109d40-9af0-4c37-bf02-7b4744dbab5f","Type":"ContainerStarted","Data":"39dcfcca13639143aaebae3cb77d40e361f67c6338ad727f1999e2a36e3ffabd"} Jan 31 09:22:38 crc kubenswrapper[4830]: I0131 09:22:38.293182 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"f60eed79-badf-4909-869b-edbfdfb774ac","Type":"ContainerStarted","Data":"55ba60a30982fec7fc25c3710647f237bdb8bc45991a8a20664ac57e97a9a09e"} Jan 31 09:22:38 crc kubenswrapper[4830]: I0131 09:22:38.293259 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"8e40a106-74cd-45ea-a936-c34daaf9ce6e","Type":"ContainerStarted","Data":"11ff703748b26671c1b2a53117176e9a226db42f7c70f7520609779e5573b1cb"} Jan 31 09:22:38 crc kubenswrapper[4830]: I0131 09:22:38.295248 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"18af810d-9de4-4822-86d2-bb7e8a8a449b","Type":"ContainerStarted","Data":"b82d566e252a5e263e93f29e01a43117ffad3aa3827523d8c1a930eedb4b72fd"} Jan 31 09:22:38 crc kubenswrapper[4830]: E0131 09:22:38.395959 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="6f46adde-a4fc-42fc-aa3b-de8154dbc99c" Jan 31 09:22:38 crc kubenswrapper[4830]: I0131 09:22:38.736930 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4qmzq"] Jan 31 09:22:38 crc kubenswrapper[4830]: I0131 09:22:38.864503 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wmdpv"] Jan 31 09:22:38 crc kubenswrapper[4830]: I0131 09:22:38.875280 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 31 09:22:39 crc kubenswrapper[4830]: W0131 09:22:39.065225 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0db9fdf8_5944_4eac_b0fe_9ca72f89ea5d.slice/crio-26bcc215a4ade8767a1389a9f600fd1bd72f1c942dff001fa3f4cdd72d57aee4 WatchSource:0}: Error finding container 26bcc215a4ade8767a1389a9f600fd1bd72f1c942dff001fa3f4cdd72d57aee4: Status 404 returned error can't find the container with id 26bcc215a4ade8767a1389a9f600fd1bd72f1c942dff001fa3f4cdd72d57aee4 Jan 31 09:22:39 crc kubenswrapper[4830]: I0131 09:22:39.279091 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 31 09:22:39 crc kubenswrapper[4830]: I0131 09:22:39.313798 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gk8dv" 
event={"ID":"e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1","Type":"ContainerStarted","Data":"876708325f56d0de9d79a5c38115f4f07ecff7851d03e8e144909435919afabb"} Jan 31 09:22:39 crc kubenswrapper[4830]: I0131 09:22:39.317663 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" event={"ID":"9644fbf3-9d54-43de-992f-7bccf944c31f","Type":"ContainerStarted","Data":"3745a68a5912defd6e394798fac7f86ac3fa5f8bec603680a7a6fee33012ce78"} Jan 31 09:22:39 crc kubenswrapper[4830]: I0131 09:22:39.317815 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" Jan 31 09:22:39 crc kubenswrapper[4830]: I0131 09:22:39.320245 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wmdpv" event={"ID":"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d","Type":"ContainerStarted","Data":"26bcc215a4ade8767a1389a9f600fd1bd72f1c942dff001fa3f4cdd72d57aee4"} Jan 31 09:22:39 crc kubenswrapper[4830]: I0131 09:22:39.322805 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4qmzq" event={"ID":"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6","Type":"ContainerStarted","Data":"9a4cb37c97a90840ebc2a05bc842ee0e9e18cab468c8775bd3c4993d2bcfc35b"} Jan 31 09:22:39 crc kubenswrapper[4830]: I0131 09:22:39.324555 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-swjf6" event={"ID":"51e241ad-2d92-41fb-a218-1a14cd40534d","Type":"ContainerStarted","Data":"7f7345068f427e53e2e9eef9e2129a920df4d244560406c0733a3e656fa4cc90"} Jan 31 09:22:39 crc kubenswrapper[4830]: I0131 09:22:39.328195 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dqnl9" event={"ID":"c7f2be11-cbc3-426b-8d36-55d2bec20af6","Type":"ContainerStarted","Data":"fefa58e63e51b1dca1e1edc0ddb917762f1c2a152a0c2e8fe51f9eb5744f5d83"} Jan 31 09:22:39 crc kubenswrapper[4830]: I0131 09:22:39.342194 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-ll6c8" event={"ID":"feb7542a-b048-4323-a00f-9cdba1b8713f","Type":"ContainerStarted","Data":"7eea604f8f4ad7fac69b2968ddf80bf85cc10c1cdc141647f80b606a0132cf56"} Jan 31 09:22:39 crc kubenswrapper[4830]: I0131 09:22:39.342649 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-ll6c8" podUID="feb7542a-b048-4323-a00f-9cdba1b8713f" containerName="dnsmasq-dns" containerID="cri-o://7eea604f8f4ad7fac69b2968ddf80bf85cc10c1cdc141647f80b606a0132cf56" gracePeriod=10 Jan 31 09:22:39 crc kubenswrapper[4830]: I0131 09:22:39.342826 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 31 09:22:39 crc kubenswrapper[4830]: I0131 09:22:39.342854 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-ll6c8" Jan 31 09:22:39 crc kubenswrapper[4830]: I0131 09:22:39.349497 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" podStartSLOduration=30.240733759 podStartE2EDuration="31.349471056s" podCreationTimestamp="2026-01-31 09:22:08 +0000 UTC" firstStartedPulling="2026-01-31 09:22:22.663455023 +0000 UTC m=+1287.156817465" lastFinishedPulling="2026-01-31 09:22:23.77219232 +0000 UTC m=+1288.265554762" observedRunningTime="2026-01-31 09:22:39.338860604 +0000 UTC m=+1303.832223066" watchObservedRunningTime="2026-01-31 09:22:39.349471056 +0000 UTC 
m=+1303.842833508" Jan 31 09:22:39 crc kubenswrapper[4830]: I0131 09:22:39.365677 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-swjf6" podStartSLOduration=8.229052181 podStartE2EDuration="58.365643337s" podCreationTimestamp="2026-01-31 09:21:41 +0000 UTC" firstStartedPulling="2026-01-31 09:21:48.300107576 +0000 UTC m=+1252.793470018" lastFinishedPulling="2026-01-31 09:22:38.436698732 +0000 UTC m=+1302.930061174" observedRunningTime="2026-01-31 09:22:39.358229826 +0000 UTC m=+1303.851592278" watchObservedRunningTime="2026-01-31 09:22:39.365643337 +0000 UTC m=+1303.859005779" Jan 31 09:22:39 crc kubenswrapper[4830]: I0131 09:22:39.406886 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-dqnl9" podStartSLOduration=-9223372004.447918 podStartE2EDuration="32.406858581s" podCreationTimestamp="2026-01-31 09:22:07 +0000 UTC" firstStartedPulling="2026-01-31 09:22:22.600056158 +0000 UTC m=+1287.093418600" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:22:39.390006131 +0000 UTC m=+1303.883368603" watchObservedRunningTime="2026-01-31 09:22:39.406858581 +0000 UTC m=+1303.900221023" Jan 31 09:22:39 crc kubenswrapper[4830]: I0131 09:22:39.430465 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-ll6c8" podStartSLOduration=31.18984653 podStartE2EDuration="32.430436243s" podCreationTimestamp="2026-01-31 09:22:07 +0000 UTC" firstStartedPulling="2026-01-31 09:22:22.585463872 +0000 UTC m=+1287.078826314" lastFinishedPulling="2026-01-31 09:22:23.826053585 +0000 UTC m=+1288.319416027" observedRunningTime="2026-01-31 09:22:39.426515281 +0000 UTC m=+1303.919877723" watchObservedRunningTime="2026-01-31 09:22:39.430436243 +0000 UTC m=+1303.923798685" Jan 31 09:22:39 crc kubenswrapper[4830]: I0131 09:22:39.735600 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 31 09:22:40 crc kubenswrapper[4830]: I0131 09:22:40.028029 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1023f27a-9c1d-4818-a3f5-94946296ae46-etc-swift\") pod \"swift-storage-0\" (UID: \"1023f27a-9c1d-4818-a3f5-94946296ae46\") " pod="openstack/swift-storage-0" Jan 31 09:22:40 crc kubenswrapper[4830]: E0131 09:22:40.028697 4830 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:22:40 crc kubenswrapper[4830]: E0131 09:22:40.028716 4830 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 31 09:22:40 crc kubenswrapper[4830]: E0131 09:22:40.028801 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1023f27a-9c1d-4818-a3f5-94946296ae46-etc-swift podName:1023f27a-9c1d-4818-a3f5-94946296ae46 nodeName:}" failed. No retries permitted until 2026-01-31 09:22:48.028777709 +0000 UTC m=+1312.522140151 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1023f27a-9c1d-4818-a3f5-94946296ae46-etc-swift") pod "swift-storage-0" (UID: "1023f27a-9c1d-4818-a3f5-94946296ae46") : configmap "swift-ring-files" not found Jan 31 09:22:40 crc kubenswrapper[4830]: I0131 09:22:40.358740 4830 generic.go:334] "Generic (PLEG): container finished" podID="feb7542a-b048-4323-a00f-9cdba1b8713f" containerID="7eea604f8f4ad7fac69b2968ddf80bf85cc10c1cdc141647f80b606a0132cf56" exitCode=0 Jan 31 09:22:40 crc kubenswrapper[4830]: I0131 09:22:40.358903 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-ll6c8" event={"ID":"feb7542a-b048-4323-a00f-9cdba1b8713f","Type":"ContainerDied","Data":"7eea604f8f4ad7fac69b2968ddf80bf85cc10c1cdc141647f80b606a0132cf56"} Jan 31 09:22:40 crc kubenswrapper[4830]: I0131 09:22:40.358950 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-ll6c8" event={"ID":"feb7542a-b048-4323-a00f-9cdba1b8713f","Type":"ContainerDied","Data":"9589ae5e569582cf85030f479f8b9efd588d2e36fc98ed0035e60b3235545d48"} Jan 31 09:22:40 crc kubenswrapper[4830]: I0131 09:22:40.358967 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9589ae5e569582cf85030f479f8b9efd588d2e36fc98ed0035e60b3235545d48" Jan 31 09:22:40 crc kubenswrapper[4830]: I0131 09:22:40.364241 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5359b6c7-375f-4424-bb43-f4b2a4d40329","Type":"ContainerStarted","Data":"8010b0b10105b7b4db334cd3b6743ebe9db3e797280e0df78b98d7bd4145477d"} Jan 31 09:22:40 crc kubenswrapper[4830]: I0131 09:22:40.373544 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e47f665d-2a2a-464a-b6a3-e255f1440eda","Type":"ContainerStarted","Data":"84b0c8d77559723f83a47ce97a2a639465a3b3e92f2e47e82bd93a4cacfb6f25"} Jan 31 09:22:40 crc kubenswrapper[4830]: I0131 09:22:40.408532 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=27.187291706 podStartE2EDuration="54.408503778s" podCreationTimestamp="2026-01-31 09:21:46 +0000 UTC" firstStartedPulling="2026-01-31 09:22:11.270824076 +0000 UTC m=+1275.764186528" lastFinishedPulling="2026-01-31 09:22:38.492036168 +0000 UTC m=+1302.985398600" observedRunningTime="2026-01-31 09:22:40.399210383 +0000 UTC m=+1304.892572875" watchObservedRunningTime="2026-01-31 09:22:40.408503778 +0000 UTC m=+1304.901866220" Jan 31 09:22:40 crc kubenswrapper[4830]: I0131 09:22:40.409383 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-ll6c8" Jan 31 09:22:40 crc kubenswrapper[4830]: I0131 09:22:40.540274 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/feb7542a-b048-4323-a00f-9cdba1b8713f-dns-svc\") pod \"feb7542a-b048-4323-a00f-9cdba1b8713f\" (UID: \"feb7542a-b048-4323-a00f-9cdba1b8713f\") " Jan 31 09:22:40 crc kubenswrapper[4830]: I0131 09:22:40.540471 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/feb7542a-b048-4323-a00f-9cdba1b8713f-ovsdbserver-nb\") pod \"feb7542a-b048-4323-a00f-9cdba1b8713f\" (UID: \"feb7542a-b048-4323-a00f-9cdba1b8713f\") " Jan 31 09:22:40 crc kubenswrapper[4830]: I0131 09:22:40.540498 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feb7542a-b048-4323-a00f-9cdba1b8713f-config\") pod \"feb7542a-b048-4323-a00f-9cdba1b8713f\" (UID: \"feb7542a-b048-4323-a00f-9cdba1b8713f\") " Jan 31 09:22:40 crc kubenswrapper[4830]: I0131 09:22:40.540551 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q29jq\" (UniqueName: \"kubernetes.io/projected/feb7542a-b048-4323-a00f-9cdba1b8713f-kube-api-access-q29jq\") pod \"feb7542a-b048-4323-a00f-9cdba1b8713f\" (UID: \"feb7542a-b048-4323-a00f-9cdba1b8713f\") " Jan 31 09:22:40 crc kubenswrapper[4830]: I0131 09:22:40.560955 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feb7542a-b048-4323-a00f-9cdba1b8713f-kube-api-access-q29jq" (OuterVolumeSpecName: "kube-api-access-q29jq") pod "feb7542a-b048-4323-a00f-9cdba1b8713f" (UID: "feb7542a-b048-4323-a00f-9cdba1b8713f"). InnerVolumeSpecName "kube-api-access-q29jq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:22:40 crc kubenswrapper[4830]: I0131 09:22:40.624649 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/feb7542a-b048-4323-a00f-9cdba1b8713f-config" (OuterVolumeSpecName: "config") pod "feb7542a-b048-4323-a00f-9cdba1b8713f" (UID: "feb7542a-b048-4323-a00f-9cdba1b8713f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:22:40 crc kubenswrapper[4830]: I0131 09:22:40.650619 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feb7542a-b048-4323-a00f-9cdba1b8713f-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:40 crc kubenswrapper[4830]: I0131 09:22:40.650669 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q29jq\" (UniqueName: \"kubernetes.io/projected/feb7542a-b048-4323-a00f-9cdba1b8713f-kube-api-access-q29jq\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:40 crc kubenswrapper[4830]: I0131 09:22:40.651566 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/feb7542a-b048-4323-a00f-9cdba1b8713f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "feb7542a-b048-4323-a00f-9cdba1b8713f" (UID: "feb7542a-b048-4323-a00f-9cdba1b8713f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:22:40 crc kubenswrapper[4830]: I0131 09:22:40.700238 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/feb7542a-b048-4323-a00f-9cdba1b8713f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "feb7542a-b048-4323-a00f-9cdba1b8713f" (UID: "feb7542a-b048-4323-a00f-9cdba1b8713f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:22:40 crc kubenswrapper[4830]: I0131 09:22:40.753096 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/feb7542a-b048-4323-a00f-9cdba1b8713f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:40 crc kubenswrapper[4830]: I0131 09:22:40.753164 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/feb7542a-b048-4323-a00f-9cdba1b8713f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:41 crc kubenswrapper[4830]: I0131 09:22:41.387652 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gk8dv" event={"ID":"e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1","Type":"ContainerStarted","Data":"1db037ba5d0c3fef0615bc233874c187a426c13da7cb277252df23d5d37a9335"} Jan 31 09:22:41 crc kubenswrapper[4830]: I0131 09:22:41.388710 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-gk8dv" Jan 31 09:22:41 crc kubenswrapper[4830]: I0131 09:22:41.390685 4830 generic.go:334] "Generic (PLEG): container finished" podID="0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d" containerID="98b81f451efacf26232ff028069eaa9d83f52e01303ee9156b0af10ee6e28bf4" exitCode=0 Jan 31 09:22:41 crc kubenswrapper[4830]: I0131 09:22:41.390910 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wmdpv" event={"ID":"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d","Type":"ContainerDied","Data":"98b81f451efacf26232ff028069eaa9d83f52e01303ee9156b0af10ee6e28bf4"} Jan 31 09:22:41 crc kubenswrapper[4830]: I0131 09:22:41.393918 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6f46adde-a4fc-42fc-aa3b-de8154dbc99c","Type":"ContainerStarted","Data":"4e8ff6c5fa2c65935a629f903ab205dce51a75067b65ba062c950972814dedb5"} Jan 31 09:22:41 crc kubenswrapper[4830]: I0131 09:22:41.398965 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"759f3f02-a9de-4e01-97f9-a97424c592a6","Type":"ContainerStarted","Data":"1bae92a840384b060e4d01df81c92a143f6bda7ee6adbcf67e5d9346d46a2d67"} Jan 31 09:22:41 crc kubenswrapper[4830]: I0131 09:22:41.399051 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-ll6c8" Jan 31 09:22:41 crc kubenswrapper[4830]: I0131 09:22:41.399322 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 31 09:22:41 crc kubenswrapper[4830]: I0131 09:22:41.428150 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-gk8dv" podStartSLOduration=33.573311539 podStartE2EDuration="57.428124075s" podCreationTimestamp="2026-01-31 09:21:44 +0000 UTC" firstStartedPulling="2026-01-31 09:21:57.224664021 +0000 UTC m=+1261.718026463" lastFinishedPulling="2026-01-31 09:22:21.079476557 +0000 UTC m=+1285.572838999" observedRunningTime="2026-01-31 09:22:41.417791061 +0000 UTC m=+1305.911153503" watchObservedRunningTime="2026-01-31 09:22:41.428124075 +0000 UTC m=+1305.921486507" Jan 31 09:22:41 crc kubenswrapper[4830]: I0131 09:22:41.493415 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=33.325580782 podStartE2EDuration="58.493392035s" podCreationTimestamp="2026-01-31 09:21:43 +0000 UTC" firstStartedPulling="2026-01-31 09:21:57.236451067 +0000 UTC m=+1261.729813509" lastFinishedPulling="2026-01-31 09:22:22.40426232 +0000 UTC m=+1286.897624762" observedRunningTime="2026-01-31 09:22:41.483017379 +0000 UTC m=+1305.976379821" watchObservedRunningTime="2026-01-31 09:22:41.493392035 +0000 UTC m=+1305.986754477" Jan 31 09:22:41 crc kubenswrapper[4830]: I0131 09:22:41.543436 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=5.8326082790000005 podStartE2EDuration="1m1.54341248s" podCreationTimestamp="2026-01-31 09:21:40 +0000 UTC" firstStartedPulling="2026-01-31 09:21:42.867070573 +0000 UTC m=+1247.360433015" lastFinishedPulling="2026-01-31 09:22:38.577874774 +0000 UTC m=+1303.071237216" observedRunningTime="2026-01-31 09:22:41.530414979 +0000 UTC m=+1306.023777421" watchObservedRunningTime="2026-01-31 09:22:41.54341248 +0000 UTC m=+1306.036774922" Jan 31 09:22:41 crc kubenswrapper[4830]: I0131 09:22:41.604132 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-ll6c8"] Jan 31 09:22:41 crc kubenswrapper[4830]: I0131 09:22:41.616630 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-ll6c8"] Jan 31 09:22:42 crc kubenswrapper[4830]: I0131 09:22:42.270524 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feb7542a-b048-4323-a00f-9cdba1b8713f" path="/var/lib/kubelet/pods/feb7542a-b048-4323-a00f-9cdba1b8713f/volumes" Jan 31 09:22:42 crc kubenswrapper[4830]: I0131 09:22:42.464167 4830 generic.go:334] "Generic (PLEG): container finished" podID="f37f41b4-3b56-45f9-a368-0f772bcf3002" containerID="cdb56f991da7c792dafd4bd87c59024a24b3277f0cc29c284bc867ed48845277" exitCode=0 Jan 31 09:22:42 crc kubenswrapper[4830]: I0131 09:22:42.464757 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f37f41b4-3b56-45f9-a368-0f772bcf3002","Type":"ContainerDied","Data":"cdb56f991da7c792dafd4bd87c59024a24b3277f0cc29c284bc867ed48845277"} Jan 31 09:22:42 crc kubenswrapper[4830]: I0131 09:22:42.465206 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-gk8dv" Jan 31 09:22:42 crc kubenswrapper[4830]: I0131 09:22:42.788558 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 31 09:22:42 crc kubenswrapper[4830]: I0131 09:22:42.850356 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 31 09:22:43 crc kubenswrapper[4830]: I0131 09:22:43.477959 4830 generic.go:334] "Generic (PLEG): container finished" podID="2ca5d2f1-673e-4173-848a-8d32d33b8bcc" containerID="5a95c143dbe1eea918d6986ca854f7912f381ec8c8a8bca5adc962f6a3ac5aab" exitCode=0 Jan 31 09:22:43 crc kubenswrapper[4830]: I0131 09:22:43.478066 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2ca5d2f1-673e-4173-848a-8d32d33b8bcc","Type":"ContainerDied","Data":"5a95c143dbe1eea918d6986ca854f7912f381ec8c8a8bca5adc962f6a3ac5aab"} Jan 31 09:22:43 crc kubenswrapper[4830]: I0131 09:22:43.479328 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 31 09:22:43 crc kubenswrapper[4830]: I0131 09:22:43.532177 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 31 09:22:43 crc kubenswrapper[4830]: I0131 09:22:43.929928 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 31 09:22:43 crc kubenswrapper[4830]: E0131 09:22:43.931133 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb7542a-b048-4323-a00f-9cdba1b8713f" containerName="init" Jan 31 09:22:43 crc kubenswrapper[4830]: I0131 09:22:43.931160 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb7542a-b048-4323-a00f-9cdba1b8713f" containerName="init" Jan 31 09:22:43 crc kubenswrapper[4830]: E0131 09:22:43.931181 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb7542a-b048-4323-a00f-9cdba1b8713f" containerName="dnsmasq-dns" Jan 31 09:22:43 crc kubenswrapper[4830]: I0131 09:22:43.931188 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb7542a-b048-4323-a00f-9cdba1b8713f" containerName="dnsmasq-dns" Jan 31 09:22:43 crc kubenswrapper[4830]: I0131 09:22:43.931415 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="feb7542a-b048-4323-a00f-9cdba1b8713f" containerName="dnsmasq-dns" Jan 31 09:22:43 crc kubenswrapper[4830]: I0131 09:22:43.933084 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 31 09:22:43 crc kubenswrapper[4830]: I0131 09:22:43.939709 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-p4pdk" Jan 31 09:22:43 crc kubenswrapper[4830]: I0131 09:22:43.940153 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 31 09:22:43 crc kubenswrapper[4830]: I0131 09:22:43.940268 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 31 09:22:43 crc kubenswrapper[4830]: I0131 09:22:43.940596 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 31 09:22:43 crc kubenswrapper[4830]: I0131 09:22:43.955473 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.058215 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26868249-8749-44ba-9f03-e4691815285d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"26868249-8749-44ba-9f03-e4691815285d\") " pod="openstack/ovn-northd-0" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.058265 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blh8q\" (UniqueName: \"kubernetes.io/projected/26868249-8749-44ba-9f03-e4691815285d-kube-api-access-blh8q\") pod \"ovn-northd-0\" (UID: \"26868249-8749-44ba-9f03-e4691815285d\") " pod="openstack/ovn-northd-0" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.058290 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/26868249-8749-44ba-9f03-e4691815285d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"26868249-8749-44ba-9f03-e4691815285d\") " pod="openstack/ovn-northd-0" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.058352 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/26868249-8749-44ba-9f03-e4691815285d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"26868249-8749-44ba-9f03-e4691815285d\") " pod="openstack/ovn-northd-0" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.058416 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26868249-8749-44ba-9f03-e4691815285d-config\") pod \"ovn-northd-0\" (UID: \"26868249-8749-44ba-9f03-e4691815285d\") " pod="openstack/ovn-northd-0" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.058449 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26868249-8749-44ba-9f03-e4691815285d-scripts\") pod \"ovn-northd-0\" (UID: \"26868249-8749-44ba-9f03-e4691815285d\") " pod="openstack/ovn-northd-0" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.058474 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/26868249-8749-44ba-9f03-e4691815285d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"26868249-8749-44ba-9f03-e4691815285d\") " pod="openstack/ovn-northd-0" Jan 31 09:22:44 crc kubenswrapper[4830]: 
I0131 09:22:44.160457 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26868249-8749-44ba-9f03-e4691815285d-config\") pod \"ovn-northd-0\" (UID: \"26868249-8749-44ba-9f03-e4691815285d\") " pod="openstack/ovn-northd-0" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.160533 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26868249-8749-44ba-9f03-e4691815285d-scripts\") pod \"ovn-northd-0\" (UID: \"26868249-8749-44ba-9f03-e4691815285d\") " pod="openstack/ovn-northd-0" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.160561 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/26868249-8749-44ba-9f03-e4691815285d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"26868249-8749-44ba-9f03-e4691815285d\") " pod="openstack/ovn-northd-0" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.160647 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26868249-8749-44ba-9f03-e4691815285d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"26868249-8749-44ba-9f03-e4691815285d\") " pod="openstack/ovn-northd-0" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.160675 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blh8q\" (UniqueName: \"kubernetes.io/projected/26868249-8749-44ba-9f03-e4691815285d-kube-api-access-blh8q\") pod \"ovn-northd-0\" (UID: \"26868249-8749-44ba-9f03-e4691815285d\") " pod="openstack/ovn-northd-0" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.160699 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/26868249-8749-44ba-9f03-e4691815285d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"26868249-8749-44ba-9f03-e4691815285d\") " pod="openstack/ovn-northd-0" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.160770 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/26868249-8749-44ba-9f03-e4691815285d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"26868249-8749-44ba-9f03-e4691815285d\") " pod="openstack/ovn-northd-0" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.162612 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26868249-8749-44ba-9f03-e4691815285d-config\") pod \"ovn-northd-0\" (UID: \"26868249-8749-44ba-9f03-e4691815285d\") " pod="openstack/ovn-northd-0" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.162937 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26868249-8749-44ba-9f03-e4691815285d-scripts\") pod \"ovn-northd-0\" (UID: \"26868249-8749-44ba-9f03-e4691815285d\") " pod="openstack/ovn-northd-0" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.163358 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/26868249-8749-44ba-9f03-e4691815285d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"26868249-8749-44ba-9f03-e4691815285d\") " pod="openstack/ovn-northd-0" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.168094 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/26868249-8749-44ba-9f03-e4691815285d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"26868249-8749-44ba-9f03-e4691815285d\") " pod="openstack/ovn-northd-0" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.168984 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/26868249-8749-44ba-9f03-e4691815285d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"26868249-8749-44ba-9f03-e4691815285d\") " pod="openstack/ovn-northd-0" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.173820 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26868249-8749-44ba-9f03-e4691815285d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"26868249-8749-44ba-9f03-e4691815285d\") " pod="openstack/ovn-northd-0" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.192576 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blh8q\" (UniqueName: \"kubernetes.io/projected/26868249-8749-44ba-9f03-e4691815285d-kube-api-access-blh8q\") pod \"ovn-northd-0\" (UID: \"26868249-8749-44ba-9f03-e4691815285d\") " pod="openstack/ovn-northd-0" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.353760 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.353836 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.353898 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.355017 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"67bf188d9d9b9ad6793313549c12d77b38caf6229dc0633ec340b752f089c942"} pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.355096 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" containerID="cri-o://67bf188d9d9b9ad6793313549c12d77b38caf6229dc0633ec340b752f089c942" gracePeriod=600 Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.360983 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.579146 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4qmzq" event={"ID":"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6","Type":"ContainerStarted","Data":"13c203f8bf1a0bceabe2abc4c7a364e5f715e00e845d10c6ceabe3bf7c434090"} Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.589895 4830 generic.go:334] "Generic (PLEG): container finished" podID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerID="67bf188d9d9b9ad6793313549c12d77b38caf6229dc0633ec340b752f089c942" exitCode=0 Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.589998 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerDied","Data":"67bf188d9d9b9ad6793313549c12d77b38caf6229dc0633ec340b752f089c942"} Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.590048 4830 scope.go:117] "RemoveContainer" containerID="6ae573c7c9ad02ecbf718005230310a2ac720cf9510afe4a2b4cb658fc772187" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.595387 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f37f41b4-3b56-45f9-a368-0f772bcf3002","Type":"ContainerStarted","Data":"5e7b646f4ff6e1b24d55539a3bc21143cce21d3f36a569975a8acf1b82a40d40"} Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.615667 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2ca5d2f1-673e-4173-848a-8d32d33b8bcc","Type":"ContainerStarted","Data":"e774409d73ea3f7c6d1de27e1c877dc73032596ee68ca15941563cc71678e875"} Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.620256 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-4qmzq" podStartSLOduration=7.574109146 podStartE2EDuration="12.620228986s" podCreationTimestamp="2026-01-31 09:22:32 +0000 UTC" firstStartedPulling="2026-01-31 09:22:38.754571878 +0000 UTC m=+1303.247934320" lastFinishedPulling="2026-01-31 09:22:43.800691718 +0000 UTC m=+1308.294054160" observedRunningTime="2026-01-31 09:22:44.601641286 +0000 UTC m=+1309.095003738" watchObservedRunningTime="2026-01-31 09:22:44.620228986 +0000 UTC m=+1309.113591428" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.628933 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wmdpv" event={"ID":"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d","Type":"ContainerStarted","Data":"4472a48a2935c8db74155633a3dcdb219db2cf8c39b94d354056988bd895c681"} Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.629215 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-wmdpv" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.644510 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=25.995528799 podStartE2EDuration="1m7.644489117s" podCreationTimestamp="2026-01-31 09:21:37 +0000 UTC" firstStartedPulling="2026-01-31 09:21:41.819369025 +0000 UTC m=+1246.312731467" lastFinishedPulling="2026-01-31 09:22:23.468329343 +0000 UTC m=+1287.961691785" observedRunningTime="2026-01-31 09:22:44.641898493 +0000 UTC m=+1309.135260935" watchObservedRunningTime="2026-01-31 09:22:44.644489117 +0000 UTC m=+1309.137851559" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 
09:22:44.727245 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-wmdpv" podStartSLOduration=14.727213564 podStartE2EDuration="14.727213564s" podCreationTimestamp="2026-01-31 09:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:22:44.671644001 +0000 UTC m=+1309.165006443" watchObservedRunningTime="2026-01-31 09:22:44.727213564 +0000 UTC m=+1309.220576006" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.729539 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=26.033987916 podStartE2EDuration="1m9.729526279s" podCreationTimestamp="2026-01-31 09:21:35 +0000 UTC" firstStartedPulling="2026-01-31 09:21:38.959688495 +0000 UTC m=+1243.453050937" lastFinishedPulling="2026-01-31 09:22:22.655226858 +0000 UTC m=+1287.148589300" observedRunningTime="2026-01-31 09:22:44.695625124 +0000 UTC m=+1309.188987586" watchObservedRunningTime="2026-01-31 09:22:44.729526279 +0000 UTC m=+1309.222888731" Jan 31 09:22:44 crc kubenswrapper[4830]: I0131 09:22:44.971454 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 31 09:22:45 crc kubenswrapper[4830]: I0131 09:22:45.643581 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"26868249-8749-44ba-9f03-e4691815285d","Type":"ContainerStarted","Data":"df3ab56f6ee7e5630ac865b16e012b9d68c110e00f427e10d4fc705281f7a8d5"} Jan 31 09:22:45 crc kubenswrapper[4830]: I0131 09:22:45.649261 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerStarted","Data":"a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0"} Jan 31 09:22:46 crc kubenswrapper[4830]: I0131 09:22:46.661632 4830 generic.go:334] "Generic (PLEG): container finished" podID="68109d40-9af0-4c37-bf02-7b4744dbab5f" containerID="39dcfcca13639143aaebae3cb77d40e361f67c6338ad727f1999e2a36e3ffabd" exitCode=0 Jan 31 09:22:46 crc kubenswrapper[4830]: I0131 09:22:46.661795 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"68109d40-9af0-4c37-bf02-7b4744dbab5f","Type":"ContainerDied","Data":"39dcfcca13639143aaebae3cb77d40e361f67c6338ad727f1999e2a36e3ffabd"} Jan 31 09:22:47 crc kubenswrapper[4830]: I0131 09:22:47.569535 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 31 09:22:47 crc kubenswrapper[4830]: I0131 09:22:47.570064 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 31 09:22:47 crc kubenswrapper[4830]: I0131 09:22:47.690616 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"26868249-8749-44ba-9f03-e4691815285d","Type":"ContainerStarted","Data":"d2bed481cd6e945faa5e19d2a8ca5aaa128e3aadf0c4b87d395f77210f1c0435"} Jan 31 09:22:47 crc kubenswrapper[4830]: I0131 09:22:47.690688 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"26868249-8749-44ba-9f03-e4691815285d","Type":"ContainerStarted","Data":"3a4e9c2dfe9be1361395a0d302f3ffc47f050595a3179f95b955d7420707e9ff"} Jan 31 09:22:47 crc kubenswrapper[4830]: I0131 09:22:47.690989 4830 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 31 09:22:47 crc kubenswrapper[4830]: I0131 09:22:47.720905 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.225404895 podStartE2EDuration="4.720881441s" podCreationTimestamp="2026-01-31 09:22:43 +0000 UTC" firstStartedPulling="2026-01-31 09:22:44.98613566 +0000 UTC m=+1309.479498102" lastFinishedPulling="2026-01-31 09:22:46.481612166 +0000 UTC m=+1310.974974648" observedRunningTime="2026-01-31 09:22:47.715852648 +0000 UTC m=+1312.209215100" watchObservedRunningTime="2026-01-31 09:22:47.720881441 +0000 UTC m=+1312.214243883" Jan 31 09:22:48 crc kubenswrapper[4830]: I0131 09:22:48.081184 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1023f27a-9c1d-4818-a3f5-94946296ae46-etc-swift\") pod \"swift-storage-0\" (UID: \"1023f27a-9c1d-4818-a3f5-94946296ae46\") " pod="openstack/swift-storage-0" Jan 31 09:22:48 crc kubenswrapper[4830]: E0131 09:22:48.081798 4830 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:22:48 crc kubenswrapper[4830]: E0131 09:22:48.081817 4830 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 31 09:22:48 crc kubenswrapper[4830]: E0131 09:22:48.081875 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1023f27a-9c1d-4818-a3f5-94946296ae46-etc-swift podName:1023f27a-9c1d-4818-a3f5-94946296ae46 nodeName:}" failed. No retries permitted until 2026-01-31 09:23:04.081855655 +0000 UTC m=+1328.575218097 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1023f27a-9c1d-4818-a3f5-94946296ae46-etc-swift") pod "swift-storage-0" (UID: "1023f27a-9c1d-4818-a3f5-94946296ae46") : configmap "swift-ring-files" not found Jan 31 09:22:48 crc kubenswrapper[4830]: I0131 09:22:48.535994 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" Jan 31 09:22:49 crc kubenswrapper[4830]: I0131 09:22:49.035014 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 31 09:22:49 crc kubenswrapper[4830]: I0131 09:22:49.035567 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 31 09:22:50 crc kubenswrapper[4830]: I0131 09:22:50.020622 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 31 09:22:50 crc kubenswrapper[4830]: I0131 09:22:50.144024 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 31 09:22:50 crc kubenswrapper[4830]: I0131 09:22:50.996742 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-xvrr7"] Jan 31 09:22:50 crc kubenswrapper[4830]: I0131 09:22:50.998487 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-xvrr7" Jan 31 09:22:51 crc kubenswrapper[4830]: I0131 09:22:51.040787 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-xvrr7"] Jan 31 09:22:51 crc kubenswrapper[4830]: I0131 09:22:51.064132 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-571a-account-create-update-95fgz"] Jan 31 09:22:51 crc kubenswrapper[4830]: I0131 09:22:51.066161 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-571a-account-create-update-95fgz" Jan 31 09:22:51 crc kubenswrapper[4830]: I0131 09:22:51.072019 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Jan 31 09:22:51 crc kubenswrapper[4830]: I0131 09:22:51.077128 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c5176e5-9abf-4ae4-b4da-4b50704cb0a4-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-xvrr7\" (UID: \"8c5176e5-9abf-4ae4-b4da-4b50704cb0a4\") " pod="openstack/mysqld-exporter-openstack-db-create-xvrr7" Jan 31 09:22:51 crc kubenswrapper[4830]: I0131 09:22:51.077702 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm7zq\" (UniqueName: \"kubernetes.io/projected/8c5176e5-9abf-4ae4-b4da-4b50704cb0a4-kube-api-access-nm7zq\") pod \"mysqld-exporter-openstack-db-create-xvrr7\" (UID: \"8c5176e5-9abf-4ae4-b4da-4b50704cb0a4\") " pod="openstack/mysqld-exporter-openstack-db-create-xvrr7" Jan 31 09:22:51 crc kubenswrapper[4830]: I0131 09:22:51.080323 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-571a-account-create-update-95fgz"] Jan 31 09:22:51 crc kubenswrapper[4830]: I0131 09:22:51.174504 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 31 09:22:51 crc kubenswrapper[4830]: I0131 09:22:51.179879 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c9dab0c-38e9-435b-8d48-9dfdaa4af87b-operator-scripts\") pod \"mysqld-exporter-571a-account-create-update-95fgz\" (UID: \"8c9dab0c-38e9-435b-8d48-9dfdaa4af87b\") " pod="openstack/mysqld-exporter-571a-account-create-update-95fgz" Jan 31 09:22:51 crc kubenswrapper[4830]: I0131 09:22:51.179964 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm7zq\" (UniqueName: \"kubernetes.io/projected/8c5176e5-9abf-4ae4-b4da-4b50704cb0a4-kube-api-access-nm7zq\") pod \"mysqld-exporter-openstack-db-create-xvrr7\" (UID: \"8c5176e5-9abf-4ae4-b4da-4b50704cb0a4\") " pod="openstack/mysqld-exporter-openstack-db-create-xvrr7" Jan 31 09:22:51 crc kubenswrapper[4830]: I0131 09:22:51.180122 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c5176e5-9abf-4ae4-b4da-4b50704cb0a4-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-xvrr7\" (UID: \"8c5176e5-9abf-4ae4-b4da-4b50704cb0a4\") " pod="openstack/mysqld-exporter-openstack-db-create-xvrr7" Jan 31 09:22:51 crc kubenswrapper[4830]: I0131 09:22:51.180293 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-btgzh\" (UniqueName: \"kubernetes.io/projected/8c9dab0c-38e9-435b-8d48-9dfdaa4af87b-kube-api-access-btgzh\") pod \"mysqld-exporter-571a-account-create-update-95fgz\" (UID: \"8c9dab0c-38e9-435b-8d48-9dfdaa4af87b\") " pod="openstack/mysqld-exporter-571a-account-create-update-95fgz" Jan 31 09:22:51 crc kubenswrapper[4830]: I0131 09:22:51.181216 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c5176e5-9abf-4ae4-b4da-4b50704cb0a4-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-xvrr7\" (UID: \"8c5176e5-9abf-4ae4-b4da-4b50704cb0a4\") " pod="openstack/mysqld-exporter-openstack-db-create-xvrr7" Jan 31 09:22:51 crc kubenswrapper[4830]: I0131 09:22:51.208339 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm7zq\" (UniqueName: \"kubernetes.io/projected/8c5176e5-9abf-4ae4-b4da-4b50704cb0a4-kube-api-access-nm7zq\") pod \"mysqld-exporter-openstack-db-create-xvrr7\" (UID: \"8c5176e5-9abf-4ae4-b4da-4b50704cb0a4\") " pod="openstack/mysqld-exporter-openstack-db-create-xvrr7" Jan 31 09:22:51 crc kubenswrapper[4830]: I0131 09:22:51.285460 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btgzh\" (UniqueName: \"kubernetes.io/projected/8c9dab0c-38e9-435b-8d48-9dfdaa4af87b-kube-api-access-btgzh\") pod \"mysqld-exporter-571a-account-create-update-95fgz\" (UID: \"8c9dab0c-38e9-435b-8d48-9dfdaa4af87b\") " pod="openstack/mysqld-exporter-571a-account-create-update-95fgz" Jan 31 09:22:51 crc kubenswrapper[4830]: I0131 09:22:51.285849 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c9dab0c-38e9-435b-8d48-9dfdaa4af87b-operator-scripts\") pod \"mysqld-exporter-571a-account-create-update-95fgz\" (UID: \"8c9dab0c-38e9-435b-8d48-9dfdaa4af87b\") " pod="openstack/mysqld-exporter-571a-account-create-update-95fgz" Jan 31 09:22:51 crc kubenswrapper[4830]: I0131 09:22:51.286904 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c9dab0c-38e9-435b-8d48-9dfdaa4af87b-operator-scripts\") pod \"mysqld-exporter-571a-account-create-update-95fgz\" (UID: \"8c9dab0c-38e9-435b-8d48-9dfdaa4af87b\") " pod="openstack/mysqld-exporter-571a-account-create-update-95fgz" Jan 31 09:22:51 crc kubenswrapper[4830]: I0131 09:22:51.303065 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-wmdpv" Jan 31 09:22:51 crc kubenswrapper[4830]: I0131 09:22:51.306506 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btgzh\" (UniqueName: \"kubernetes.io/projected/8c9dab0c-38e9-435b-8d48-9dfdaa4af87b-kube-api-access-btgzh\") pod \"mysqld-exporter-571a-account-create-update-95fgz\" (UID: \"8c9dab0c-38e9-435b-8d48-9dfdaa4af87b\") " pod="openstack/mysqld-exporter-571a-account-create-update-95fgz" Jan 31 09:22:51 crc kubenswrapper[4830]: I0131 09:22:51.331937 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-xvrr7" Jan 31 09:22:51 crc kubenswrapper[4830]: I0131 09:22:51.391634 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-ck5gn"] Jan 31 09:22:51 crc kubenswrapper[4830]: I0131 09:22:51.392245 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" podUID="9644fbf3-9d54-43de-992f-7bccf944c31f" containerName="dnsmasq-dns" containerID="cri-o://3745a68a5912defd6e394798fac7f86ac3fa5f8bec603680a7a6fee33012ce78" gracePeriod=10 Jan 31 09:22:51 crc kubenswrapper[4830]: I0131 09:22:51.392660 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-571a-account-create-update-95fgz" Jan 31 09:22:51 crc kubenswrapper[4830]: I0131 09:22:51.757415 4830 generic.go:334] "Generic (PLEG): container finished" podID="9644fbf3-9d54-43de-992f-7bccf944c31f" containerID="3745a68a5912defd6e394798fac7f86ac3fa5f8bec603680a7a6fee33012ce78" exitCode=0 Jan 31 09:22:51 crc kubenswrapper[4830]: I0131 09:22:51.757498 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" event={"ID":"9644fbf3-9d54-43de-992f-7bccf944c31f","Type":"ContainerDied","Data":"3745a68a5912defd6e394798fac7f86ac3fa5f8bec603680a7a6fee33012ce78"} Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.008741 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.366438 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.376061 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-571a-account-create-update-95fgz"] Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.442945 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9644fbf3-9d54-43de-992f-7bccf944c31f-dns-svc\") pod \"9644fbf3-9d54-43de-992f-7bccf944c31f\" (UID: \"9644fbf3-9d54-43de-992f-7bccf944c31f\") " Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.443083 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9644fbf3-9d54-43de-992f-7bccf944c31f-ovsdbserver-sb\") pod \"9644fbf3-9d54-43de-992f-7bccf944c31f\" (UID: \"9644fbf3-9d54-43de-992f-7bccf944c31f\") " Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.443282 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9644fbf3-9d54-43de-992f-7bccf944c31f-ovsdbserver-nb\") pod \"9644fbf3-9d54-43de-992f-7bccf944c31f\" (UID: \"9644fbf3-9d54-43de-992f-7bccf944c31f\") " Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.443363 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9644fbf3-9d54-43de-992f-7bccf944c31f-config\") pod \"9644fbf3-9d54-43de-992f-7bccf944c31f\" (UID: \"9644fbf3-9d54-43de-992f-7bccf944c31f\") " Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.443403 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjs6z\" (UniqueName: 
\"kubernetes.io/projected/9644fbf3-9d54-43de-992f-7bccf944c31f-kube-api-access-fjs6z\") pod \"9644fbf3-9d54-43de-992f-7bccf944c31f\" (UID: \"9644fbf3-9d54-43de-992f-7bccf944c31f\") " Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.514405 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9644fbf3-9d54-43de-992f-7bccf944c31f-kube-api-access-fjs6z" (OuterVolumeSpecName: "kube-api-access-fjs6z") pod "9644fbf3-9d54-43de-992f-7bccf944c31f" (UID: "9644fbf3-9d54-43de-992f-7bccf944c31f"). InnerVolumeSpecName "kube-api-access-fjs6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.517114 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-xvrr7"] Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.567753 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjs6z\" (UniqueName: \"kubernetes.io/projected/9644fbf3-9d54-43de-992f-7bccf944c31f-kube-api-access-fjs6z\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.640010 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.667947 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9644fbf3-9d54-43de-992f-7bccf944c31f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9644fbf3-9d54-43de-992f-7bccf944c31f" (UID: "9644fbf3-9d54-43de-992f-7bccf944c31f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.670554 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9644fbf3-9d54-43de-992f-7bccf944c31f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.678954 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9644fbf3-9d54-43de-992f-7bccf944c31f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9644fbf3-9d54-43de-992f-7bccf944c31f" (UID: "9644fbf3-9d54-43de-992f-7bccf944c31f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.731050 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9644fbf3-9d54-43de-992f-7bccf944c31f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9644fbf3-9d54-43de-992f-7bccf944c31f" (UID: "9644fbf3-9d54-43de-992f-7bccf944c31f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.744677 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9644fbf3-9d54-43de-992f-7bccf944c31f-config" (OuterVolumeSpecName: "config") pod "9644fbf3-9d54-43de-992f-7bccf944c31f" (UID: "9644fbf3-9d54-43de-992f-7bccf944c31f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.773121 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9644fbf3-9d54-43de-992f-7bccf944c31f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.773155 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9644fbf3-9d54-43de-992f-7bccf944c31f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.773165 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9644fbf3-9d54-43de-992f-7bccf944c31f-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.779204 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-571a-account-create-update-95fgz" event={"ID":"8c9dab0c-38e9-435b-8d48-9dfdaa4af87b","Type":"ContainerStarted","Data":"ca98657bdff7be7640cc76109b634f62faae377876b3ddfce4f713a241a82f25"} Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.791525 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" event={"ID":"9644fbf3-9d54-43de-992f-7bccf944c31f","Type":"ContainerDied","Data":"e30f4d3e016320cfb76856e779a7b17c29b6cc7ad0e9b23135ba943288b6cd30"} Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.791606 4830 scope.go:117] "RemoveContainer" containerID="3745a68a5912defd6e394798fac7f86ac3fa5f8bec603680a7a6fee33012ce78" Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.792082 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-ck5gn" Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.807622 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-xvrr7" event={"ID":"8c5176e5-9abf-4ae4-b4da-4b50704cb0a4","Type":"ContainerStarted","Data":"af92362ad5d7556310aca92f1d5ef3b15b2613123ed9aceec58cedf80deecb1b"} Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.846809 4830 scope.go:117] "RemoveContainer" containerID="95558a00c0b065522e42a6dc000df9142d743e6d9868ed0f83b3796ce405b1af" Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.847040 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-ck5gn"] Jan 31 09:22:52 crc kubenswrapper[4830]: I0131 09:22:52.862181 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-ck5gn"] Jan 31 09:22:54 crc kubenswrapper[4830]: I0131 09:22:54.287092 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9644fbf3-9d54-43de-992f-7bccf944c31f" path="/var/lib/kubelet/pods/9644fbf3-9d54-43de-992f-7bccf944c31f/volumes" Jan 31 09:22:54 crc kubenswrapper[4830]: I0131 09:22:54.503374 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ps27t" podUID="dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73" containerName="ovn-controller" probeResult="failure" output=< Jan 31 09:22:54 crc kubenswrapper[4830]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 31 09:22:54 crc kubenswrapper[4830]: > Jan 31 09:22:54 crc kubenswrapper[4830]: I0131 09:22:54.838795 4830 generic.go:334] "Generic (PLEG): container finished" podID="8c9dab0c-38e9-435b-8d48-9dfdaa4af87b" containerID="1edad15e2be8ae446fa14322acc01be55a42b8ced2bea799c4d280293645b05a" exitCode=0 Jan 31 09:22:54 crc kubenswrapper[4830]: I0131 09:22:54.839030 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-571a-account-create-update-95fgz" event={"ID":"8c9dab0c-38e9-435b-8d48-9dfdaa4af87b","Type":"ContainerDied","Data":"1edad15e2be8ae446fa14322acc01be55a42b8ced2bea799c4d280293645b05a"} Jan 31 09:22:54 crc kubenswrapper[4830]: I0131 09:22:54.843467 4830 generic.go:334] "Generic (PLEG): container finished" podID="8c5176e5-9abf-4ae4-b4da-4b50704cb0a4" containerID="8d1c4eee78341f44f679b500f0d207a5e474b317db60cd624ef0f401abd5b231" exitCode=0 Jan 31 09:22:54 crc kubenswrapper[4830]: I0131 09:22:54.843667 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-xvrr7" event={"ID":"8c5176e5-9abf-4ae4-b4da-4b50704cb0a4","Type":"ContainerDied","Data":"8d1c4eee78341f44f679b500f0d207a5e474b317db60cd624ef0f401abd5b231"} Jan 31 09:22:54 crc kubenswrapper[4830]: I0131 09:22:54.849461 4830 generic.go:334] "Generic (PLEG): container finished" podID="c888f2ed-bb7b-4ee1-a17d-2b656f9464b6" containerID="13c203f8bf1a0bceabe2abc4c7a364e5f715e00e845d10c6ceabe3bf7c434090" exitCode=0 Jan 31 09:22:54 crc kubenswrapper[4830]: I0131 09:22:54.849545 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4qmzq" event={"ID":"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6","Type":"ContainerDied","Data":"13c203f8bf1a0bceabe2abc4c7a364e5f715e00e845d10c6ceabe3bf7c434090"} Jan 31 09:22:55 crc kubenswrapper[4830]: I0131 09:22:55.430231 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-vzsv5"] Jan 31 09:22:55 crc kubenswrapper[4830]: E0131 
09:22:55.430992 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9644fbf3-9d54-43de-992f-7bccf944c31f" containerName="init" Jan 31 09:22:55 crc kubenswrapper[4830]: I0131 09:22:55.431013 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="9644fbf3-9d54-43de-992f-7bccf944c31f" containerName="init" Jan 31 09:22:55 crc kubenswrapper[4830]: E0131 09:22:55.431037 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9644fbf3-9d54-43de-992f-7bccf944c31f" containerName="dnsmasq-dns" Jan 31 09:22:55 crc kubenswrapper[4830]: I0131 09:22:55.431047 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="9644fbf3-9d54-43de-992f-7bccf944c31f" containerName="dnsmasq-dns" Jan 31 09:22:55 crc kubenswrapper[4830]: I0131 09:22:55.431308 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="9644fbf3-9d54-43de-992f-7bccf944c31f" containerName="dnsmasq-dns" Jan 31 09:22:55 crc kubenswrapper[4830]: I0131 09:22:55.432353 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vzsv5" Jan 31 09:22:55 crc kubenswrapper[4830]: I0131 09:22:55.441324 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 31 09:22:55 crc kubenswrapper[4830]: I0131 09:22:55.443295 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vzsv5"] Jan 31 09:22:55 crc kubenswrapper[4830]: I0131 09:22:55.555432 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ee3d5b9-18cf-49db-95a9-e2dfa666e15e-operator-scripts\") pod \"root-account-create-update-vzsv5\" (UID: \"7ee3d5b9-18cf-49db-95a9-e2dfa666e15e\") " pod="openstack/root-account-create-update-vzsv5" Jan 31 09:22:55 crc kubenswrapper[4830]: I0131 09:22:55.555518 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq7gn\" (UniqueName: \"kubernetes.io/projected/7ee3d5b9-18cf-49db-95a9-e2dfa666e15e-kube-api-access-tq7gn\") pod \"root-account-create-update-vzsv5\" (UID: \"7ee3d5b9-18cf-49db-95a9-e2dfa666e15e\") " pod="openstack/root-account-create-update-vzsv5" Jan 31 09:22:55 crc kubenswrapper[4830]: I0131 09:22:55.657886 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ee3d5b9-18cf-49db-95a9-e2dfa666e15e-operator-scripts\") pod \"root-account-create-update-vzsv5\" (UID: \"7ee3d5b9-18cf-49db-95a9-e2dfa666e15e\") " pod="openstack/root-account-create-update-vzsv5" Jan 31 09:22:55 crc kubenswrapper[4830]: I0131 09:22:55.657972 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq7gn\" (UniqueName: \"kubernetes.io/projected/7ee3d5b9-18cf-49db-95a9-e2dfa666e15e-kube-api-access-tq7gn\") pod \"root-account-create-update-vzsv5\" (UID: \"7ee3d5b9-18cf-49db-95a9-e2dfa666e15e\") " pod="openstack/root-account-create-update-vzsv5" Jan 31 09:22:55 crc kubenswrapper[4830]: I0131 09:22:55.658788 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ee3d5b9-18cf-49db-95a9-e2dfa666e15e-operator-scripts\") pod \"root-account-create-update-vzsv5\" (UID: \"7ee3d5b9-18cf-49db-95a9-e2dfa666e15e\") " pod="openstack/root-account-create-update-vzsv5" Jan 31 09:22:55 crc kubenswrapper[4830]: I0131 
09:22:55.682555 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq7gn\" (UniqueName: \"kubernetes.io/projected/7ee3d5b9-18cf-49db-95a9-e2dfa666e15e-kube-api-access-tq7gn\") pod \"root-account-create-update-vzsv5\" (UID: \"7ee3d5b9-18cf-49db-95a9-e2dfa666e15e\") " pod="openstack/root-account-create-update-vzsv5" Jan 31 09:22:55 crc kubenswrapper[4830]: I0131 09:22:55.791115 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vzsv5" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.229292 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-wfp8z"] Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.231370 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wfp8z" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.248562 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wfp8z"] Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.353685 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66rn6\" (UniqueName: \"kubernetes.io/projected/2830665c-1d23-4c36-8324-7362068ae08f-kube-api-access-66rn6\") pod \"keystone-db-create-wfp8z\" (UID: \"2830665c-1d23-4c36-8324-7362068ae08f\") " pod="openstack/keystone-db-create-wfp8z" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.353926 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2830665c-1d23-4c36-8324-7362068ae08f-operator-scripts\") pod \"keystone-db-create-wfp8z\" (UID: \"2830665c-1d23-4c36-8324-7362068ae08f\") " pod="openstack/keystone-db-create-wfp8z" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.359608 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-1d60-account-create-update-vbk6p"] Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.364561 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-1d60-account-create-update-vbk6p" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.367132 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.376365 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1d60-account-create-update-vbk6p"] Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.456464 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8af6a38f-c8ba-464d-acd5-417848530657-operator-scripts\") pod \"keystone-1d60-account-create-update-vbk6p\" (UID: \"8af6a38f-c8ba-464d-acd5-417848530657\") " pod="openstack/keystone-1d60-account-create-update-vbk6p" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.456555 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66rn6\" (UniqueName: \"kubernetes.io/projected/2830665c-1d23-4c36-8324-7362068ae08f-kube-api-access-66rn6\") pod \"keystone-db-create-wfp8z\" (UID: \"2830665c-1d23-4c36-8324-7362068ae08f\") " pod="openstack/keystone-db-create-wfp8z" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.456615 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8krvx\" (UniqueName: \"kubernetes.io/projected/8af6a38f-c8ba-464d-acd5-417848530657-kube-api-access-8krvx\") pod \"keystone-1d60-account-create-update-vbk6p\" (UID: \"8af6a38f-c8ba-464d-acd5-417848530657\") " pod="openstack/keystone-1d60-account-create-update-vbk6p" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.456694 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2830665c-1d23-4c36-8324-7362068ae08f-operator-scripts\") pod \"keystone-db-create-wfp8z\" (UID: \"2830665c-1d23-4c36-8324-7362068ae08f\") " pod="openstack/keystone-db-create-wfp8z" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.457675 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2830665c-1d23-4c36-8324-7362068ae08f-operator-scripts\") pod \"keystone-db-create-wfp8z\" (UID: \"2830665c-1d23-4c36-8324-7362068ae08f\") " pod="openstack/keystone-db-create-wfp8z" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.481529 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66rn6\" (UniqueName: \"kubernetes.io/projected/2830665c-1d23-4c36-8324-7362068ae08f-kube-api-access-66rn6\") pod \"keystone-db-create-wfp8z\" (UID: \"2830665c-1d23-4c36-8324-7362068ae08f\") " pod="openstack/keystone-db-create-wfp8z" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.551821 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-wfp8z" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.561186 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8krvx\" (UniqueName: \"kubernetes.io/projected/8af6a38f-c8ba-464d-acd5-417848530657-kube-api-access-8krvx\") pod \"keystone-1d60-account-create-update-vbk6p\" (UID: \"8af6a38f-c8ba-464d-acd5-417848530657\") " pod="openstack/keystone-1d60-account-create-update-vbk6p" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.561457 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8af6a38f-c8ba-464d-acd5-417848530657-operator-scripts\") pod \"keystone-1d60-account-create-update-vbk6p\" (UID: \"8af6a38f-c8ba-464d-acd5-417848530657\") " pod="openstack/keystone-1d60-account-create-update-vbk6p" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.562370 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8af6a38f-c8ba-464d-acd5-417848530657-operator-scripts\") pod \"keystone-1d60-account-create-update-vbk6p\" (UID: \"8af6a38f-c8ba-464d-acd5-417848530657\") " pod="openstack/keystone-1d60-account-create-update-vbk6p" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.575334 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-56987"] Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.577259 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-56987" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.579667 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8krvx\" (UniqueName: \"kubernetes.io/projected/8af6a38f-c8ba-464d-acd5-417848530657-kube-api-access-8krvx\") pod \"keystone-1d60-account-create-update-vbk6p\" (UID: \"8af6a38f-c8ba-464d-acd5-417848530657\") " pod="openstack/keystone-1d60-account-create-update-vbk6p" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.586098 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-56987"] Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.664907 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5-operator-scripts\") pod \"placement-db-create-56987\" (UID: \"cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5\") " pod="openstack/placement-db-create-56987" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.665183 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz45c\" (UniqueName: \"kubernetes.io/projected/cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5-kube-api-access-fz45c\") pod \"placement-db-create-56987\" (UID: \"cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5\") " pod="openstack/placement-db-create-56987" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.682899 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1d60-account-create-update-vbk6p" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.692998 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-04a9-account-create-update-xswbl"] Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.694683 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-04a9-account-create-update-xswbl" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.697279 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.707868 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-04a9-account-create-update-xswbl"] Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.767280 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5-operator-scripts\") pod \"placement-db-create-56987\" (UID: \"cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5\") " pod="openstack/placement-db-create-56987" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.768668 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5-operator-scripts\") pod \"placement-db-create-56987\" (UID: \"cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5\") " pod="openstack/placement-db-create-56987" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.768934 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq7zz\" (UniqueName: \"kubernetes.io/projected/2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c-kube-api-access-rq7zz\") pod \"placement-04a9-account-create-update-xswbl\" (UID: \"2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c\") " pod="openstack/placement-04a9-account-create-update-xswbl" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.769213 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz45c\" (UniqueName: \"kubernetes.io/projected/cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5-kube-api-access-fz45c\") pod \"placement-db-create-56987\" (UID: \"cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5\") " pod="openstack/placement-db-create-56987" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.771828 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c-operator-scripts\") pod \"placement-04a9-account-create-update-xswbl\" (UID: \"2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c\") " pod="openstack/placement-04a9-account-create-update-xswbl" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.795434 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz45c\" (UniqueName: \"kubernetes.io/projected/cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5-kube-api-access-fz45c\") pod \"placement-db-create-56987\" (UID: \"cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5\") " pod="openstack/placement-db-create-56987" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.874400 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq7zz\" (UniqueName: \"kubernetes.io/projected/2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c-kube-api-access-rq7zz\") pod \"placement-04a9-account-create-update-xswbl\" (UID: \"2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c\") " pod="openstack/placement-04a9-account-create-update-xswbl" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.874865 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c-operator-scripts\") pod \"placement-04a9-account-create-update-xswbl\" (UID: \"2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c\") " pod="openstack/placement-04a9-account-create-update-xswbl" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.875683 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c-operator-scripts\") pod \"placement-04a9-account-create-update-xswbl\" (UID: \"2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c\") " pod="openstack/placement-04a9-account-create-update-xswbl" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.894856 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq7zz\" (UniqueName: \"kubernetes.io/projected/2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c-kube-api-access-rq7zz\") pod \"placement-04a9-account-create-update-xswbl\" (UID: \"2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c\") " pod="openstack/placement-04a9-account-create-update-xswbl" Jan 31 09:22:58 crc kubenswrapper[4830]: I0131 09:22:58.959883 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-56987" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.035529 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-sqm2h"] Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.037261 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-sqm2h" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.059684 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-sqm2h"] Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.086136 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-04a9-account-create-update-xswbl" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.109594 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzhsb\" (UniqueName: \"kubernetes.io/projected/b4141b8b-513a-4210-9abd-bfba363d6986-kube-api-access-bzhsb\") pod \"glance-db-create-sqm2h\" (UID: \"b4141b8b-513a-4210-9abd-bfba363d6986\") " pod="openstack/glance-db-create-sqm2h" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.109905 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4141b8b-513a-4210-9abd-bfba363d6986-operator-scripts\") pod \"glance-db-create-sqm2h\" (UID: \"b4141b8b-513a-4210-9abd-bfba363d6986\") " pod="openstack/glance-db-create-sqm2h" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.153475 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a9ea-account-create-update-cvwgh"] Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.170608 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a9ea-account-create-update-cvwgh" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.172176 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a9ea-account-create-update-cvwgh"] Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.178431 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.226056 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzhsb\" (UniqueName: \"kubernetes.io/projected/b4141b8b-513a-4210-9abd-bfba363d6986-kube-api-access-bzhsb\") pod \"glance-db-create-sqm2h\" (UID: \"b4141b8b-513a-4210-9abd-bfba363d6986\") " pod="openstack/glance-db-create-sqm2h" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.226556 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4141b8b-513a-4210-9abd-bfba363d6986-operator-scripts\") pod \"glance-db-create-sqm2h\" (UID: \"b4141b8b-513a-4210-9abd-bfba363d6986\") " pod="openstack/glance-db-create-sqm2h" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.227428 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4141b8b-513a-4210-9abd-bfba363d6986-operator-scripts\") pod \"glance-db-create-sqm2h\" (UID: \"b4141b8b-513a-4210-9abd-bfba363d6986\") " pod="openstack/glance-db-create-sqm2h" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.258590 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzhsb\" (UniqueName: \"kubernetes.io/projected/b4141b8b-513a-4210-9abd-bfba363d6986-kube-api-access-bzhsb\") pod \"glance-db-create-sqm2h\" (UID: \"b4141b8b-513a-4210-9abd-bfba363d6986\") " pod="openstack/glance-db-create-sqm2h" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.328874 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca85abf4-a6ba-4080-a544-fcce2de88b2b-operator-scripts\") pod \"glance-a9ea-account-create-update-cvwgh\" (UID: \"ca85abf4-a6ba-4080-a544-fcce2de88b2b\") " pod="openstack/glance-a9ea-account-create-update-cvwgh" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.329403 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjw56\" (UniqueName: \"kubernetes.io/projected/ca85abf4-a6ba-4080-a544-fcce2de88b2b-kube-api-access-hjw56\") pod \"glance-a9ea-account-create-update-cvwgh\" (UID: \"ca85abf4-a6ba-4080-a544-fcce2de88b2b\") " pod="openstack/glance-a9ea-account-create-update-cvwgh" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.362113 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-sqm2h" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.432010 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjw56\" (UniqueName: \"kubernetes.io/projected/ca85abf4-a6ba-4080-a544-fcce2de88b2b-kube-api-access-hjw56\") pod \"glance-a9ea-account-create-update-cvwgh\" (UID: \"ca85abf4-a6ba-4080-a544-fcce2de88b2b\") " pod="openstack/glance-a9ea-account-create-update-cvwgh" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.432435 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca85abf4-a6ba-4080-a544-fcce2de88b2b-operator-scripts\") pod \"glance-a9ea-account-create-update-cvwgh\" (UID: \"ca85abf4-a6ba-4080-a544-fcce2de88b2b\") " pod="openstack/glance-a9ea-account-create-update-cvwgh" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.433709 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca85abf4-a6ba-4080-a544-fcce2de88b2b-operator-scripts\") pod \"glance-a9ea-account-create-update-cvwgh\" (UID: \"ca85abf4-a6ba-4080-a544-fcce2de88b2b\") " pod="openstack/glance-a9ea-account-create-update-cvwgh" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.459439 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjw56\" (UniqueName: \"kubernetes.io/projected/ca85abf4-a6ba-4080-a544-fcce2de88b2b-kube-api-access-hjw56\") pod \"glance-a9ea-account-create-update-cvwgh\" (UID: \"ca85abf4-a6ba-4080-a544-fcce2de88b2b\") " pod="openstack/glance-a9ea-account-create-update-cvwgh" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.494419 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a9ea-account-create-update-cvwgh" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.569676 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ps27t" podUID="dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73" containerName="ovn-controller" probeResult="failure" output=< Jan 31 09:22:59 crc kubenswrapper[4830]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 31 09:22:59 crc kubenswrapper[4830]: > Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.814228 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-571a-account-create-update-95fgz" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.820040 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4qmzq" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.835887 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-xvrr7" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.935298 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-571a-account-create-update-95fgz" event={"ID":"8c9dab0c-38e9-435b-8d48-9dfdaa4af87b","Type":"ContainerDied","Data":"ca98657bdff7be7640cc76109b634f62faae377876b3ddfce4f713a241a82f25"} Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.935346 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca98657bdff7be7640cc76109b634f62faae377876b3ddfce4f713a241a82f25" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.935311 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-571a-account-create-update-95fgz" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.938894 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-xvrr7" event={"ID":"8c5176e5-9abf-4ae4-b4da-4b50704cb0a4","Type":"ContainerDied","Data":"af92362ad5d7556310aca92f1d5ef3b15b2613123ed9aceec58cedf80deecb1b"} Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.938936 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af92362ad5d7556310aca92f1d5ef3b15b2613123ed9aceec58cedf80deecb1b" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.938976 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-xvrr7" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.945563 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c9dab0c-38e9-435b-8d48-9dfdaa4af87b-operator-scripts\") pod \"8c9dab0c-38e9-435b-8d48-9dfdaa4af87b\" (UID: \"8c9dab0c-38e9-435b-8d48-9dfdaa4af87b\") " Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.946032 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4qmzq" event={"ID":"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6","Type":"ContainerDied","Data":"9a4cb37c97a90840ebc2a05bc842ee0e9e18cab468c8775bd3c4993d2bcfc35b"} Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.946091 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a4cb37c97a90840ebc2a05bc842ee0e9e18cab468c8775bd3c4993d2bcfc35b" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.946106 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4qmzq" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.946114 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6zv9\" (UniqueName: \"kubernetes.io/projected/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-kube-api-access-b6zv9\") pod \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.946420 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c5176e5-9abf-4ae4-b4da-4b50704cb0a4-operator-scripts\") pod \"8c5176e5-9abf-4ae4-b4da-4b50704cb0a4\" (UID: \"8c5176e5-9abf-4ae4-b4da-4b50704cb0a4\") " Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.946755 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c9dab0c-38e9-435b-8d48-9dfdaa4af87b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c9dab0c-38e9-435b-8d48-9dfdaa4af87b" (UID: "8c9dab0c-38e9-435b-8d48-9dfdaa4af87b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.950526 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c5176e5-9abf-4ae4-b4da-4b50704cb0a4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c5176e5-9abf-4ae4-b4da-4b50704cb0a4" (UID: "8c5176e5-9abf-4ae4-b4da-4b50704cb0a4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.950645 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-scripts\") pod \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.951448 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-swiftconf\") pod \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.951598 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-combined-ca-bundle\") pod \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.951658 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-ring-data-devices\") pod \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.951701 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-dispersionconf\") pod \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.951814 4830 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-nm7zq\" (UniqueName: \"kubernetes.io/projected/8c5176e5-9abf-4ae4-b4da-4b50704cb0a4-kube-api-access-nm7zq\") pod \"8c5176e5-9abf-4ae4-b4da-4b50704cb0a4\" (UID: \"8c5176e5-9abf-4ae4-b4da-4b50704cb0a4\") " Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.951881 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-etc-swift\") pod \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\" (UID: \"c888f2ed-bb7b-4ee1-a17d-2b656f9464b6\") " Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.951929 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btgzh\" (UniqueName: \"kubernetes.io/projected/8c9dab0c-38e9-435b-8d48-9dfdaa4af87b-kube-api-access-btgzh\") pod \"8c9dab0c-38e9-435b-8d48-9dfdaa4af87b\" (UID: \"8c9dab0c-38e9-435b-8d48-9dfdaa4af87b\") " Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.953262 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "c888f2ed-bb7b-4ee1-a17d-2b656f9464b6" (UID: "c888f2ed-bb7b-4ee1-a17d-2b656f9464b6"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.953336 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c5176e5-9abf-4ae4-b4da-4b50704cb0a4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.953367 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c9dab0c-38e9-435b-8d48-9dfdaa4af87b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.954267 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c888f2ed-bb7b-4ee1-a17d-2b656f9464b6" (UID: "c888f2ed-bb7b-4ee1-a17d-2b656f9464b6"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.958225 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-kube-api-access-b6zv9" (OuterVolumeSpecName: "kube-api-access-b6zv9") pod "c888f2ed-bb7b-4ee1-a17d-2b656f9464b6" (UID: "c888f2ed-bb7b-4ee1-a17d-2b656f9464b6"). InnerVolumeSpecName "kube-api-access-b6zv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.961265 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c5176e5-9abf-4ae4-b4da-4b50704cb0a4-kube-api-access-nm7zq" (OuterVolumeSpecName: "kube-api-access-nm7zq") pod "8c5176e5-9abf-4ae4-b4da-4b50704cb0a4" (UID: "8c5176e5-9abf-4ae4-b4da-4b50704cb0a4"). InnerVolumeSpecName "kube-api-access-nm7zq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.966524 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c9dab0c-38e9-435b-8d48-9dfdaa4af87b-kube-api-access-btgzh" (OuterVolumeSpecName: "kube-api-access-btgzh") pod "8c9dab0c-38e9-435b-8d48-9dfdaa4af87b" (UID: "8c9dab0c-38e9-435b-8d48-9dfdaa4af87b"). InnerVolumeSpecName "kube-api-access-btgzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:22:59 crc kubenswrapper[4830]: I0131 09:22:59.978837 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "c888f2ed-bb7b-4ee1-a17d-2b656f9464b6" (UID: "c888f2ed-bb7b-4ee1-a17d-2b656f9464b6"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:23:00 crc kubenswrapper[4830]: I0131 09:23:00.008302 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c888f2ed-bb7b-4ee1-a17d-2b656f9464b6" (UID: "c888f2ed-bb7b-4ee1-a17d-2b656f9464b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:23:00 crc kubenswrapper[4830]: I0131 09:23:00.017360 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-scripts" (OuterVolumeSpecName: "scripts") pod "c888f2ed-bb7b-4ee1-a17d-2b656f9464b6" (UID: "c888f2ed-bb7b-4ee1-a17d-2b656f9464b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:00 crc kubenswrapper[4830]: I0131 09:23:00.025838 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "c888f2ed-bb7b-4ee1-a17d-2b656f9464b6" (UID: "c888f2ed-bb7b-4ee1-a17d-2b656f9464b6"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:23:00 crc kubenswrapper[4830]: I0131 09:23:00.055803 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:00 crc kubenswrapper[4830]: I0131 09:23:00.055843 4830 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:00 crc kubenswrapper[4830]: I0131 09:23:00.055859 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:00 crc kubenswrapper[4830]: I0131 09:23:00.055872 4830 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:00 crc kubenswrapper[4830]: I0131 09:23:00.055882 4830 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:00 crc kubenswrapper[4830]: I0131 09:23:00.055892 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm7zq\" (UniqueName: \"kubernetes.io/projected/8c5176e5-9abf-4ae4-b4da-4b50704cb0a4-kube-api-access-nm7zq\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:00 crc kubenswrapper[4830]: I0131 09:23:00.055904 4830 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:00 crc kubenswrapper[4830]: I0131 09:23:00.055914 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btgzh\" (UniqueName: \"kubernetes.io/projected/8c9dab0c-38e9-435b-8d48-9dfdaa4af87b-kube-api-access-btgzh\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:00 crc kubenswrapper[4830]: I0131 09:23:00.055925 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6zv9\" (UniqueName: \"kubernetes.io/projected/c888f2ed-bb7b-4ee1-a17d-2b656f9464b6-kube-api-access-b6zv9\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:00 crc kubenswrapper[4830]: I0131 09:23:00.511698 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vzsv5"] Jan 31 09:23:00 crc kubenswrapper[4830]: I0131 09:23:00.549235 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 31 09:23:00 crc kubenswrapper[4830]: I0131 09:23:00.607421 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wfp8z"] Jan 31 09:23:00 crc kubenswrapper[4830]: I0131 09:23:00.619807 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-04a9-account-create-update-xswbl"] Jan 31 09:23:00 crc kubenswrapper[4830]: I0131 09:23:00.804005 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-sqm2h"] Jan 31 09:23:00 crc kubenswrapper[4830]: I0131 09:23:00.982915 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"68109d40-9af0-4c37-bf02-7b4744dbab5f","Type":"ContainerStarted","Data":"7adaf06fa536ac40db61bcf932640a9fb6f67d5b1eeca9af5a4e09a11f98afc7"} Jan 31 09:23:00 crc kubenswrapper[4830]: I0131 09:23:00.993506 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wfp8z" event={"ID":"2830665c-1d23-4c36-8324-7362068ae08f","Type":"ContainerStarted","Data":"0837fe18d7b7f60e553f6d1a8f52360c25350fd898e52ae7ecea1b3692ad46dd"} Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.001130 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-04a9-account-create-update-xswbl" event={"ID":"2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c","Type":"ContainerStarted","Data":"33a04be4b8c51f0f6bd431e96aba6cbd7d7412fe7c92fb944415f158280b4f69"} Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.006274 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-sqm2h" event={"ID":"b4141b8b-513a-4210-9abd-bfba363d6986","Type":"ContainerStarted","Data":"6fa27682be3113342f88b8ea04f737746075d33cd2b7b132ab38296d056e6540"} Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.017266 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-wfp8z" podStartSLOduration=3.017241005 podStartE2EDuration="3.017241005s" podCreationTimestamp="2026-01-31 09:22:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:23:01.010520074 +0000 UTC m=+1325.503882516" watchObservedRunningTime="2026-01-31 09:23:01.017241005 +0000 UTC m=+1325.510603457" Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.018795 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-56987"] Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.035583 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vzsv5" event={"ID":"7ee3d5b9-18cf-49db-95a9-e2dfa666e15e","Type":"ContainerStarted","Data":"38c44ca9254931b596825a148ec4cc55293f302133ec133038736ed2f7a4c568"} Jan 31 09:23:01 crc kubenswrapper[4830]: W0131 09:23:01.060918 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf11bdf9_7bbe_4713_9711_e6aff7e0c0c5.slice/crio-25a0b736bd8ecca4de97a29944d297b37ee8ed7ffddaf81f2b765575a74befbe WatchSource:0}: Error finding container 25a0b736bd8ecca4de97a29944d297b37ee8ed7ffddaf81f2b765575a74befbe: Status 404 returned error can't find the container with id 25a0b736bd8ecca4de97a29944d297b37ee8ed7ffddaf81f2b765575a74befbe Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.070406 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a9ea-account-create-update-cvwgh"] Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.106053 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-04a9-account-create-update-xswbl" podStartSLOduration=3.105990874 podStartE2EDuration="3.105990874s" podCreationTimestamp="2026-01-31 09:22:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:23:01.029121464 +0000 UTC m=+1325.522483906" watchObservedRunningTime="2026-01-31 09:23:01.105990874 +0000 UTC m=+1325.599353316" Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.134565 4830 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/keystone-1d60-account-create-update-vbk6p"] Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.148862 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-vzsv5" podStartSLOduration=6.148815644 podStartE2EDuration="6.148815644s" podCreationTimestamp="2026-01-31 09:22:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:23:01.063448772 +0000 UTC m=+1325.556811214" watchObservedRunningTime="2026-01-31 09:23:01.148815644 +0000 UTC m=+1325.642178086" Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.338396 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-8sjj4"] Jan 31 09:23:01 crc kubenswrapper[4830]: E0131 09:23:01.339037 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c9dab0c-38e9-435b-8d48-9dfdaa4af87b" containerName="mariadb-account-create-update" Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.339062 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c9dab0c-38e9-435b-8d48-9dfdaa4af87b" containerName="mariadb-account-create-update" Jan 31 09:23:01 crc kubenswrapper[4830]: E0131 09:23:01.339107 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c5176e5-9abf-4ae4-b4da-4b50704cb0a4" containerName="mariadb-database-create" Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.339115 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c5176e5-9abf-4ae4-b4da-4b50704cb0a4" containerName="mariadb-database-create" Jan 31 09:23:01 crc kubenswrapper[4830]: E0131 09:23:01.339134 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c888f2ed-bb7b-4ee1-a17d-2b656f9464b6" containerName="swift-ring-rebalance" Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.339140 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c888f2ed-bb7b-4ee1-a17d-2b656f9464b6" containerName="swift-ring-rebalance" Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.339359 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c5176e5-9abf-4ae4-b4da-4b50704cb0a4" containerName="mariadb-database-create" Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.339371 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="c888f2ed-bb7b-4ee1-a17d-2b656f9464b6" containerName="swift-ring-rebalance" Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.339383 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c9dab0c-38e9-435b-8d48-9dfdaa4af87b" containerName="mariadb-account-create-update" Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.340401 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-8sjj4" Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.349391 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-8sjj4"] Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.429274 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73de016b-d1c0-45cf-b3a6-fe6d3138f630-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-8sjj4\" (UID: \"73de016b-d1c0-45cf-b3a6-fe6d3138f630\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-8sjj4" Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.429768 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zppkx\" (UniqueName: \"kubernetes.io/projected/73de016b-d1c0-45cf-b3a6-fe6d3138f630-kube-api-access-zppkx\") pod \"mysqld-exporter-openstack-cell1-db-create-8sjj4\" (UID: \"73de016b-d1c0-45cf-b3a6-fe6d3138f630\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-8sjj4" Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.533192 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73de016b-d1c0-45cf-b3a6-fe6d3138f630-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-8sjj4\" (UID: \"73de016b-d1c0-45cf-b3a6-fe6d3138f630\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-8sjj4" Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.533344 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zppkx\" (UniqueName: \"kubernetes.io/projected/73de016b-d1c0-45cf-b3a6-fe6d3138f630-kube-api-access-zppkx\") pod \"mysqld-exporter-openstack-cell1-db-create-8sjj4\" (UID: \"73de016b-d1c0-45cf-b3a6-fe6d3138f630\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-8sjj4" Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.534316 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73de016b-d1c0-45cf-b3a6-fe6d3138f630-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-8sjj4\" (UID: \"73de016b-d1c0-45cf-b3a6-fe6d3138f630\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-8sjj4" Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.544861 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-c662-account-create-update-rwxdv"] Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.547248 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-c662-account-create-update-rwxdv" Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.550188 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.562784 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zppkx\" (UniqueName: \"kubernetes.io/projected/73de016b-d1c0-45cf-b3a6-fe6d3138f630-kube-api-access-zppkx\") pod \"mysqld-exporter-openstack-cell1-db-create-8sjj4\" (UID: \"73de016b-d1c0-45cf-b3a6-fe6d3138f630\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-8sjj4" Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.571684 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-c662-account-create-update-rwxdv"] Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.636578 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfgdn\" (UniqueName: \"kubernetes.io/projected/035a3263-c7af-45d8-a14c-5b86e594c818-kube-api-access-nfgdn\") pod \"mysqld-exporter-c662-account-create-update-rwxdv\" (UID: \"035a3263-c7af-45d8-a14c-5b86e594c818\") " pod="openstack/mysqld-exporter-c662-account-create-update-rwxdv" Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.636709 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/035a3263-c7af-45d8-a14c-5b86e594c818-operator-scripts\") pod \"mysqld-exporter-c662-account-create-update-rwxdv\" (UID: \"035a3263-c7af-45d8-a14c-5b86e594c818\") " pod="openstack/mysqld-exporter-c662-account-create-update-rwxdv" Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.728961 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-8sjj4" Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.739229 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/035a3263-c7af-45d8-a14c-5b86e594c818-operator-scripts\") pod \"mysqld-exporter-c662-account-create-update-rwxdv\" (UID: \"035a3263-c7af-45d8-a14c-5b86e594c818\") " pod="openstack/mysqld-exporter-c662-account-create-update-rwxdv" Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.739464 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfgdn\" (UniqueName: \"kubernetes.io/projected/035a3263-c7af-45d8-a14c-5b86e594c818-kube-api-access-nfgdn\") pod \"mysqld-exporter-c662-account-create-update-rwxdv\" (UID: \"035a3263-c7af-45d8-a14c-5b86e594c818\") " pod="openstack/mysqld-exporter-c662-account-create-update-rwxdv" Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.740747 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/035a3263-c7af-45d8-a14c-5b86e594c818-operator-scripts\") pod \"mysqld-exporter-c662-account-create-update-rwxdv\" (UID: \"035a3263-c7af-45d8-a14c-5b86e594c818\") " pod="openstack/mysqld-exporter-c662-account-create-update-rwxdv" Jan 31 09:23:01 crc kubenswrapper[4830]: I0131 09:23:01.776478 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfgdn\" (UniqueName: \"kubernetes.io/projected/035a3263-c7af-45d8-a14c-5b86e594c818-kube-api-access-nfgdn\") pod \"mysqld-exporter-c662-account-create-update-rwxdv\" (UID: \"035a3263-c7af-45d8-a14c-5b86e594c818\") " pod="openstack/mysqld-exporter-c662-account-create-update-rwxdv" Jan 31 09:23:02 crc kubenswrapper[4830]: I0131 09:23:02.019878 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-c662-account-create-update-rwxdv" Jan 31 09:23:02 crc kubenswrapper[4830]: I0131 09:23:02.054102 4830 generic.go:334] "Generic (PLEG): container finished" podID="2830665c-1d23-4c36-8324-7362068ae08f" containerID="01206d5478a26bc2285e3b5be49ac89f5002949ad540ee4794b6867baaa5d0fd" exitCode=0 Jan 31 09:23:02 crc kubenswrapper[4830]: I0131 09:23:02.054188 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wfp8z" event={"ID":"2830665c-1d23-4c36-8324-7362068ae08f","Type":"ContainerDied","Data":"01206d5478a26bc2285e3b5be49ac89f5002949ad540ee4794b6867baaa5d0fd"} Jan 31 09:23:02 crc kubenswrapper[4830]: I0131 09:23:02.061973 4830 generic.go:334] "Generic (PLEG): container finished" podID="b4141b8b-513a-4210-9abd-bfba363d6986" containerID="e3131aa63899f34c9258a9403856ddb9e084db8fda9d9677b7fef5eeb6a7b503" exitCode=0 Jan 31 09:23:02 crc kubenswrapper[4830]: I0131 09:23:02.062029 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-sqm2h" event={"ID":"b4141b8b-513a-4210-9abd-bfba363d6986","Type":"ContainerDied","Data":"e3131aa63899f34c9258a9403856ddb9e084db8fda9d9677b7fef5eeb6a7b503"} Jan 31 09:23:02 crc kubenswrapper[4830]: I0131 09:23:02.064793 4830 generic.go:334] "Generic (PLEG): container finished" podID="cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5" containerID="988a85f4c50f4c98a103f88f1face3040057a4ba445958e813df2ce5f514b4e1" exitCode=0 Jan 31 09:23:02 crc kubenswrapper[4830]: I0131 09:23:02.064859 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-56987" event={"ID":"cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5","Type":"ContainerDied","Data":"988a85f4c50f4c98a103f88f1face3040057a4ba445958e813df2ce5f514b4e1"} Jan 31 09:23:02 crc kubenswrapper[4830]: I0131 09:23:02.064900 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-56987" event={"ID":"cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5","Type":"ContainerStarted","Data":"25a0b736bd8ecca4de97a29944d297b37ee8ed7ffddaf81f2b765575a74befbe"} Jan 31 09:23:02 crc kubenswrapper[4830]: I0131 09:23:02.068504 4830 generic.go:334] "Generic (PLEG): container finished" podID="7ee3d5b9-18cf-49db-95a9-e2dfa666e15e" containerID="67eb3a84cc34c3e1d7b3d5410cd4c3f7e9c2645411c53ddc5435603ce6326921" exitCode=0 Jan 31 09:23:02 crc kubenswrapper[4830]: I0131 09:23:02.068601 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vzsv5" event={"ID":"7ee3d5b9-18cf-49db-95a9-e2dfa666e15e","Type":"ContainerDied","Data":"67eb3a84cc34c3e1d7b3d5410cd4c3f7e9c2645411c53ddc5435603ce6326921"} Jan 31 09:23:02 crc kubenswrapper[4830]: I0131 09:23:02.081770 4830 generic.go:334] "Generic (PLEG): container finished" podID="8af6a38f-c8ba-464d-acd5-417848530657" containerID="1a540976789812b4e3da15c2e7ea712bb4f6a503080de42ca1fa2374180f34fd" exitCode=0 Jan 31 09:23:02 crc kubenswrapper[4830]: I0131 09:23:02.082470 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1d60-account-create-update-vbk6p" event={"ID":"8af6a38f-c8ba-464d-acd5-417848530657","Type":"ContainerDied","Data":"1a540976789812b4e3da15c2e7ea712bb4f6a503080de42ca1fa2374180f34fd"} Jan 31 09:23:02 crc kubenswrapper[4830]: I0131 09:23:02.082533 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1d60-account-create-update-vbk6p" 
event={"ID":"8af6a38f-c8ba-464d-acd5-417848530657","Type":"ContainerStarted","Data":"b68af3f2f396de3bb34a76d312d15ba25cfe0fcb04f5e4ab50a9dfb83311ad2d"} Jan 31 09:23:02 crc kubenswrapper[4830]: I0131 09:23:02.086436 4830 generic.go:334] "Generic (PLEG): container finished" podID="2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c" containerID="19ba30438c2d9de9341594c603a0643b6578f2d4736c1e125ccc4407bbf7a309" exitCode=0 Jan 31 09:23:02 crc kubenswrapper[4830]: I0131 09:23:02.086610 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-04a9-account-create-update-xswbl" event={"ID":"2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c","Type":"ContainerDied","Data":"19ba30438c2d9de9341594c603a0643b6578f2d4736c1e125ccc4407bbf7a309"} Jan 31 09:23:02 crc kubenswrapper[4830]: I0131 09:23:02.088661 4830 generic.go:334] "Generic (PLEG): container finished" podID="ca85abf4-a6ba-4080-a544-fcce2de88b2b" containerID="9496b9bef2552761732cd1b337753278cf6c9a5d77c6293d395c3002a513b34c" exitCode=0 Jan 31 09:23:02 crc kubenswrapper[4830]: I0131 09:23:02.088699 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a9ea-account-create-update-cvwgh" event={"ID":"ca85abf4-a6ba-4080-a544-fcce2de88b2b","Type":"ContainerDied","Data":"9496b9bef2552761732cd1b337753278cf6c9a5d77c6293d395c3002a513b34c"} Jan 31 09:23:02 crc kubenswrapper[4830]: I0131 09:23:02.088735 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a9ea-account-create-update-cvwgh" event={"ID":"ca85abf4-a6ba-4080-a544-fcce2de88b2b","Type":"ContainerStarted","Data":"53395f171a0839bc3a3c859dbcdc19015ac6bef90c1800e701064675e27384b1"} Jan 31 09:23:02 crc kubenswrapper[4830]: I0131 09:23:02.431388 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-8sjj4"] Jan 31 09:23:02 crc kubenswrapper[4830]: I0131 09:23:02.532166 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-c662-account-create-update-rwxdv"] Jan 31 09:23:03 crc kubenswrapper[4830]: I0131 09:23:03.105185 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-8sjj4" event={"ID":"73de016b-d1c0-45cf-b3a6-fe6d3138f630","Type":"ContainerStarted","Data":"08726d443e962f560c4c7a00bb3fbc90b8bf85df9df11a78cc3cff705f1ab571"} Jan 31 09:23:03 crc kubenswrapper[4830]: I0131 09:23:03.106208 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-8sjj4" event={"ID":"73de016b-d1c0-45cf-b3a6-fe6d3138f630","Type":"ContainerStarted","Data":"cd2643c727b0df749ba80af88cd070c5a52ca27a9299d19e110f4e1c18ea0f35"} Jan 31 09:23:03 crc kubenswrapper[4830]: I0131 09:23:03.112771 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c662-account-create-update-rwxdv" event={"ID":"035a3263-c7af-45d8-a14c-5b86e594c818","Type":"ContainerStarted","Data":"279ba1094f266cb5d4dae197ff642f8341a0402b74dcfa2eace5939eb69a0e7d"} Jan 31 09:23:03 crc kubenswrapper[4830]: I0131 09:23:03.112832 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c662-account-create-update-rwxdv" event={"ID":"035a3263-c7af-45d8-a14c-5b86e594c818","Type":"ContainerStarted","Data":"40d75eaa1d78afd657770812676357f9810ab5707d12823d9a914cdc70c481a7"} Jan 31 09:23:03 crc kubenswrapper[4830]: I0131 09:23:03.161214 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-cell1-db-create-8sjj4" 
podStartSLOduration=2.161195665 podStartE2EDuration="2.161195665s" podCreationTimestamp="2026-01-31 09:23:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:23:03.136004627 +0000 UTC m=+1327.629367089" watchObservedRunningTime="2026-01-31 09:23:03.161195665 +0000 UTC m=+1327.654558107" Jan 31 09:23:03 crc kubenswrapper[4830]: I0131 09:23:03.172752 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-c662-account-create-update-rwxdv" podStartSLOduration=2.172716573 podStartE2EDuration="2.172716573s" podCreationTimestamp="2026-01-31 09:23:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:23:03.156270534 +0000 UTC m=+1327.649632966" watchObservedRunningTime="2026-01-31 09:23:03.172716573 +0000 UTC m=+1327.666079015" Jan 31 09:23:03 crc kubenswrapper[4830]: I0131 09:23:03.863785 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-56987" Jan 31 09:23:03 crc kubenswrapper[4830]: I0131 09:23:03.954150 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz45c\" (UniqueName: \"kubernetes.io/projected/cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5-kube-api-access-fz45c\") pod \"cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5\" (UID: \"cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5\") " Jan 31 09:23:03 crc kubenswrapper[4830]: I0131 09:23:03.955876 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5-operator-scripts\") pod \"cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5\" (UID: \"cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5\") " Jan 31 09:23:03 crc kubenswrapper[4830]: I0131 09:23:03.956823 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5" (UID: "cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:03 crc kubenswrapper[4830]: I0131 09:23:03.958018 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:03 crc kubenswrapper[4830]: I0131 09:23:03.961528 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5-kube-api-access-fz45c" (OuterVolumeSpecName: "kube-api-access-fz45c") pod "cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5" (UID: "cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5"). InnerVolumeSpecName "kube-api-access-fz45c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.061219 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz45c\" (UniqueName: \"kubernetes.io/projected/cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5-kube-api-access-fz45c\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.095435 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.164711 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1023f27a-9c1d-4818-a3f5-94946296ae46-etc-swift\") pod \"swift-storage-0\" (UID: \"1023f27a-9c1d-4818-a3f5-94946296ae46\") " pod="openstack/swift-storage-0" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.171183 4830 generic.go:334] "Generic (PLEG): container finished" podID="73de016b-d1c0-45cf-b3a6-fe6d3138f630" containerID="08726d443e962f560c4c7a00bb3fbc90b8bf85df9df11a78cc3cff705f1ab571" exitCode=0 Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.171321 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-8sjj4" event={"ID":"73de016b-d1c0-45cf-b3a6-fe6d3138f630","Type":"ContainerDied","Data":"08726d443e962f560c4c7a00bb3fbc90b8bf85df9df11a78cc3cff705f1ab571"} Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.182166 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1023f27a-9c1d-4818-a3f5-94946296ae46-etc-swift\") pod \"swift-storage-0\" (UID: \"1023f27a-9c1d-4818-a3f5-94946296ae46\") " pod="openstack/swift-storage-0" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.211447 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-56987" event={"ID":"cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5","Type":"ContainerDied","Data":"25a0b736bd8ecca4de97a29944d297b37ee8ed7ffddaf81f2b765575a74befbe"} Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.211533 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25a0b736bd8ecca4de97a29944d297b37ee8ed7ffddaf81f2b765575a74befbe" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.213445 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-56987" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.225870 4830 generic.go:334] "Generic (PLEG): container finished" podID="035a3263-c7af-45d8-a14c-5b86e594c818" containerID="279ba1094f266cb5d4dae197ff642f8341a0402b74dcfa2eace5939eb69a0e7d" exitCode=0 Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.225967 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c662-account-create-update-rwxdv" event={"ID":"035a3263-c7af-45d8-a14c-5b86e594c818","Type":"ContainerDied","Data":"279ba1094f266cb5d4dae197ff642f8341a0402b74dcfa2eace5939eb69a0e7d"} Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.232694 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"68109d40-9af0-4c37-bf02-7b4744dbab5f","Type":"ContainerStarted","Data":"a88cc0ab549485c84a69e7634ff67075f9eef9e6a569736ca8920015bf3445a5"} Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.311081 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.330941 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vzsv5" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.351402 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a9ea-account-create-update-cvwgh" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.362944 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1d60-account-create-update-vbk6p" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.406821 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wfp8z" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.416671 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-sqm2h" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.435311 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-04a9-account-create-update-xswbl" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.477687 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.486442 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66rn6\" (UniqueName: \"kubernetes.io/projected/2830665c-1d23-4c36-8324-7362068ae08f-kube-api-access-66rn6\") pod \"2830665c-1d23-4c36-8324-7362068ae08f\" (UID: \"2830665c-1d23-4c36-8324-7362068ae08f\") " Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.486511 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca85abf4-a6ba-4080-a544-fcce2de88b2b-operator-scripts\") pod \"ca85abf4-a6ba-4080-a544-fcce2de88b2b\" (UID: \"ca85abf4-a6ba-4080-a544-fcce2de88b2b\") " Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.486667 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjw56\" (UniqueName: \"kubernetes.io/projected/ca85abf4-a6ba-4080-a544-fcce2de88b2b-kube-api-access-hjw56\") pod \"ca85abf4-a6ba-4080-a544-fcce2de88b2b\" (UID: \"ca85abf4-a6ba-4080-a544-fcce2de88b2b\") " Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.486708 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2830665c-1d23-4c36-8324-7362068ae08f-operator-scripts\") pod \"2830665c-1d23-4c36-8324-7362068ae08f\" (UID: \"2830665c-1d23-4c36-8324-7362068ae08f\") " Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.486769 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ee3d5b9-18cf-49db-95a9-e2dfa666e15e-operator-scripts\") pod \"7ee3d5b9-18cf-49db-95a9-e2dfa666e15e\" (UID: \"7ee3d5b9-18cf-49db-95a9-e2dfa666e15e\") " Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.486842 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8af6a38f-c8ba-464d-acd5-417848530657-operator-scripts\") pod \"8af6a38f-c8ba-464d-acd5-417848530657\" (UID: 
\"8af6a38f-c8ba-464d-acd5-417848530657\") " Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.487012 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq7gn\" (UniqueName: \"kubernetes.io/projected/7ee3d5b9-18cf-49db-95a9-e2dfa666e15e-kube-api-access-tq7gn\") pod \"7ee3d5b9-18cf-49db-95a9-e2dfa666e15e\" (UID: \"7ee3d5b9-18cf-49db-95a9-e2dfa666e15e\") " Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.487049 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8krvx\" (UniqueName: \"kubernetes.io/projected/8af6a38f-c8ba-464d-acd5-417848530657-kube-api-access-8krvx\") pod \"8af6a38f-c8ba-464d-acd5-417848530657\" (UID: \"8af6a38f-c8ba-464d-acd5-417848530657\") " Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.487806 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca85abf4-a6ba-4080-a544-fcce2de88b2b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ca85abf4-a6ba-4080-a544-fcce2de88b2b" (UID: "ca85abf4-a6ba-4080-a544-fcce2de88b2b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.488123 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ee3d5b9-18cf-49db-95a9-e2dfa666e15e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ee3d5b9-18cf-49db-95a9-e2dfa666e15e" (UID: "7ee3d5b9-18cf-49db-95a9-e2dfa666e15e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.488871 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2830665c-1d23-4c36-8324-7362068ae08f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2830665c-1d23-4c36-8324-7362068ae08f" (UID: "2830665c-1d23-4c36-8324-7362068ae08f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.489193 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8af6a38f-c8ba-464d-acd5-417848530657-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8af6a38f-c8ba-464d-acd5-417848530657" (UID: "8af6a38f-c8ba-464d-acd5-417848530657"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.498377 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ee3d5b9-18cf-49db-95a9-e2dfa666e15e-kube-api-access-tq7gn" (OuterVolumeSpecName: "kube-api-access-tq7gn") pod "7ee3d5b9-18cf-49db-95a9-e2dfa666e15e" (UID: "7ee3d5b9-18cf-49db-95a9-e2dfa666e15e"). InnerVolumeSpecName "kube-api-access-tq7gn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.498882 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8af6a38f-c8ba-464d-acd5-417848530657-kube-api-access-8krvx" (OuterVolumeSpecName: "kube-api-access-8krvx") pod "8af6a38f-c8ba-464d-acd5-417848530657" (UID: "8af6a38f-c8ba-464d-acd5-417848530657"). InnerVolumeSpecName "kube-api-access-8krvx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.503264 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca85abf4-a6ba-4080-a544-fcce2de88b2b-kube-api-access-hjw56" (OuterVolumeSpecName: "kube-api-access-hjw56") pod "ca85abf4-a6ba-4080-a544-fcce2de88b2b" (UID: "ca85abf4-a6ba-4080-a544-fcce2de88b2b"). InnerVolumeSpecName "kube-api-access-hjw56". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.503484 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2830665c-1d23-4c36-8324-7362068ae08f-kube-api-access-66rn6" (OuterVolumeSpecName: "kube-api-access-66rn6") pod "2830665c-1d23-4c36-8324-7362068ae08f" (UID: "2830665c-1d23-4c36-8324-7362068ae08f"). InnerVolumeSpecName "kube-api-access-66rn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.549791 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ps27t" podUID="dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73" containerName="ovn-controller" probeResult="failure" output=< Jan 31 09:23:04 crc kubenswrapper[4830]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 31 09:23:04 crc kubenswrapper[4830]: > Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.589732 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rq7zz\" (UniqueName: \"kubernetes.io/projected/2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c-kube-api-access-rq7zz\") pod \"2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c\" (UID: \"2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c\") " Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.589914 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzhsb\" (UniqueName: \"kubernetes.io/projected/b4141b8b-513a-4210-9abd-bfba363d6986-kube-api-access-bzhsb\") pod \"b4141b8b-513a-4210-9abd-bfba363d6986\" (UID: \"b4141b8b-513a-4210-9abd-bfba363d6986\") " Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.589954 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c-operator-scripts\") pod \"2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c\" (UID: \"2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c\") " Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.590197 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4141b8b-513a-4210-9abd-bfba363d6986-operator-scripts\") pod \"b4141b8b-513a-4210-9abd-bfba363d6986\" (UID: \"b4141b8b-513a-4210-9abd-bfba363d6986\") " Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.591098 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8krvx\" (UniqueName: \"kubernetes.io/projected/8af6a38f-c8ba-464d-acd5-417848530657-kube-api-access-8krvx\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.591174 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66rn6\" (UniqueName: \"kubernetes.io/projected/2830665c-1d23-4c36-8324-7362068ae08f-kube-api-access-66rn6\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.591193 4830 reconciler_common.go:293] "Volume detached for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca85abf4-a6ba-4080-a544-fcce2de88b2b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.591206 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjw56\" (UniqueName: \"kubernetes.io/projected/ca85abf4-a6ba-4080-a544-fcce2de88b2b-kube-api-access-hjw56\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.591221 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2830665c-1d23-4c36-8324-7362068ae08f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.591234 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ee3d5b9-18cf-49db-95a9-e2dfa666e15e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.591250 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8af6a38f-c8ba-464d-acd5-417848530657-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.591262 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tq7gn\" (UniqueName: \"kubernetes.io/projected/7ee3d5b9-18cf-49db-95a9-e2dfa666e15e-kube-api-access-tq7gn\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.592257 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c" (UID: "2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.594113 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c-kube-api-access-rq7zz" (OuterVolumeSpecName: "kube-api-access-rq7zz") pod "2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c" (UID: "2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c"). InnerVolumeSpecName "kube-api-access-rq7zz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.594219 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4141b8b-513a-4210-9abd-bfba363d6986-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b4141b8b-513a-4210-9abd-bfba363d6986" (UID: "b4141b8b-513a-4210-9abd-bfba363d6986"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.597084 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4141b8b-513a-4210-9abd-bfba363d6986-kube-api-access-bzhsb" (OuterVolumeSpecName: "kube-api-access-bzhsb") pod "b4141b8b-513a-4210-9abd-bfba363d6986" (UID: "b4141b8b-513a-4210-9abd-bfba363d6986"). InnerVolumeSpecName "kube-api-access-bzhsb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.694005 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzhsb\" (UniqueName: \"kubernetes.io/projected/b4141b8b-513a-4210-9abd-bfba363d6986-kube-api-access-bzhsb\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.694050 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.694061 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4141b8b-513a-4210-9abd-bfba363d6986-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:04 crc kubenswrapper[4830]: I0131 09:23:04.694076 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rq7zz\" (UniqueName: \"kubernetes.io/projected/2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c-kube-api-access-rq7zz\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:05 crc kubenswrapper[4830]: I0131 09:23:05.012520 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 31 09:23:05 crc kubenswrapper[4830]: I0131 09:23:05.244703 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-sqm2h" Jan 31 09:23:05 crc kubenswrapper[4830]: I0131 09:23:05.244687 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-sqm2h" event={"ID":"b4141b8b-513a-4210-9abd-bfba363d6986","Type":"ContainerDied","Data":"6fa27682be3113342f88b8ea04f737746075d33cd2b7b132ab38296d056e6540"} Jan 31 09:23:05 crc kubenswrapper[4830]: I0131 09:23:05.244773 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fa27682be3113342f88b8ea04f737746075d33cd2b7b132ab38296d056e6540" Jan 31 09:23:05 crc kubenswrapper[4830]: I0131 09:23:05.247835 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vzsv5" event={"ID":"7ee3d5b9-18cf-49db-95a9-e2dfa666e15e","Type":"ContainerDied","Data":"38c44ca9254931b596825a148ec4cc55293f302133ec133038736ed2f7a4c568"} Jan 31 09:23:05 crc kubenswrapper[4830]: I0131 09:23:05.247885 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38c44ca9254931b596825a148ec4cc55293f302133ec133038736ed2f7a4c568" Jan 31 09:23:05 crc kubenswrapper[4830]: I0131 09:23:05.247958 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vzsv5" Jan 31 09:23:05 crc kubenswrapper[4830]: I0131 09:23:05.254774 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1023f27a-9c1d-4818-a3f5-94946296ae46","Type":"ContainerStarted","Data":"840f608fc82843742637bdd9e527c554a4ba5a7eed6d60b7e5f159224c7694a5"} Jan 31 09:23:05 crc kubenswrapper[4830]: I0131 09:23:05.258502 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1d60-account-create-update-vbk6p" event={"ID":"8af6a38f-c8ba-464d-acd5-417848530657","Type":"ContainerDied","Data":"b68af3f2f396de3bb34a76d312d15ba25cfe0fcb04f5e4ab50a9dfb83311ad2d"} Jan 31 09:23:05 crc kubenswrapper[4830]: I0131 09:23:05.258557 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b68af3f2f396de3bb34a76d312d15ba25cfe0fcb04f5e4ab50a9dfb83311ad2d" Jan 31 09:23:05 crc kubenswrapper[4830]: I0131 09:23:05.258638 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1d60-account-create-update-vbk6p" Jan 31 09:23:05 crc kubenswrapper[4830]: I0131 09:23:05.261696 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-04a9-account-create-update-xswbl" event={"ID":"2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c","Type":"ContainerDied","Data":"33a04be4b8c51f0f6bd431e96aba6cbd7d7412fe7c92fb944415f158280b4f69"} Jan 31 09:23:05 crc kubenswrapper[4830]: I0131 09:23:05.261784 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33a04be4b8c51f0f6bd431e96aba6cbd7d7412fe7c92fb944415f158280b4f69" Jan 31 09:23:05 crc kubenswrapper[4830]: I0131 09:23:05.261868 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-04a9-account-create-update-xswbl" Jan 31 09:23:05 crc kubenswrapper[4830]: I0131 09:23:05.267347 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a9ea-account-create-update-cvwgh" event={"ID":"ca85abf4-a6ba-4080-a544-fcce2de88b2b","Type":"ContainerDied","Data":"53395f171a0839bc3a3c859dbcdc19015ac6bef90c1800e701064675e27384b1"} Jan 31 09:23:05 crc kubenswrapper[4830]: I0131 09:23:05.267405 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53395f171a0839bc3a3c859dbcdc19015ac6bef90c1800e701064675e27384b1" Jan 31 09:23:05 crc kubenswrapper[4830]: I0131 09:23:05.267511 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a9ea-account-create-update-cvwgh" Jan 31 09:23:05 crc kubenswrapper[4830]: I0131 09:23:05.271920 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wfp8z" Jan 31 09:23:05 crc kubenswrapper[4830]: I0131 09:23:05.271987 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wfp8z" event={"ID":"2830665c-1d23-4c36-8324-7362068ae08f","Type":"ContainerDied","Data":"0837fe18d7b7f60e553f6d1a8f52360c25350fd898e52ae7ecea1b3692ad46dd"} Jan 31 09:23:05 crc kubenswrapper[4830]: I0131 09:23:05.272044 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0837fe18d7b7f60e553f6d1a8f52360c25350fd898e52ae7ecea1b3692ad46dd" Jan 31 09:23:06 crc kubenswrapper[4830]: I0131 09:23:06.165260 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-c662-account-create-update-rwxdv" Jan 31 09:23:06 crc kubenswrapper[4830]: I0131 09:23:06.171103 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-8sjj4" Jan 31 09:23:06 crc kubenswrapper[4830]: I0131 09:23:06.270919 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/035a3263-c7af-45d8-a14c-5b86e594c818-operator-scripts\") pod \"035a3263-c7af-45d8-a14c-5b86e594c818\" (UID: \"035a3263-c7af-45d8-a14c-5b86e594c818\") " Jan 31 09:23:06 crc kubenswrapper[4830]: I0131 09:23:06.271512 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zppkx\" (UniqueName: \"kubernetes.io/projected/73de016b-d1c0-45cf-b3a6-fe6d3138f630-kube-api-access-zppkx\") pod \"73de016b-d1c0-45cf-b3a6-fe6d3138f630\" (UID: \"73de016b-d1c0-45cf-b3a6-fe6d3138f630\") " Jan 31 09:23:06 crc kubenswrapper[4830]: I0131 09:23:06.271571 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73de016b-d1c0-45cf-b3a6-fe6d3138f630-operator-scripts\") pod \"73de016b-d1c0-45cf-b3a6-fe6d3138f630\" (UID: \"73de016b-d1c0-45cf-b3a6-fe6d3138f630\") " Jan 31 09:23:06 crc kubenswrapper[4830]: I0131 09:23:06.271921 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfgdn\" (UniqueName: \"kubernetes.io/projected/035a3263-c7af-45d8-a14c-5b86e594c818-kube-api-access-nfgdn\") pod \"035a3263-c7af-45d8-a14c-5b86e594c818\" (UID: \"035a3263-c7af-45d8-a14c-5b86e594c818\") " Jan 31 09:23:06 crc kubenswrapper[4830]: I0131 09:23:06.272060 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/035a3263-c7af-45d8-a14c-5b86e594c818-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "035a3263-c7af-45d8-a14c-5b86e594c818" (UID: "035a3263-c7af-45d8-a14c-5b86e594c818"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:06 crc kubenswrapper[4830]: I0131 09:23:06.272536 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/035a3263-c7af-45d8-a14c-5b86e594c818-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:06 crc kubenswrapper[4830]: I0131 09:23:06.272667 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73de016b-d1c0-45cf-b3a6-fe6d3138f630-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "73de016b-d1c0-45cf-b3a6-fe6d3138f630" (UID: "73de016b-d1c0-45cf-b3a6-fe6d3138f630"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:06 crc kubenswrapper[4830]: I0131 09:23:06.287673 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73de016b-d1c0-45cf-b3a6-fe6d3138f630-kube-api-access-zppkx" (OuterVolumeSpecName: "kube-api-access-zppkx") pod "73de016b-d1c0-45cf-b3a6-fe6d3138f630" (UID: "73de016b-d1c0-45cf-b3a6-fe6d3138f630"). InnerVolumeSpecName "kube-api-access-zppkx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:06 crc kubenswrapper[4830]: I0131 09:23:06.287987 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/035a3263-c7af-45d8-a14c-5b86e594c818-kube-api-access-nfgdn" (OuterVolumeSpecName: "kube-api-access-nfgdn") pod "035a3263-c7af-45d8-a14c-5b86e594c818" (UID: "035a3263-c7af-45d8-a14c-5b86e594c818"). InnerVolumeSpecName "kube-api-access-nfgdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:06 crc kubenswrapper[4830]: I0131 09:23:06.288363 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-8sjj4" Jan 31 09:23:06 crc kubenswrapper[4830]: I0131 09:23:06.290934 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-c662-account-create-update-rwxdv" Jan 31 09:23:06 crc kubenswrapper[4830]: I0131 09:23:06.344412 4830 kubelet_pods.go:2476] "Failed to reduce cpu time for pod pending volume cleanup" podUID="73de016b-d1c0-45cf-b3a6-fe6d3138f630" err="openat2 /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73de016b_d1c0_45cf_b3a6_fe6d3138f630.slice/cgroup.controllers: no such file or directory" Jan 31 09:23:06 crc kubenswrapper[4830]: I0131 09:23:06.344485 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-8sjj4" event={"ID":"73de016b-d1c0-45cf-b3a6-fe6d3138f630","Type":"ContainerDied","Data":"cd2643c727b0df749ba80af88cd070c5a52ca27a9299d19e110f4e1c18ea0f35"} Jan 31 09:23:06 crc kubenswrapper[4830]: I0131 09:23:06.344516 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd2643c727b0df749ba80af88cd070c5a52ca27a9299d19e110f4e1c18ea0f35" Jan 31 09:23:06 crc kubenswrapper[4830]: I0131 09:23:06.344527 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c662-account-create-update-rwxdv" event={"ID":"035a3263-c7af-45d8-a14c-5b86e594c818","Type":"ContainerDied","Data":"40d75eaa1d78afd657770812676357f9810ab5707d12823d9a914cdc70c481a7"} Jan 31 09:23:06 crc kubenswrapper[4830]: I0131 09:23:06.344544 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40d75eaa1d78afd657770812676357f9810ab5707d12823d9a914cdc70c481a7" Jan 31 09:23:06 crc kubenswrapper[4830]: I0131 09:23:06.374762 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zppkx\" (UniqueName: \"kubernetes.io/projected/73de016b-d1c0-45cf-b3a6-fe6d3138f630-kube-api-access-zppkx\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:06 crc kubenswrapper[4830]: I0131 09:23:06.374818 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73de016b-d1c0-45cf-b3a6-fe6d3138f630-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:06 crc kubenswrapper[4830]: I0131 09:23:06.374828 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfgdn\" (UniqueName: \"kubernetes.io/projected/035a3263-c7af-45d8-a14c-5b86e594c818-kube-api-access-nfgdn\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:07 crc kubenswrapper[4830]: I0131 09:23:07.128855 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-vzsv5"] Jan 31 09:23:07 crc kubenswrapper[4830]: I0131 09:23:07.142198 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/root-account-create-update-vzsv5"] Jan 31 09:23:07 crc kubenswrapper[4830]: I0131 09:23:07.304757 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1023f27a-9c1d-4818-a3f5-94946296ae46","Type":"ContainerStarted","Data":"383d2275ce651c214b0eaba73d0c94ae7f5a449f7dc4c9eccdc8ef0ec6ac4f65"} Jan 31 09:23:08 crc kubenswrapper[4830]: I0131 09:23:08.271313 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ee3d5b9-18cf-49db-95a9-e2dfa666e15e" path="/var/lib/kubelet/pods/7ee3d5b9-18cf-49db-95a9-e2dfa666e15e/volumes" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.334465 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1023f27a-9c1d-4818-a3f5-94946296ae46","Type":"ContainerStarted","Data":"95ac188c8b3ea84a36c9cde316cf88e4cdad569dd49d2c1e17f82aaebc175896"} Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.334938 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1023f27a-9c1d-4818-a3f5-94946296ae46","Type":"ContainerStarted","Data":"21cf0471c7368c37c1c51c63fe0971a5610b7adae717422a24df2552b35a5352"} Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.342322 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"68109d40-9af0-4c37-bf02-7b4744dbab5f","Type":"ContainerStarted","Data":"f314a67cf197f76f9e9553ea1a3af00ecf9d246f64890bb8e50abb3a4adb6a84"} Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.406477 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-bktdp"] Jan 31 09:23:09 crc kubenswrapper[4830]: E0131 09:23:09.407506 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca85abf4-a6ba-4080-a544-fcce2de88b2b" containerName="mariadb-account-create-update" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.407530 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca85abf4-a6ba-4080-a544-fcce2de88b2b" containerName="mariadb-account-create-update" Jan 31 09:23:09 crc kubenswrapper[4830]: E0131 09:23:09.407546 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2830665c-1d23-4c36-8324-7362068ae08f" containerName="mariadb-database-create" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.407553 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2830665c-1d23-4c36-8324-7362068ae08f" containerName="mariadb-database-create" Jan 31 09:23:09 crc kubenswrapper[4830]: E0131 09:23:09.407566 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4141b8b-513a-4210-9abd-bfba363d6986" containerName="mariadb-database-create" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.407571 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4141b8b-513a-4210-9abd-bfba363d6986" containerName="mariadb-database-create" Jan 31 09:23:09 crc kubenswrapper[4830]: E0131 09:23:09.407586 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8af6a38f-c8ba-464d-acd5-417848530657" containerName="mariadb-account-create-update" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.407592 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8af6a38f-c8ba-464d-acd5-417848530657" containerName="mariadb-account-create-update" Jan 31 09:23:09 crc kubenswrapper[4830]: E0131 09:23:09.407600 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5" 
containerName="mariadb-database-create" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.407606 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5" containerName="mariadb-database-create" Jan 31 09:23:09 crc kubenswrapper[4830]: E0131 09:23:09.407620 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c" containerName="mariadb-account-create-update" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.407625 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c" containerName="mariadb-account-create-update" Jan 31 09:23:09 crc kubenswrapper[4830]: E0131 09:23:09.407640 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="035a3263-c7af-45d8-a14c-5b86e594c818" containerName="mariadb-account-create-update" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.407645 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="035a3263-c7af-45d8-a14c-5b86e594c818" containerName="mariadb-account-create-update" Jan 31 09:23:09 crc kubenswrapper[4830]: E0131 09:23:09.407653 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73de016b-d1c0-45cf-b3a6-fe6d3138f630" containerName="mariadb-database-create" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.407662 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="73de016b-d1c0-45cf-b3a6-fe6d3138f630" containerName="mariadb-database-create" Jan 31 09:23:09 crc kubenswrapper[4830]: E0131 09:23:09.407671 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee3d5b9-18cf-49db-95a9-e2dfa666e15e" containerName="mariadb-account-create-update" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.407677 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee3d5b9-18cf-49db-95a9-e2dfa666e15e" containerName="mariadb-account-create-update" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.407886 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="73de016b-d1c0-45cf-b3a6-fe6d3138f630" containerName="mariadb-database-create" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.407896 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5" containerName="mariadb-database-create" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.407911 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="035a3263-c7af-45d8-a14c-5b86e594c818" containerName="mariadb-account-create-update" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.407928 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ee3d5b9-18cf-49db-95a9-e2dfa666e15e" containerName="mariadb-account-create-update" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.407937 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c" containerName="mariadb-account-create-update" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.407948 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4141b8b-513a-4210-9abd-bfba363d6986" containerName="mariadb-database-create" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.407954 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca85abf4-a6ba-4080-a544-fcce2de88b2b" containerName="mariadb-account-create-update" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.407967 4830 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8af6a38f-c8ba-464d-acd5-417848530657" containerName="mariadb-account-create-update" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.407973 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2830665c-1d23-4c36-8324-7362068ae08f" containerName="mariadb-database-create" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.408866 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bktdp" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.412876 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-8sfkk" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.412951 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.456640 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bktdp"] Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.468843 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht6rb\" (UniqueName: \"kubernetes.io/projected/42eafeb6-68c0-479b-bc77-62967566390e-kube-api-access-ht6rb\") pod \"glance-db-sync-bktdp\" (UID: \"42eafeb6-68c0-479b-bc77-62967566390e\") " pod="openstack/glance-db-sync-bktdp" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.468899 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42eafeb6-68c0-479b-bc77-62967566390e-db-sync-config-data\") pod \"glance-db-sync-bktdp\" (UID: \"42eafeb6-68c0-479b-bc77-62967566390e\") " pod="openstack/glance-db-sync-bktdp" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.468924 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42eafeb6-68c0-479b-bc77-62967566390e-config-data\") pod \"glance-db-sync-bktdp\" (UID: \"42eafeb6-68c0-479b-bc77-62967566390e\") " pod="openstack/glance-db-sync-bktdp" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.468945 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42eafeb6-68c0-479b-bc77-62967566390e-combined-ca-bundle\") pod \"glance-db-sync-bktdp\" (UID: \"42eafeb6-68c0-479b-bc77-62967566390e\") " pod="openstack/glance-db-sync-bktdp" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.470789 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=9.33748379 podStartE2EDuration="1m29.470751969s" podCreationTimestamp="2026-01-31 09:21:40 +0000 UTC" firstStartedPulling="2026-01-31 09:21:48.407906788 +0000 UTC m=+1252.901269230" lastFinishedPulling="2026-01-31 09:23:08.541174967 +0000 UTC m=+1333.034537409" observedRunningTime="2026-01-31 09:23:09.432465699 +0000 UTC m=+1333.925828141" watchObservedRunningTime="2026-01-31 09:23:09.470751969 +0000 UTC m=+1333.964114421" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.580619 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht6rb\" (UniqueName: \"kubernetes.io/projected/42eafeb6-68c0-479b-bc77-62967566390e-kube-api-access-ht6rb\") pod \"glance-db-sync-bktdp\" (UID: \"42eafeb6-68c0-479b-bc77-62967566390e\") " 
pod="openstack/glance-db-sync-bktdp" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.580689 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42eafeb6-68c0-479b-bc77-62967566390e-db-sync-config-data\") pod \"glance-db-sync-bktdp\" (UID: \"42eafeb6-68c0-479b-bc77-62967566390e\") " pod="openstack/glance-db-sync-bktdp" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.580717 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42eafeb6-68c0-479b-bc77-62967566390e-config-data\") pod \"glance-db-sync-bktdp\" (UID: \"42eafeb6-68c0-479b-bc77-62967566390e\") " pod="openstack/glance-db-sync-bktdp" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.580753 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42eafeb6-68c0-479b-bc77-62967566390e-combined-ca-bundle\") pod \"glance-db-sync-bktdp\" (UID: \"42eafeb6-68c0-479b-bc77-62967566390e\") " pod="openstack/glance-db-sync-bktdp" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.593992 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42eafeb6-68c0-479b-bc77-62967566390e-combined-ca-bundle\") pod \"glance-db-sync-bktdp\" (UID: \"42eafeb6-68c0-479b-bc77-62967566390e\") " pod="openstack/glance-db-sync-bktdp" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.598917 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42eafeb6-68c0-479b-bc77-62967566390e-db-sync-config-data\") pod \"glance-db-sync-bktdp\" (UID: \"42eafeb6-68c0-479b-bc77-62967566390e\") " pod="openstack/glance-db-sync-bktdp" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.618450 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42eafeb6-68c0-479b-bc77-62967566390e-config-data\") pod \"glance-db-sync-bktdp\" (UID: \"42eafeb6-68c0-479b-bc77-62967566390e\") " pod="openstack/glance-db-sync-bktdp" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.643550 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht6rb\" (UniqueName: \"kubernetes.io/projected/42eafeb6-68c0-479b-bc77-62967566390e-kube-api-access-ht6rb\") pod \"glance-db-sync-bktdp\" (UID: \"42eafeb6-68c0-479b-bc77-62967566390e\") " pod="openstack/glance-db-sync-bktdp" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.738664 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ps27t" podUID="dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73" containerName="ovn-controller" probeResult="failure" output=< Jan 31 09:23:09 crc kubenswrapper[4830]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 31 09:23:09 crc kubenswrapper[4830]: > Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.749957 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-gk8dv" Jan 31 09:23:09 crc kubenswrapper[4830]: I0131 09:23:09.755587 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bktdp" Jan 31 09:23:10 crc kubenswrapper[4830]: I0131 09:23:10.387115 4830 generic.go:334] "Generic (PLEG): container finished" podID="18af810d-9de4-4822-86d2-bb7e8a8a449b" containerID="b82d566e252a5e263e93f29e01a43117ffad3aa3827523d8c1a930eedb4b72fd" exitCode=0 Jan 31 09:23:10 crc kubenswrapper[4830]: I0131 09:23:10.387620 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"18af810d-9de4-4822-86d2-bb7e8a8a449b","Type":"ContainerDied","Data":"b82d566e252a5e263e93f29e01a43117ffad3aa3827523d8c1a930eedb4b72fd"} Jan 31 09:23:10 crc kubenswrapper[4830]: I0131 09:23:10.395443 4830 generic.go:334] "Generic (PLEG): container finished" podID="8e40a106-74cd-45ea-a936-c34daaf9ce6e" containerID="11ff703748b26671c1b2a53117176e9a226db42f7c70f7520609779e5573b1cb" exitCode=0 Jan 31 09:23:10 crc kubenswrapper[4830]: I0131 09:23:10.395749 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"8e40a106-74cd-45ea-a936-c34daaf9ce6e","Type":"ContainerDied","Data":"11ff703748b26671c1b2a53117176e9a226db42f7c70f7520609779e5573b1cb"} Jan 31 09:23:10 crc kubenswrapper[4830]: I0131 09:23:10.431747 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1023f27a-9c1d-4818-a3f5-94946296ae46","Type":"ContainerStarted","Data":"0ac5512cab3e75878921d15561385e28257399bf91826dfd8cf4cf1f581eb8b8"} Jan 31 09:23:10 crc kubenswrapper[4830]: I0131 09:23:10.440130 4830 generic.go:334] "Generic (PLEG): container finished" podID="f60eed79-badf-4909-869b-edbfdfb774ac" containerID="55ba60a30982fec7fc25c3710647f237bdb8bc45991a8a20664ac57e97a9a09e" exitCode=0 Jan 31 09:23:10 crc kubenswrapper[4830]: I0131 09:23:10.440257 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"f60eed79-badf-4909-869b-edbfdfb774ac","Type":"ContainerDied","Data":"55ba60a30982fec7fc25c3710647f237bdb8bc45991a8a20664ac57e97a9a09e"} Jan 31 09:23:10 crc kubenswrapper[4830]: I0131 09:23:10.526136 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bktdp"] Jan 31 09:23:11 crc kubenswrapper[4830]: I0131 09:23:11.496468 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bktdp" event={"ID":"42eafeb6-68c0-479b-bc77-62967566390e","Type":"ContainerStarted","Data":"4792b3501233d808deb263ee9da287d71ea8e3134c6c978497a515c8cf5247be"} Jan 31 09:23:11 crc kubenswrapper[4830]: I0131 09:23:11.519267 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"8e40a106-74cd-45ea-a936-c34daaf9ce6e","Type":"ContainerStarted","Data":"fdf0438ec0de61a2c93a2ff550b3e41400bda9762500c3287b5aa805765757b3"} Jan 31 09:23:11 crc kubenswrapper[4830]: I0131 09:23:11.519568 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 31 09:23:11 crc kubenswrapper[4830]: I0131 09:23:11.563162 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=52.364198515 podStartE2EDuration="1m38.5631307s" podCreationTimestamp="2026-01-31 09:21:33 +0000 UTC" firstStartedPulling="2026-01-31 09:21:36.700771829 +0000 UTC m=+1241.194134271" lastFinishedPulling="2026-01-31 09:22:22.899704014 +0000 UTC m=+1287.393066456" observedRunningTime="2026-01-31 09:23:11.56033197 +0000 UTC m=+1336.053694432" watchObservedRunningTime="2026-01-31 
09:23:11.5631307 +0000 UTC m=+1336.056493152" Jan 31 09:23:11 crc kubenswrapper[4830]: I0131 09:23:11.803092 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 31 09:23:11 crc kubenswrapper[4830]: I0131 09:23:11.808433 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 31 09:23:11 crc kubenswrapper[4830]: I0131 09:23:11.820397 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 31 09:23:11 crc kubenswrapper[4830]: I0131 09:23:11.837838 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 31 09:23:11 crc kubenswrapper[4830]: I0131 09:23:11.956461 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2thbl\" (UniqueName: \"kubernetes.io/projected/d1ca860e-5493-40e2-bc10-ded100de4569-kube-api-access-2thbl\") pod \"mysqld-exporter-0\" (UID: \"d1ca860e-5493-40e2-bc10-ded100de4569\") " pod="openstack/mysqld-exporter-0" Jan 31 09:23:11 crc kubenswrapper[4830]: I0131 09:23:11.956890 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ca860e-5493-40e2-bc10-ded100de4569-config-data\") pod \"mysqld-exporter-0\" (UID: \"d1ca860e-5493-40e2-bc10-ded100de4569\") " pod="openstack/mysqld-exporter-0" Jan 31 09:23:11 crc kubenswrapper[4830]: I0131 09:23:11.956923 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ca860e-5493-40e2-bc10-ded100de4569-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"d1ca860e-5493-40e2-bc10-ded100de4569\") " pod="openstack/mysqld-exporter-0" Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.058640 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ca860e-5493-40e2-bc10-ded100de4569-config-data\") pod \"mysqld-exporter-0\" (UID: \"d1ca860e-5493-40e2-bc10-ded100de4569\") " pod="openstack/mysqld-exporter-0" Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.058712 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ca860e-5493-40e2-bc10-ded100de4569-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"d1ca860e-5493-40e2-bc10-ded100de4569\") " pod="openstack/mysqld-exporter-0" Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.058784 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2thbl\" (UniqueName: \"kubernetes.io/projected/d1ca860e-5493-40e2-bc10-ded100de4569-kube-api-access-2thbl\") pod \"mysqld-exporter-0\" (UID: \"d1ca860e-5493-40e2-bc10-ded100de4569\") " pod="openstack/mysqld-exporter-0" Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.066083 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ca860e-5493-40e2-bc10-ded100de4569-config-data\") pod \"mysqld-exporter-0\" (UID: \"d1ca860e-5493-40e2-bc10-ded100de4569\") " pod="openstack/mysqld-exporter-0" Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.076129 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d1ca860e-5493-40e2-bc10-ded100de4569-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"d1ca860e-5493-40e2-bc10-ded100de4569\") " pod="openstack/mysqld-exporter-0" Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.084628 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2thbl\" (UniqueName: \"kubernetes.io/projected/d1ca860e-5493-40e2-bc10-ded100de4569-kube-api-access-2thbl\") pod \"mysqld-exporter-0\" (UID: \"d1ca860e-5493-40e2-bc10-ded100de4569\") " pod="openstack/mysqld-exporter-0" Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.145579 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-22bk9"] Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.148138 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-22bk9" Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.152961 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.163736 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-22bk9"] Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.178458 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.270775 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef16ab0e-944c-4b5c-9203-e15202c4a3eb-operator-scripts\") pod \"root-account-create-update-22bk9\" (UID: \"ef16ab0e-944c-4b5c-9203-e15202c4a3eb\") " pod="openstack/root-account-create-update-22bk9" Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.273215 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfgq6\" (UniqueName: \"kubernetes.io/projected/ef16ab0e-944c-4b5c-9203-e15202c4a3eb-kube-api-access-qfgq6\") pod \"root-account-create-update-22bk9\" (UID: \"ef16ab0e-944c-4b5c-9203-e15202c4a3eb\") " pod="openstack/root-account-create-update-22bk9" Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.378485 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef16ab0e-944c-4b5c-9203-e15202c4a3eb-operator-scripts\") pod \"root-account-create-update-22bk9\" (UID: \"ef16ab0e-944c-4b5c-9203-e15202c4a3eb\") " pod="openstack/root-account-create-update-22bk9" Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.378685 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfgq6\" (UniqueName: \"kubernetes.io/projected/ef16ab0e-944c-4b5c-9203-e15202c4a3eb-kube-api-access-qfgq6\") pod \"root-account-create-update-22bk9\" (UID: \"ef16ab0e-944c-4b5c-9203-e15202c4a3eb\") " pod="openstack/root-account-create-update-22bk9" Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.379716 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef16ab0e-944c-4b5c-9203-e15202c4a3eb-operator-scripts\") pod \"root-account-create-update-22bk9\" (UID: \"ef16ab0e-944c-4b5c-9203-e15202c4a3eb\") " pod="openstack/root-account-create-update-22bk9" Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 
09:23:12.435437 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfgq6\" (UniqueName: \"kubernetes.io/projected/ef16ab0e-944c-4b5c-9203-e15202c4a3eb-kube-api-access-qfgq6\") pod \"root-account-create-update-22bk9\" (UID: \"ef16ab0e-944c-4b5c-9203-e15202c4a3eb\") " pod="openstack/root-account-create-update-22bk9" Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.478425 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-22bk9" Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.581425 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"18af810d-9de4-4822-86d2-bb7e8a8a449b","Type":"ContainerStarted","Data":"acc702009ec1b1c264fd284a800bc7eafae655c03f62683636397a46f06f969c"} Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.583200 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.611459 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1023f27a-9c1d-4818-a3f5-94946296ae46","Type":"ContainerStarted","Data":"5a0c6221abea15731f0a333b72a475e192c7533b84cadae2b38ea6a3e460c2c6"} Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.611516 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1023f27a-9c1d-4818-a3f5-94946296ae46","Type":"ContainerStarted","Data":"a01f313331624fb4a48b824de3ede9487723cd330f0d21852bf74bdf8746f6e5"} Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.611525 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1023f27a-9c1d-4818-a3f5-94946296ae46","Type":"ContainerStarted","Data":"36e5bd50782850dc85425ff5f9d0f8c26ae96fb40824bed609da0619ee418c2d"} Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.690580 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"f60eed79-badf-4909-869b-edbfdfb774ac","Type":"ContainerStarted","Data":"fa124a78a1a97bd95e6654b767ef2cfde5330565858aede9e940e0d0c99c16bf"} Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.690927 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.812431 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=53.303482825 podStartE2EDuration="1m39.812409731s" podCreationTimestamp="2026-01-31 09:21:33 +0000 UTC" firstStartedPulling="2026-01-31 09:21:36.387587257 +0000 UTC m=+1240.880949699" lastFinishedPulling="2026-01-31 09:22:22.896514163 +0000 UTC m=+1287.389876605" observedRunningTime="2026-01-31 09:23:12.806661997 +0000 UTC m=+1337.300024439" watchObservedRunningTime="2026-01-31 09:23:12.812409731 +0000 UTC m=+1337.305772173" Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.821569 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=53.415749283 podStartE2EDuration="1m38.82151737s" podCreationTimestamp="2026-01-31 09:21:34 +0000 UTC" firstStartedPulling="2026-01-31 09:21:37.360470474 +0000 UTC m=+1241.853832906" lastFinishedPulling="2026-01-31 09:22:22.766238551 +0000 UTC m=+1287.259600993" observedRunningTime="2026-01-31 09:23:12.652963199 +0000 UTC 
m=+1337.146325631" watchObservedRunningTime="2026-01-31 09:23:12.82151737 +0000 UTC m=+1337.314879812" Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.823768 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.823809 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.827803 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:12 crc kubenswrapper[4830]: I0131 09:23:12.864606 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 31 09:23:13 crc kubenswrapper[4830]: I0131 09:23:13.546965 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-22bk9"] Jan 31 09:23:13 crc kubenswrapper[4830]: W0131 09:23:13.555469 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef16ab0e_944c_4b5c_9203_e15202c4a3eb.slice/crio-1af579d7536bdd18eee642949c1127580a199eb2800f524f36fe15e8ef27d2a8 WatchSource:0}: Error finding container 1af579d7536bdd18eee642949c1127580a199eb2800f524f36fe15e8ef27d2a8: Status 404 returned error can't find the container with id 1af579d7536bdd18eee642949c1127580a199eb2800f524f36fe15e8ef27d2a8 Jan 31 09:23:13 crc kubenswrapper[4830]: I0131 09:23:13.704610 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"d1ca860e-5493-40e2-bc10-ded100de4569","Type":"ContainerStarted","Data":"653d1ef297897642a2b36c89b67d4f6c55eb948472451e2be6750b1dec0c1c07"} Jan 31 09:23:13 crc kubenswrapper[4830]: I0131 09:23:13.719488 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1023f27a-9c1d-4818-a3f5-94946296ae46","Type":"ContainerStarted","Data":"8fe08d80e2fb6a7ee53a885b98343b172d01ecee909671998afc2b550759b0a8"} Jan 31 09:23:13 crc kubenswrapper[4830]: I0131 09:23:13.724647 4830 generic.go:334] "Generic (PLEG): container finished" podID="759f3f02-a9de-4e01-97f9-a97424c592a6" containerID="1bae92a840384b060e4d01df81c92a143f6bda7ee6adbcf67e5d9346d46a2d67" exitCode=0 Jan 31 09:23:13 crc kubenswrapper[4830]: I0131 09:23:13.724808 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"759f3f02-a9de-4e01-97f9-a97424c592a6","Type":"ContainerDied","Data":"1bae92a840384b060e4d01df81c92a143f6bda7ee6adbcf67e5d9346d46a2d67"} Jan 31 09:23:13 crc kubenswrapper[4830]: I0131 09:23:13.729123 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-22bk9" event={"ID":"ef16ab0e-944c-4b5c-9203-e15202c4a3eb","Type":"ContainerStarted","Data":"1af579d7536bdd18eee642949c1127580a199eb2800f524f36fe15e8ef27d2a8"} Jan 31 09:23:13 crc kubenswrapper[4830]: I0131 09:23:13.734090 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:14 crc kubenswrapper[4830]: I0131 09:23:14.541852 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ps27t" podUID="dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73" containerName="ovn-controller" probeResult="failure" output=< Jan 31 09:23:14 crc kubenswrapper[4830]: ERROR - ovn-controller connection status is 'not connected', expecting 
'connected' status Jan 31 09:23:14 crc kubenswrapper[4830]: > Jan 31 09:23:14 crc kubenswrapper[4830]: I0131 09:23:14.599339 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-gk8dv" Jan 31 09:23:14 crc kubenswrapper[4830]: I0131 09:23:14.766698 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"759f3f02-a9de-4e01-97f9-a97424c592a6","Type":"ContainerStarted","Data":"4314798a487c6540490c04ced2ec23ba07d093b0c7ea88a6d7766d38ea5280fb"} Jan 31 09:23:14 crc kubenswrapper[4830]: I0131 09:23:14.768036 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 31 09:23:14 crc kubenswrapper[4830]: I0131 09:23:14.783649 4830 generic.go:334] "Generic (PLEG): container finished" podID="ef16ab0e-944c-4b5c-9203-e15202c4a3eb" containerID="67905eb9bab90d265771b80fa447edbfdada0d23a9a8ebd6c567074c7d71c248" exitCode=0 Jan 31 09:23:14 crc kubenswrapper[4830]: I0131 09:23:14.783992 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-22bk9" event={"ID":"ef16ab0e-944c-4b5c-9203-e15202c4a3eb","Type":"ContainerDied","Data":"67905eb9bab90d265771b80fa447edbfdada0d23a9a8ebd6c567074c7d71c248"} Jan 31 09:23:14 crc kubenswrapper[4830]: I0131 09:23:14.848945 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371935.005856 podStartE2EDuration="1m41.84892102s" podCreationTimestamp="2026-01-31 09:21:33 +0000 UTC" firstStartedPulling="2026-01-31 09:21:36.579662999 +0000 UTC m=+1241.073025441" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:23:14.810445684 +0000 UTC m=+1339.303808136" watchObservedRunningTime="2026-01-31 09:23:14.84892102 +0000 UTC m=+1339.342283452" Jan 31 09:23:14 crc kubenswrapper[4830]: I0131 09:23:14.892686 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ps27t-config-9j8bp"] Jan 31 09:23:14 crc kubenswrapper[4830]: I0131 09:23:14.904197 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ps27t-config-9j8bp" Jan 31 09:23:14 crc kubenswrapper[4830]: I0131 09:23:14.912973 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ps27t-config-9j8bp"] Jan 31 09:23:14 crc kubenswrapper[4830]: I0131 09:23:14.916656 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 31 09:23:15 crc kubenswrapper[4830]: I0131 09:23:15.072964 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/781d272e-1da3-4dea-a516-a0156b7110e3-var-run-ovn\") pod \"ovn-controller-ps27t-config-9j8bp\" (UID: \"781d272e-1da3-4dea-a516-a0156b7110e3\") " pod="openstack/ovn-controller-ps27t-config-9j8bp" Jan 31 09:23:15 crc kubenswrapper[4830]: I0131 09:23:15.073055 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/781d272e-1da3-4dea-a516-a0156b7110e3-var-log-ovn\") pod \"ovn-controller-ps27t-config-9j8bp\" (UID: \"781d272e-1da3-4dea-a516-a0156b7110e3\") " pod="openstack/ovn-controller-ps27t-config-9j8bp" Jan 31 09:23:15 crc kubenswrapper[4830]: I0131 09:23:15.073103 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/781d272e-1da3-4dea-a516-a0156b7110e3-additional-scripts\") pod \"ovn-controller-ps27t-config-9j8bp\" (UID: \"781d272e-1da3-4dea-a516-a0156b7110e3\") " pod="openstack/ovn-controller-ps27t-config-9j8bp" Jan 31 09:23:15 crc kubenswrapper[4830]: I0131 09:23:15.073156 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/781d272e-1da3-4dea-a516-a0156b7110e3-scripts\") pod \"ovn-controller-ps27t-config-9j8bp\" (UID: \"781d272e-1da3-4dea-a516-a0156b7110e3\") " pod="openstack/ovn-controller-ps27t-config-9j8bp" Jan 31 09:23:15 crc kubenswrapper[4830]: I0131 09:23:15.073198 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/781d272e-1da3-4dea-a516-a0156b7110e3-var-run\") pod \"ovn-controller-ps27t-config-9j8bp\" (UID: \"781d272e-1da3-4dea-a516-a0156b7110e3\") " pod="openstack/ovn-controller-ps27t-config-9j8bp" Jan 31 09:23:15 crc kubenswrapper[4830]: I0131 09:23:15.073278 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkv9n\" (UniqueName: \"kubernetes.io/projected/781d272e-1da3-4dea-a516-a0156b7110e3-kube-api-access-jkv9n\") pod \"ovn-controller-ps27t-config-9j8bp\" (UID: \"781d272e-1da3-4dea-a516-a0156b7110e3\") " pod="openstack/ovn-controller-ps27t-config-9j8bp" Jan 31 09:23:15 crc kubenswrapper[4830]: I0131 09:23:15.175663 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/781d272e-1da3-4dea-a516-a0156b7110e3-var-run\") pod \"ovn-controller-ps27t-config-9j8bp\" (UID: \"781d272e-1da3-4dea-a516-a0156b7110e3\") " pod="openstack/ovn-controller-ps27t-config-9j8bp" Jan 31 09:23:15 crc kubenswrapper[4830]: I0131 09:23:15.175831 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkv9n\" (UniqueName: 
\"kubernetes.io/projected/781d272e-1da3-4dea-a516-a0156b7110e3-kube-api-access-jkv9n\") pod \"ovn-controller-ps27t-config-9j8bp\" (UID: \"781d272e-1da3-4dea-a516-a0156b7110e3\") " pod="openstack/ovn-controller-ps27t-config-9j8bp" Jan 31 09:23:15 crc kubenswrapper[4830]: I0131 09:23:15.175883 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/781d272e-1da3-4dea-a516-a0156b7110e3-var-run-ovn\") pod \"ovn-controller-ps27t-config-9j8bp\" (UID: \"781d272e-1da3-4dea-a516-a0156b7110e3\") " pod="openstack/ovn-controller-ps27t-config-9j8bp" Jan 31 09:23:15 crc kubenswrapper[4830]: I0131 09:23:15.175955 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/781d272e-1da3-4dea-a516-a0156b7110e3-var-log-ovn\") pod \"ovn-controller-ps27t-config-9j8bp\" (UID: \"781d272e-1da3-4dea-a516-a0156b7110e3\") " pod="openstack/ovn-controller-ps27t-config-9j8bp" Jan 31 09:23:15 crc kubenswrapper[4830]: I0131 09:23:15.175989 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/781d272e-1da3-4dea-a516-a0156b7110e3-additional-scripts\") pod \"ovn-controller-ps27t-config-9j8bp\" (UID: \"781d272e-1da3-4dea-a516-a0156b7110e3\") " pod="openstack/ovn-controller-ps27t-config-9j8bp" Jan 31 09:23:15 crc kubenswrapper[4830]: I0131 09:23:15.176033 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/781d272e-1da3-4dea-a516-a0156b7110e3-scripts\") pod \"ovn-controller-ps27t-config-9j8bp\" (UID: \"781d272e-1da3-4dea-a516-a0156b7110e3\") " pod="openstack/ovn-controller-ps27t-config-9j8bp" Jan 31 09:23:15 crc kubenswrapper[4830]: I0131 09:23:15.176197 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/781d272e-1da3-4dea-a516-a0156b7110e3-var-run\") pod \"ovn-controller-ps27t-config-9j8bp\" (UID: \"781d272e-1da3-4dea-a516-a0156b7110e3\") " pod="openstack/ovn-controller-ps27t-config-9j8bp" Jan 31 09:23:15 crc kubenswrapper[4830]: I0131 09:23:15.176317 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/781d272e-1da3-4dea-a516-a0156b7110e3-var-run-ovn\") pod \"ovn-controller-ps27t-config-9j8bp\" (UID: \"781d272e-1da3-4dea-a516-a0156b7110e3\") " pod="openstack/ovn-controller-ps27t-config-9j8bp" Jan 31 09:23:15 crc kubenswrapper[4830]: I0131 09:23:15.176468 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/781d272e-1da3-4dea-a516-a0156b7110e3-var-log-ovn\") pod \"ovn-controller-ps27t-config-9j8bp\" (UID: \"781d272e-1da3-4dea-a516-a0156b7110e3\") " pod="openstack/ovn-controller-ps27t-config-9j8bp" Jan 31 09:23:15 crc kubenswrapper[4830]: I0131 09:23:15.177195 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/781d272e-1da3-4dea-a516-a0156b7110e3-additional-scripts\") pod \"ovn-controller-ps27t-config-9j8bp\" (UID: \"781d272e-1da3-4dea-a516-a0156b7110e3\") " pod="openstack/ovn-controller-ps27t-config-9j8bp" Jan 31 09:23:15 crc kubenswrapper[4830]: I0131 09:23:15.178357 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/781d272e-1da3-4dea-a516-a0156b7110e3-scripts\") pod \"ovn-controller-ps27t-config-9j8bp\" (UID: \"781d272e-1da3-4dea-a516-a0156b7110e3\") " pod="openstack/ovn-controller-ps27t-config-9j8bp" Jan 31 09:23:15 crc kubenswrapper[4830]: I0131 09:23:15.216440 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkv9n\" (UniqueName: \"kubernetes.io/projected/781d272e-1da3-4dea-a516-a0156b7110e3-kube-api-access-jkv9n\") pod \"ovn-controller-ps27t-config-9j8bp\" (UID: \"781d272e-1da3-4dea-a516-a0156b7110e3\") " pod="openstack/ovn-controller-ps27t-config-9j8bp" Jan 31 09:23:15 crc kubenswrapper[4830]: I0131 09:23:15.240902 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ps27t-config-9j8bp" Jan 31 09:23:16 crc kubenswrapper[4830]: I0131 09:23:16.679451 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ps27t-config-9j8bp"] Jan 31 09:23:16 crc kubenswrapper[4830]: I0131 09:23:16.763479 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-22bk9" Jan 31 09:23:16 crc kubenswrapper[4830]: I0131 09:23:16.819089 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef16ab0e-944c-4b5c-9203-e15202c4a3eb-operator-scripts\") pod \"ef16ab0e-944c-4b5c-9203-e15202c4a3eb\" (UID: \"ef16ab0e-944c-4b5c-9203-e15202c4a3eb\") " Jan 31 09:23:16 crc kubenswrapper[4830]: I0131 09:23:16.819383 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfgq6\" (UniqueName: \"kubernetes.io/projected/ef16ab0e-944c-4b5c-9203-e15202c4a3eb-kube-api-access-qfgq6\") pod \"ef16ab0e-944c-4b5c-9203-e15202c4a3eb\" (UID: \"ef16ab0e-944c-4b5c-9203-e15202c4a3eb\") " Jan 31 09:23:16 crc kubenswrapper[4830]: I0131 09:23:16.821861 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef16ab0e-944c-4b5c-9203-e15202c4a3eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ef16ab0e-944c-4b5c-9203-e15202c4a3eb" (UID: "ef16ab0e-944c-4b5c-9203-e15202c4a3eb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:16 crc kubenswrapper[4830]: I0131 09:23:16.831942 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef16ab0e-944c-4b5c-9203-e15202c4a3eb-kube-api-access-qfgq6" (OuterVolumeSpecName: "kube-api-access-qfgq6") pod "ef16ab0e-944c-4b5c-9203-e15202c4a3eb" (UID: "ef16ab0e-944c-4b5c-9203-e15202c4a3eb"). InnerVolumeSpecName "kube-api-access-qfgq6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:16 crc kubenswrapper[4830]: I0131 09:23:16.838587 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ps27t-config-9j8bp" event={"ID":"781d272e-1da3-4dea-a516-a0156b7110e3","Type":"ContainerStarted","Data":"c0e2e97f478c2b0412c44d930d4eaf902c5236a9c08a6a1fb72a3e68f4e2d5bf"} Jan 31 09:23:16 crc kubenswrapper[4830]: I0131 09:23:16.851784 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-22bk9" event={"ID":"ef16ab0e-944c-4b5c-9203-e15202c4a3eb","Type":"ContainerDied","Data":"1af579d7536bdd18eee642949c1127580a199eb2800f524f36fe15e8ef27d2a8"} Jan 31 09:23:16 crc kubenswrapper[4830]: I0131 09:23:16.852231 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1af579d7536bdd18eee642949c1127580a199eb2800f524f36fe15e8ef27d2a8" Jan 31 09:23:16 crc kubenswrapper[4830]: I0131 09:23:16.852359 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-22bk9" Jan 31 09:23:16 crc kubenswrapper[4830]: I0131 09:23:16.928803 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef16ab0e-944c-4b5c-9203-e15202c4a3eb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:16 crc kubenswrapper[4830]: I0131 09:23:16.929120 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfgq6\" (UniqueName: \"kubernetes.io/projected/ef16ab0e-944c-4b5c-9203-e15202c4a3eb-kube-api-access-qfgq6\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:17 crc kubenswrapper[4830]: I0131 09:23:17.700013 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 31 09:23:17 crc kubenswrapper[4830]: I0131 09:23:17.701070 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="68109d40-9af0-4c37-bf02-7b4744dbab5f" containerName="prometheus" containerID="cri-o://7adaf06fa536ac40db61bcf932640a9fb6f67d5b1eeca9af5a4e09a11f98afc7" gracePeriod=600 Jan 31 09:23:17 crc kubenswrapper[4830]: I0131 09:23:17.701324 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="68109d40-9af0-4c37-bf02-7b4744dbab5f" containerName="thanos-sidecar" containerID="cri-o://f314a67cf197f76f9e9553ea1a3af00ecf9d246f64890bb8e50abb3a4adb6a84" gracePeriod=600 Jan 31 09:23:17 crc kubenswrapper[4830]: I0131 09:23:17.701424 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="68109d40-9af0-4c37-bf02-7b4744dbab5f" containerName="config-reloader" containerID="cri-o://a88cc0ab549485c84a69e7634ff67075f9eef9e6a569736ca8920015bf3445a5" gracePeriod=600 Jan 31 09:23:17 crc kubenswrapper[4830]: I0131 09:23:17.823667 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="68109d40-9af0-4c37-bf02-7b4744dbab5f" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.138:9090/-/ready\": dial tcp 10.217.0.138:9090: connect: connection refused" Jan 31 09:23:17 crc kubenswrapper[4830]: I0131 09:23:17.897635 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" 
event={"ID":"d1ca860e-5493-40e2-bc10-ded100de4569","Type":"ContainerStarted","Data":"ad2c590706b8dbc973c2b43163b0440fd6e3529f84e4d4ce4eb07edc4d7484f2"} Jan 31 09:23:17 crc kubenswrapper[4830]: I0131 09:23:17.934822 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=3.8872872320000003 podStartE2EDuration="6.934794404s" podCreationTimestamp="2026-01-31 09:23:11 +0000 UTC" firstStartedPulling="2026-01-31 09:23:12.933010807 +0000 UTC m=+1337.426373239" lastFinishedPulling="2026-01-31 09:23:15.980517969 +0000 UTC m=+1340.473880411" observedRunningTime="2026-01-31 09:23:17.931612814 +0000 UTC m=+1342.424975256" watchObservedRunningTime="2026-01-31 09:23:17.934794404 +0000 UTC m=+1342.428156846" Jan 31 09:23:17 crc kubenswrapper[4830]: I0131 09:23:17.942269 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1023f27a-9c1d-4818-a3f5-94946296ae46","Type":"ContainerStarted","Data":"400dab6229276e76461cfde46e1bd98137b6b69295c876a9373b9cb2b3a9c771"} Jan 31 09:23:17 crc kubenswrapper[4830]: I0131 09:23:17.942363 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1023f27a-9c1d-4818-a3f5-94946296ae46","Type":"ContainerStarted","Data":"bc720fb27994b29b9d9a2c6905231d36d758c291830cce452bcb3e0a81ae8db7"} Jan 31 09:23:17 crc kubenswrapper[4830]: I0131 09:23:17.955309 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ps27t-config-9j8bp" event={"ID":"781d272e-1da3-4dea-a516-a0156b7110e3","Type":"ContainerStarted","Data":"bdc0cbdf11a607ea9e1342ee17c82a395fb7422100900716c1d145e147848ae5"} Jan 31 09:23:17 crc kubenswrapper[4830]: I0131 09:23:17.988532 4830 generic.go:334] "Generic (PLEG): container finished" podID="68109d40-9af0-4c37-bf02-7b4744dbab5f" containerID="f314a67cf197f76f9e9553ea1a3af00ecf9d246f64890bb8e50abb3a4adb6a84" exitCode=0 Jan 31 09:23:17 crc kubenswrapper[4830]: I0131 09:23:17.988585 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"68109d40-9af0-4c37-bf02-7b4744dbab5f","Type":"ContainerDied","Data":"f314a67cf197f76f9e9553ea1a3af00ecf9d246f64890bb8e50abb3a4adb6a84"} Jan 31 09:23:18 crc kubenswrapper[4830]: I0131 09:23:18.004758 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ps27t-config-9j8bp" podStartSLOduration=4.004732687 podStartE2EDuration="4.004732687s" podCreationTimestamp="2026-01-31 09:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:23:17.984963993 +0000 UTC m=+1342.478326445" watchObservedRunningTime="2026-01-31 09:23:18.004732687 +0000 UTC m=+1342.498095129" Jan 31 09:23:18 crc kubenswrapper[4830]: I0131 09:23:18.937315 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.031868 4830 generic.go:334] "Generic (PLEG): container finished" podID="781d272e-1da3-4dea-a516-a0156b7110e3" containerID="bdc0cbdf11a607ea9e1342ee17c82a395fb7422100900716c1d145e147848ae5" exitCode=0 Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.032011 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ps27t-config-9j8bp" event={"ID":"781d272e-1da3-4dea-a516-a0156b7110e3","Type":"ContainerDied","Data":"bdc0cbdf11a607ea9e1342ee17c82a395fb7422100900716c1d145e147848ae5"} Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.050517 4830 generic.go:334] "Generic (PLEG): container finished" podID="68109d40-9af0-4c37-bf02-7b4744dbab5f" containerID="a88cc0ab549485c84a69e7634ff67075f9eef9e6a569736ca8920015bf3445a5" exitCode=0 Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.050605 4830 generic.go:334] "Generic (PLEG): container finished" podID="68109d40-9af0-4c37-bf02-7b4744dbab5f" containerID="7adaf06fa536ac40db61bcf932640a9fb6f67d5b1eeca9af5a4e09a11f98afc7" exitCode=0 Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.050675 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"68109d40-9af0-4c37-bf02-7b4744dbab5f","Type":"ContainerDied","Data":"a88cc0ab549485c84a69e7634ff67075f9eef9e6a569736ca8920015bf3445a5"} Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.050714 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"68109d40-9af0-4c37-bf02-7b4744dbab5f","Type":"ContainerDied","Data":"7adaf06fa536ac40db61bcf932640a9fb6f67d5b1eeca9af5a4e09a11f98afc7"} Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.050784 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"68109d40-9af0-4c37-bf02-7b4744dbab5f","Type":"ContainerDied","Data":"6b8928a60366130aa9d4a34de626cddf20c7c7a4f5e9dd68404d2294d1a938d4"} Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.050819 4830 scope.go:117] "RemoveContainer" containerID="f314a67cf197f76f9e9553ea1a3af00ecf9d246f64890bb8e50abb3a4adb6a84" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.050816 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.095517 4830 scope.go:117] "RemoveContainer" containerID="a88cc0ab549485c84a69e7634ff67075f9eef9e6a569736ca8920015bf3445a5" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.095710 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1023f27a-9c1d-4818-a3f5-94946296ae46","Type":"ContainerStarted","Data":"ca8f11a70b8998bc6a80f91881a04220e3a7d8980daf0df0c74b44a4868d1f6f"} Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.095771 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1023f27a-9c1d-4818-a3f5-94946296ae46","Type":"ContainerStarted","Data":"d4f5cf43a5df6c580b1d8420b6532bced369a18ffbcbbe43fdf0bf478d40b2da"} Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.132420 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/68109d40-9af0-4c37-bf02-7b4744dbab5f-prometheus-metric-storage-rulefiles-2\") pod \"68109d40-9af0-4c37-bf02-7b4744dbab5f\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.132485 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/68109d40-9af0-4c37-bf02-7b4744dbab5f-thanos-prometheus-http-client-file\") pod \"68109d40-9af0-4c37-bf02-7b4744dbab5f\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.132602 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/68109d40-9af0-4c37-bf02-7b4744dbab5f-config-out\") pod \"68109d40-9af0-4c37-bf02-7b4744dbab5f\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.132658 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/68109d40-9af0-4c37-bf02-7b4744dbab5f-prometheus-metric-storage-rulefiles-0\") pod \"68109d40-9af0-4c37-bf02-7b4744dbab5f\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.132708 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/68109d40-9af0-4c37-bf02-7b4744dbab5f-prometheus-metric-storage-rulefiles-1\") pod \"68109d40-9af0-4c37-bf02-7b4744dbab5f\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.132927 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7635d675-22a8-4009-89b3-dfdef75167b6\") pod \"68109d40-9af0-4c37-bf02-7b4744dbab5f\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.133034 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/68109d40-9af0-4c37-bf02-7b4744dbab5f-config\") pod \"68109d40-9af0-4c37-bf02-7b4744dbab5f\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 
09:23:19.133057 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rk7p\" (UniqueName: \"kubernetes.io/projected/68109d40-9af0-4c37-bf02-7b4744dbab5f-kube-api-access-5rk7p\") pod \"68109d40-9af0-4c37-bf02-7b4744dbab5f\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.133122 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/68109d40-9af0-4c37-bf02-7b4744dbab5f-web-config\") pod \"68109d40-9af0-4c37-bf02-7b4744dbab5f\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.133157 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/68109d40-9af0-4c37-bf02-7b4744dbab5f-tls-assets\") pod \"68109d40-9af0-4c37-bf02-7b4744dbab5f\" (UID: \"68109d40-9af0-4c37-bf02-7b4744dbab5f\") " Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.135383 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68109d40-9af0-4c37-bf02-7b4744dbab5f-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "68109d40-9af0-4c37-bf02-7b4744dbab5f" (UID: "68109d40-9af0-4c37-bf02-7b4744dbab5f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.137260 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68109d40-9af0-4c37-bf02-7b4744dbab5f-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "68109d40-9af0-4c37-bf02-7b4744dbab5f" (UID: "68109d40-9af0-4c37-bf02-7b4744dbab5f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.137608 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68109d40-9af0-4c37-bf02-7b4744dbab5f-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "68109d40-9af0-4c37-bf02-7b4744dbab5f" (UID: "68109d40-9af0-4c37-bf02-7b4744dbab5f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.140421 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68109d40-9af0-4c37-bf02-7b4744dbab5f-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "68109d40-9af0-4c37-bf02-7b4744dbab5f" (UID: "68109d40-9af0-4c37-bf02-7b4744dbab5f"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.141234 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68109d40-9af0-4c37-bf02-7b4744dbab5f-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "68109d40-9af0-4c37-bf02-7b4744dbab5f" (UID: "68109d40-9af0-4c37-bf02-7b4744dbab5f"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.142153 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68109d40-9af0-4c37-bf02-7b4744dbab5f-config" (OuterVolumeSpecName: "config") pod "68109d40-9af0-4c37-bf02-7b4744dbab5f" (UID: "68109d40-9af0-4c37-bf02-7b4744dbab5f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.147616 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68109d40-9af0-4c37-bf02-7b4744dbab5f-kube-api-access-5rk7p" (OuterVolumeSpecName: "kube-api-access-5rk7p") pod "68109d40-9af0-4c37-bf02-7b4744dbab5f" (UID: "68109d40-9af0-4c37-bf02-7b4744dbab5f"). InnerVolumeSpecName "kube-api-access-5rk7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.157999 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68109d40-9af0-4c37-bf02-7b4744dbab5f-config-out" (OuterVolumeSpecName: "config-out") pod "68109d40-9af0-4c37-bf02-7b4744dbab5f" (UID: "68109d40-9af0-4c37-bf02-7b4744dbab5f"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.177881 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68109d40-9af0-4c37-bf02-7b4744dbab5f-web-config" (OuterVolumeSpecName: "web-config") pod "68109d40-9af0-4c37-bf02-7b4744dbab5f" (UID: "68109d40-9af0-4c37-bf02-7b4744dbab5f"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.178065 4830 scope.go:117] "RemoveContainer" containerID="7adaf06fa536ac40db61bcf932640a9fb6f67d5b1eeca9af5a4e09a11f98afc7" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.178334 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7635d675-22a8-4009-89b3-dfdef75167b6" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "68109d40-9af0-4c37-bf02-7b4744dbab5f" (UID: "68109d40-9af0-4c37-bf02-7b4744dbab5f"). InnerVolumeSpecName "pvc-7635d675-22a8-4009-89b3-dfdef75167b6". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.209020 4830 scope.go:117] "RemoveContainer" containerID="39dcfcca13639143aaebae3cb77d40e361f67c6338ad727f1999e2a36e3ffabd" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.238158 4830 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/68109d40-9af0-4c37-bf02-7b4744dbab5f-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.238194 4830 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/68109d40-9af0-4c37-bf02-7b4744dbab5f-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.238208 4830 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/68109d40-9af0-4c37-bf02-7b4744dbab5f-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.238220 4830 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/68109d40-9af0-4c37-bf02-7b4744dbab5f-config-out\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.238229 4830 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/68109d40-9af0-4c37-bf02-7b4744dbab5f-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.238240 4830 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/68109d40-9af0-4c37-bf02-7b4744dbab5f-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.238268 4830 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-7635d675-22a8-4009-89b3-dfdef75167b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7635d675-22a8-4009-89b3-dfdef75167b6\") on node \"crc\" " Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.238283 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/68109d40-9af0-4c37-bf02-7b4744dbab5f-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.238295 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rk7p\" (UniqueName: \"kubernetes.io/projected/68109d40-9af0-4c37-bf02-7b4744dbab5f-kube-api-access-5rk7p\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.238306 4830 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/68109d40-9af0-4c37-bf02-7b4744dbab5f-web-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.277203 4830 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.277387 4830 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-7635d675-22a8-4009-89b3-dfdef75167b6" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7635d675-22a8-4009-89b3-dfdef75167b6") on node "crc"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.314454 4830 scope.go:117] "RemoveContainer" containerID="f314a67cf197f76f9e9553ea1a3af00ecf9d246f64890bb8e50abb3a4adb6a84"
Jan 31 09:23:19 crc kubenswrapper[4830]: E0131 09:23:19.315253 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f314a67cf197f76f9e9553ea1a3af00ecf9d246f64890bb8e50abb3a4adb6a84\": container with ID starting with f314a67cf197f76f9e9553ea1a3af00ecf9d246f64890bb8e50abb3a4adb6a84 not found: ID does not exist" containerID="f314a67cf197f76f9e9553ea1a3af00ecf9d246f64890bb8e50abb3a4adb6a84"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.315325 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f314a67cf197f76f9e9553ea1a3af00ecf9d246f64890bb8e50abb3a4adb6a84"} err="failed to get container status \"f314a67cf197f76f9e9553ea1a3af00ecf9d246f64890bb8e50abb3a4adb6a84\": rpc error: code = NotFound desc = could not find container \"f314a67cf197f76f9e9553ea1a3af00ecf9d246f64890bb8e50abb3a4adb6a84\": container with ID starting with f314a67cf197f76f9e9553ea1a3af00ecf9d246f64890bb8e50abb3a4adb6a84 not found: ID does not exist"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.315368 4830 scope.go:117] "RemoveContainer" containerID="a88cc0ab549485c84a69e7634ff67075f9eef9e6a569736ca8920015bf3445a5"
Jan 31 09:23:19 crc kubenswrapper[4830]: E0131 09:23:19.316542 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a88cc0ab549485c84a69e7634ff67075f9eef9e6a569736ca8920015bf3445a5\": container with ID starting with a88cc0ab549485c84a69e7634ff67075f9eef9e6a569736ca8920015bf3445a5 not found: ID does not exist" containerID="a88cc0ab549485c84a69e7634ff67075f9eef9e6a569736ca8920015bf3445a5"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.316580 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a88cc0ab549485c84a69e7634ff67075f9eef9e6a569736ca8920015bf3445a5"} err="failed to get container status \"a88cc0ab549485c84a69e7634ff67075f9eef9e6a569736ca8920015bf3445a5\": rpc error: code = NotFound desc = could not find container \"a88cc0ab549485c84a69e7634ff67075f9eef9e6a569736ca8920015bf3445a5\": container with ID starting with a88cc0ab549485c84a69e7634ff67075f9eef9e6a569736ca8920015bf3445a5 not found: ID does not exist"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.316610 4830 scope.go:117] "RemoveContainer" containerID="7adaf06fa536ac40db61bcf932640a9fb6f67d5b1eeca9af5a4e09a11f98afc7"
Jan 31 09:23:19 crc kubenswrapper[4830]: E0131 09:23:19.317125 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7adaf06fa536ac40db61bcf932640a9fb6f67d5b1eeca9af5a4e09a11f98afc7\": container with ID starting with 7adaf06fa536ac40db61bcf932640a9fb6f67d5b1eeca9af5a4e09a11f98afc7 not found: ID does not exist" containerID="7adaf06fa536ac40db61bcf932640a9fb6f67d5b1eeca9af5a4e09a11f98afc7"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.317176 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7adaf06fa536ac40db61bcf932640a9fb6f67d5b1eeca9af5a4e09a11f98afc7"} err="failed to get container status \"7adaf06fa536ac40db61bcf932640a9fb6f67d5b1eeca9af5a4e09a11f98afc7\": rpc error: code = NotFound desc = could not find container \"7adaf06fa536ac40db61bcf932640a9fb6f67d5b1eeca9af5a4e09a11f98afc7\": container with ID starting with 7adaf06fa536ac40db61bcf932640a9fb6f67d5b1eeca9af5a4e09a11f98afc7 not found: ID does not exist"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.317213 4830 scope.go:117] "RemoveContainer" containerID="39dcfcca13639143aaebae3cb77d40e361f67c6338ad727f1999e2a36e3ffabd"
Jan 31 09:23:19 crc kubenswrapper[4830]: E0131 09:23:19.317689 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39dcfcca13639143aaebae3cb77d40e361f67c6338ad727f1999e2a36e3ffabd\": container with ID starting with 39dcfcca13639143aaebae3cb77d40e361f67c6338ad727f1999e2a36e3ffabd not found: ID does not exist" containerID="39dcfcca13639143aaebae3cb77d40e361f67c6338ad727f1999e2a36e3ffabd"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.317746 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39dcfcca13639143aaebae3cb77d40e361f67c6338ad727f1999e2a36e3ffabd"} err="failed to get container status \"39dcfcca13639143aaebae3cb77d40e361f67c6338ad727f1999e2a36e3ffabd\": rpc error: code = NotFound desc = could not find container \"39dcfcca13639143aaebae3cb77d40e361f67c6338ad727f1999e2a36e3ffabd\": container with ID starting with 39dcfcca13639143aaebae3cb77d40e361f67c6338ad727f1999e2a36e3ffabd not found: ID does not exist"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.317781 4830 scope.go:117] "RemoveContainer" containerID="f314a67cf197f76f9e9553ea1a3af00ecf9d246f64890bb8e50abb3a4adb6a84"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.318294 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f314a67cf197f76f9e9553ea1a3af00ecf9d246f64890bb8e50abb3a4adb6a84"} err="failed to get container status \"f314a67cf197f76f9e9553ea1a3af00ecf9d246f64890bb8e50abb3a4adb6a84\": rpc error: code = NotFound desc = could not find container \"f314a67cf197f76f9e9553ea1a3af00ecf9d246f64890bb8e50abb3a4adb6a84\": container with ID starting with f314a67cf197f76f9e9553ea1a3af00ecf9d246f64890bb8e50abb3a4adb6a84 not found: ID does not exist"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.318315 4830 scope.go:117] "RemoveContainer" containerID="a88cc0ab549485c84a69e7634ff67075f9eef9e6a569736ca8920015bf3445a5"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.318527 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a88cc0ab549485c84a69e7634ff67075f9eef9e6a569736ca8920015bf3445a5"} err="failed to get container status \"a88cc0ab549485c84a69e7634ff67075f9eef9e6a569736ca8920015bf3445a5\": rpc error: code = NotFound desc = could not find container \"a88cc0ab549485c84a69e7634ff67075f9eef9e6a569736ca8920015bf3445a5\": container with ID starting with a88cc0ab549485c84a69e7634ff67075f9eef9e6a569736ca8920015bf3445a5 not found: ID does not exist"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.318549 4830 scope.go:117] "RemoveContainer" containerID="7adaf06fa536ac40db61bcf932640a9fb6f67d5b1eeca9af5a4e09a11f98afc7"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.320636 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7adaf06fa536ac40db61bcf932640a9fb6f67d5b1eeca9af5a4e09a11f98afc7"} err="failed to get container status \"7adaf06fa536ac40db61bcf932640a9fb6f67d5b1eeca9af5a4e09a11f98afc7\": rpc error: code = NotFound desc = could not find container \"7adaf06fa536ac40db61bcf932640a9fb6f67d5b1eeca9af5a4e09a11f98afc7\": container with ID starting with 7adaf06fa536ac40db61bcf932640a9fb6f67d5b1eeca9af5a4e09a11f98afc7 not found: ID does not exist"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.320660 4830 scope.go:117] "RemoveContainer" containerID="39dcfcca13639143aaebae3cb77d40e361f67c6338ad727f1999e2a36e3ffabd"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.321869 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39dcfcca13639143aaebae3cb77d40e361f67c6338ad727f1999e2a36e3ffabd"} err="failed to get container status \"39dcfcca13639143aaebae3cb77d40e361f67c6338ad727f1999e2a36e3ffabd\": rpc error: code = NotFound desc = could not find container \"39dcfcca13639143aaebae3cb77d40e361f67c6338ad727f1999e2a36e3ffabd\": container with ID starting with 39dcfcca13639143aaebae3cb77d40e361f67c6338ad727f1999e2a36e3ffabd not found: ID does not exist"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.341105 4830 reconciler_common.go:293] "Volume detached for volume \"pvc-7635d675-22a8-4009-89b3-dfdef75167b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7635d675-22a8-4009-89b3-dfdef75167b6\") on node \"crc\" DevicePath \"\""
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.414799 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.449982 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.482829 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 31 09:23:19 crc kubenswrapper[4830]: E0131 09:23:19.483442 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68109d40-9af0-4c37-bf02-7b4744dbab5f" containerName="config-reloader"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.483466 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="68109d40-9af0-4c37-bf02-7b4744dbab5f" containerName="config-reloader"
Jan 31 09:23:19 crc kubenswrapper[4830]: E0131 09:23:19.483491 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68109d40-9af0-4c37-bf02-7b4744dbab5f" containerName="init-config-reloader"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.483501 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="68109d40-9af0-4c37-bf02-7b4744dbab5f" containerName="init-config-reloader"
Jan 31 09:23:19 crc kubenswrapper[4830]: E0131 09:23:19.483517 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68109d40-9af0-4c37-bf02-7b4744dbab5f" containerName="thanos-sidecar"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.483525 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="68109d40-9af0-4c37-bf02-7b4744dbab5f" containerName="thanos-sidecar"
Jan 31 09:23:19 crc kubenswrapper[4830]: E0131 09:23:19.483540 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef16ab0e-944c-4b5c-9203-e15202c4a3eb" containerName="mariadb-account-create-update"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.483547 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef16ab0e-944c-4b5c-9203-e15202c4a3eb" containerName="mariadb-account-create-update"
Jan 31 09:23:19 crc kubenswrapper[4830]: E0131 09:23:19.483575 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68109d40-9af0-4c37-bf02-7b4744dbab5f" containerName="prometheus"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.483583 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="68109d40-9af0-4c37-bf02-7b4744dbab5f" containerName="prometheus"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.483857 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="68109d40-9af0-4c37-bf02-7b4744dbab5f" containerName="thanos-sidecar"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.483879 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="68109d40-9af0-4c37-bf02-7b4744dbab5f" containerName="config-reloader"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.483911 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="68109d40-9af0-4c37-bf02-7b4744dbab5f" containerName="prometheus"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.483926 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef16ab0e-944c-4b5c-9203-e15202c4a3eb" containerName="mariadb-account-create-update"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.560506 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.565564 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.567149 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.567424 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.568321 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.568529 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.568699 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.576275 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.586918 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.597061 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-tcqxf"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.648028 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ps27t"
Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.655805 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName:
\"kubernetes.io/projected/7b3b4d1e-8963-469f-abe7-204392275c48-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.656203 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7b3b4d1e-8963-469f-abe7-204392275c48-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.656282 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7b3b4d1e-8963-469f-abe7-204392275c48-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.656308 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7b3b4d1e-8963-469f-abe7-204392275c48-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.656357 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm9xv\" (UniqueName: \"kubernetes.io/projected/7b3b4d1e-8963-469f-abe7-204392275c48-kube-api-access-cm9xv\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.656405 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7b3b4d1e-8963-469f-abe7-204392275c48-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.656452 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7b3b4d1e-8963-469f-abe7-204392275c48-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.656472 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7635d675-22a8-4009-89b3-dfdef75167b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7635d675-22a8-4009-89b3-dfdef75167b6\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.656509 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7b3b4d1e-8963-469f-abe7-204392275c48-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.658272 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7b3b4d1e-8963-469f-abe7-204392275c48-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.658386 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b3b4d1e-8963-469f-abe7-204392275c48-config\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.658433 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7b3b4d1e-8963-469f-abe7-204392275c48-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.658669 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7b3b4d1e-8963-469f-abe7-204392275c48-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.659267 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.760908 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7b3b4d1e-8963-469f-abe7-204392275c48-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.761262 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7b3b4d1e-8963-469f-abe7-204392275c48-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.761445 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7635d675-22a8-4009-89b3-dfdef75167b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7635d675-22a8-4009-89b3-dfdef75167b6\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.761579 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b3b4d1e-8963-469f-abe7-204392275c48-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.761760 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7b3b4d1e-8963-469f-abe7-204392275c48-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.761881 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b3b4d1e-8963-469f-abe7-204392275c48-config\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.761983 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7b3b4d1e-8963-469f-abe7-204392275c48-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.762080 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7b3b4d1e-8963-469f-abe7-204392275c48-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.762212 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7b3b4d1e-8963-469f-abe7-204392275c48-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.762316 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7b3b4d1e-8963-469f-abe7-204392275c48-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.762459 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7b3b4d1e-8963-469f-abe7-204392275c48-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.762554 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7b3b4d1e-8963-469f-abe7-204392275c48-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " 
pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.762769 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm9xv\" (UniqueName: \"kubernetes.io/projected/7b3b4d1e-8963-469f-abe7-204392275c48-kube-api-access-cm9xv\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.763743 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7b3b4d1e-8963-469f-abe7-204392275c48-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.764171 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7b3b4d1e-8963-469f-abe7-204392275c48-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.778770 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7b3b4d1e-8963-469f-abe7-204392275c48-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.779399 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7b3b4d1e-8963-469f-abe7-204392275c48-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.780454 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7b3b4d1e-8963-469f-abe7-204392275c48-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.781104 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7b3b4d1e-8963-469f-abe7-204392275c48-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.781104 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b3b4d1e-8963-469f-abe7-204392275c48-config\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.781250 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7b3b4d1e-8963-469f-abe7-204392275c48-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod 
\"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.781742 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7b3b4d1e-8963-469f-abe7-204392275c48-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.785702 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b3b4d1e-8963-469f-abe7-204392275c48-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.786264 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7b3b4d1e-8963-469f-abe7-204392275c48-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:19 crc kubenswrapper[4830]: I0131 09:23:19.996282 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm9xv\" (UniqueName: \"kubernetes.io/projected/7b3b4d1e-8963-469f-abe7-204392275c48-kube-api-access-cm9xv\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:20 crc kubenswrapper[4830]: I0131 09:23:20.014244 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 31 09:23:20 crc kubenswrapper[4830]: I0131 09:23:20.014302 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7635d675-22a8-4009-89b3-dfdef75167b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7635d675-22a8-4009-89b3-dfdef75167b6\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5aaf80fa6ac263624dc34aeab406fa0928a0afca3643198b3250e21367e491fb/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:20 crc kubenswrapper[4830]: I0131 09:23:20.123083 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1023f27a-9c1d-4818-a3f5-94946296ae46","Type":"ContainerStarted","Data":"dfdaa2d8ff00c4703ad66df96482345bbf027bde96b8b6c38c842ee4c70388ce"} Jan 31 09:23:20 crc kubenswrapper[4830]: I0131 09:23:20.123154 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1023f27a-9c1d-4818-a3f5-94946296ae46","Type":"ContainerStarted","Data":"818c7139389734c7bc0900cdae03fc891e7c92b2dff70f4a2a4c657791a9a8e6"} Jan 31 09:23:20 crc kubenswrapper[4830]: I0131 09:23:20.263407 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68109d40-9af0-4c37-bf02-7b4744dbab5f" path="/var/lib/kubelet/pods/68109d40-9af0-4c37-bf02-7b4744dbab5f/volumes" Jan 31 09:23:20 crc kubenswrapper[4830]: I0131 09:23:20.452516 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7635d675-22a8-4009-89b3-dfdef75167b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7635d675-22a8-4009-89b3-dfdef75167b6\") pod \"prometheus-metric-storage-0\" (UID: \"7b3b4d1e-8963-469f-abe7-204392275c48\") " pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:20 crc kubenswrapper[4830]: I0131 09:23:20.513880 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:20 crc kubenswrapper[4830]: I0131 09:23:20.918227 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ps27t-config-9j8bp" Jan 31 09:23:20 crc kubenswrapper[4830]: I0131 09:23:20.992809 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/781d272e-1da3-4dea-a516-a0156b7110e3-var-run-ovn\") pod \"781d272e-1da3-4dea-a516-a0156b7110e3\" (UID: \"781d272e-1da3-4dea-a516-a0156b7110e3\") " Jan 31 09:23:20 crc kubenswrapper[4830]: I0131 09:23:20.992890 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkv9n\" (UniqueName: \"kubernetes.io/projected/781d272e-1da3-4dea-a516-a0156b7110e3-kube-api-access-jkv9n\") pod \"781d272e-1da3-4dea-a516-a0156b7110e3\" (UID: \"781d272e-1da3-4dea-a516-a0156b7110e3\") " Jan 31 09:23:20 crc kubenswrapper[4830]: I0131 09:23:20.992972 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/781d272e-1da3-4dea-a516-a0156b7110e3-var-run\") pod \"781d272e-1da3-4dea-a516-a0156b7110e3\" (UID: \"781d272e-1da3-4dea-a516-a0156b7110e3\") " Jan 31 09:23:20 crc kubenswrapper[4830]: I0131 09:23:20.993015 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/781d272e-1da3-4dea-a516-a0156b7110e3-additional-scripts\") pod \"781d272e-1da3-4dea-a516-a0156b7110e3\" (UID: \"781d272e-1da3-4dea-a516-a0156b7110e3\") " Jan 31 09:23:20 crc kubenswrapper[4830]: I0131 09:23:20.993008 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/781d272e-1da3-4dea-a516-a0156b7110e3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "781d272e-1da3-4dea-a516-a0156b7110e3" (UID: "781d272e-1da3-4dea-a516-a0156b7110e3"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:23:20 crc kubenswrapper[4830]: I0131 09:23:20.993139 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/781d272e-1da3-4dea-a516-a0156b7110e3-var-run" (OuterVolumeSpecName: "var-run") pod "781d272e-1da3-4dea-a516-a0156b7110e3" (UID: "781d272e-1da3-4dea-a516-a0156b7110e3"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:23:20 crc kubenswrapper[4830]: I0131 09:23:20.993244 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/781d272e-1da3-4dea-a516-a0156b7110e3-scripts\") pod \"781d272e-1da3-4dea-a516-a0156b7110e3\" (UID: \"781d272e-1da3-4dea-a516-a0156b7110e3\") " Jan 31 09:23:20 crc kubenswrapper[4830]: I0131 09:23:20.993267 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/781d272e-1da3-4dea-a516-a0156b7110e3-var-log-ovn\") pod \"781d272e-1da3-4dea-a516-a0156b7110e3\" (UID: \"781d272e-1da3-4dea-a516-a0156b7110e3\") " Jan 31 09:23:20 crc kubenswrapper[4830]: I0131 09:23:20.993835 4830 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/781d272e-1da3-4dea-a516-a0156b7110e3-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:20 crc kubenswrapper[4830]: I0131 09:23:20.993847 4830 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/781d272e-1da3-4dea-a516-a0156b7110e3-var-run\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:20 crc kubenswrapper[4830]: I0131 09:23:20.993900 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/781d272e-1da3-4dea-a516-a0156b7110e3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "781d272e-1da3-4dea-a516-a0156b7110e3" (UID: "781d272e-1da3-4dea-a516-a0156b7110e3"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:23:20 crc kubenswrapper[4830]: I0131 09:23:20.994647 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/781d272e-1da3-4dea-a516-a0156b7110e3-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "781d272e-1da3-4dea-a516-a0156b7110e3" (UID: "781d272e-1da3-4dea-a516-a0156b7110e3"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:20 crc kubenswrapper[4830]: I0131 09:23:20.995369 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/781d272e-1da3-4dea-a516-a0156b7110e3-scripts" (OuterVolumeSpecName: "scripts") pod "781d272e-1da3-4dea-a516-a0156b7110e3" (UID: "781d272e-1da3-4dea-a516-a0156b7110e3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.000191 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/781d272e-1da3-4dea-a516-a0156b7110e3-kube-api-access-jkv9n" (OuterVolumeSpecName: "kube-api-access-jkv9n") pod "781d272e-1da3-4dea-a516-a0156b7110e3" (UID: "781d272e-1da3-4dea-a516-a0156b7110e3"). InnerVolumeSpecName "kube-api-access-jkv9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.096289 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/781d272e-1da3-4dea-a516-a0156b7110e3-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.096335 4830 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/781d272e-1da3-4dea-a516-a0156b7110e3-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.096348 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkv9n\" (UniqueName: \"kubernetes.io/projected/781d272e-1da3-4dea-a516-a0156b7110e3-kube-api-access-jkv9n\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.096357 4830 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/781d272e-1da3-4dea-a516-a0156b7110e3-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.152398 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1023f27a-9c1d-4818-a3f5-94946296ae46","Type":"ContainerStarted","Data":"a939f59e036b0fad1ed620427f382ce6947432360a3ec335e1b7bb5251a999ef"} Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.155255 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ps27t-config-9j8bp" event={"ID":"781d272e-1da3-4dea-a516-a0156b7110e3","Type":"ContainerDied","Data":"c0e2e97f478c2b0412c44d930d4eaf902c5236a9c08a6a1fb72a3e68f4e2d5bf"} Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.155294 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0e2e97f478c2b0412c44d930d4eaf902c5236a9c08a6a1fb72a3e68f4e2d5bf" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.155361 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ps27t-config-9j8bp" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.186615 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ps27t-config-9j8bp"] Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.202292 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ps27t-config-9j8bp"] Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.238108 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=39.273918451 podStartE2EDuration="50.238078862s" podCreationTimestamp="2026-01-31 09:22:31 +0000 UTC" firstStartedPulling="2026-01-31 09:23:05.01820145 +0000 UTC m=+1329.511563892" lastFinishedPulling="2026-01-31 09:23:15.982361861 +0000 UTC m=+1340.475724303" observedRunningTime="2026-01-31 09:23:21.229150068 +0000 UTC m=+1345.722512510" watchObservedRunningTime="2026-01-31 09:23:21.238078862 +0000 UTC m=+1345.731441304" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.274267 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 31 09:23:21 crc kubenswrapper[4830]: W0131 09:23:21.285796 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b3b4d1e_8963_469f_abe7_204392275c48.slice/crio-51c43fb85166fe506c5aa4e03341ce9a4d4aefc9d0b19be8b8c75fb289049eb0 WatchSource:0}: Error finding container 51c43fb85166fe506c5aa4e03341ce9a4d4aefc9d0b19be8b8c75fb289049eb0: Status 404 returned error can't find the container with id 51c43fb85166fe506c5aa4e03341ce9a4d4aefc9d0b19be8b8c75fb289049eb0 Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.353822 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ps27t-config-qxbdx"] Jan 31 09:23:21 crc kubenswrapper[4830]: E0131 09:23:21.354696 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="781d272e-1da3-4dea-a516-a0156b7110e3" containerName="ovn-config" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.354713 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="781d272e-1da3-4dea-a516-a0156b7110e3" containerName="ovn-config" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.354955 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="781d272e-1da3-4dea-a516-a0156b7110e3" containerName="ovn-config" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.355792 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ps27t-config-qxbdx" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.362152 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.377543 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ps27t-config-qxbdx"] Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.508974 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6b9089a4-8c83-4a85-a025-04d41faac9cc-additional-scripts\") pod \"ovn-controller-ps27t-config-qxbdx\" (UID: \"6b9089a4-8c83-4a85-a025-04d41faac9cc\") " pod="openstack/ovn-controller-ps27t-config-qxbdx" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.509074 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcgrp\" (UniqueName: \"kubernetes.io/projected/6b9089a4-8c83-4a85-a025-04d41faac9cc-kube-api-access-kcgrp\") pod \"ovn-controller-ps27t-config-qxbdx\" (UID: \"6b9089a4-8c83-4a85-a025-04d41faac9cc\") " pod="openstack/ovn-controller-ps27t-config-qxbdx" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.510197 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b9089a4-8c83-4a85-a025-04d41faac9cc-var-run-ovn\") pod \"ovn-controller-ps27t-config-qxbdx\" (UID: \"6b9089a4-8c83-4a85-a025-04d41faac9cc\") " pod="openstack/ovn-controller-ps27t-config-qxbdx" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.510245 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6b9089a4-8c83-4a85-a025-04d41faac9cc-var-log-ovn\") pod \"ovn-controller-ps27t-config-qxbdx\" (UID: \"6b9089a4-8c83-4a85-a025-04d41faac9cc\") " pod="openstack/ovn-controller-ps27t-config-qxbdx" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.510362 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6b9089a4-8c83-4a85-a025-04d41faac9cc-var-run\") pod \"ovn-controller-ps27t-config-qxbdx\" (UID: \"6b9089a4-8c83-4a85-a025-04d41faac9cc\") " pod="openstack/ovn-controller-ps27t-config-qxbdx" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.510434 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b9089a4-8c83-4a85-a025-04d41faac9cc-scripts\") pod \"ovn-controller-ps27t-config-qxbdx\" (UID: \"6b9089a4-8c83-4a85-a025-04d41faac9cc\") " pod="openstack/ovn-controller-ps27t-config-qxbdx" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.612327 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6b9089a4-8c83-4a85-a025-04d41faac9cc-var-run\") pod \"ovn-controller-ps27t-config-qxbdx\" (UID: \"6b9089a4-8c83-4a85-a025-04d41faac9cc\") " pod="openstack/ovn-controller-ps27t-config-qxbdx" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.612407 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b9089a4-8c83-4a85-a025-04d41faac9cc-scripts\") pod 
\"ovn-controller-ps27t-config-qxbdx\" (UID: \"6b9089a4-8c83-4a85-a025-04d41faac9cc\") " pod="openstack/ovn-controller-ps27t-config-qxbdx" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.612504 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6b9089a4-8c83-4a85-a025-04d41faac9cc-additional-scripts\") pod \"ovn-controller-ps27t-config-qxbdx\" (UID: \"6b9089a4-8c83-4a85-a025-04d41faac9cc\") " pod="openstack/ovn-controller-ps27t-config-qxbdx" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.612552 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcgrp\" (UniqueName: \"kubernetes.io/projected/6b9089a4-8c83-4a85-a025-04d41faac9cc-kube-api-access-kcgrp\") pod \"ovn-controller-ps27t-config-qxbdx\" (UID: \"6b9089a4-8c83-4a85-a025-04d41faac9cc\") " pod="openstack/ovn-controller-ps27t-config-qxbdx" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.612615 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b9089a4-8c83-4a85-a025-04d41faac9cc-var-run-ovn\") pod \"ovn-controller-ps27t-config-qxbdx\" (UID: \"6b9089a4-8c83-4a85-a025-04d41faac9cc\") " pod="openstack/ovn-controller-ps27t-config-qxbdx" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.612631 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6b9089a4-8c83-4a85-a025-04d41faac9cc-var-log-ovn\") pod \"ovn-controller-ps27t-config-qxbdx\" (UID: \"6b9089a4-8c83-4a85-a025-04d41faac9cc\") " pod="openstack/ovn-controller-ps27t-config-qxbdx" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.613046 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6b9089a4-8c83-4a85-a025-04d41faac9cc-var-log-ovn\") pod \"ovn-controller-ps27t-config-qxbdx\" (UID: \"6b9089a4-8c83-4a85-a025-04d41faac9cc\") " pod="openstack/ovn-controller-ps27t-config-qxbdx" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.613268 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6b9089a4-8c83-4a85-a025-04d41faac9cc-var-run\") pod \"ovn-controller-ps27t-config-qxbdx\" (UID: \"6b9089a4-8c83-4a85-a025-04d41faac9cc\") " pod="openstack/ovn-controller-ps27t-config-qxbdx" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.615506 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b9089a4-8c83-4a85-a025-04d41faac9cc-scripts\") pod \"ovn-controller-ps27t-config-qxbdx\" (UID: \"6b9089a4-8c83-4a85-a025-04d41faac9cc\") " pod="openstack/ovn-controller-ps27t-config-qxbdx" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.615595 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b9089a4-8c83-4a85-a025-04d41faac9cc-var-run-ovn\") pod \"ovn-controller-ps27t-config-qxbdx\" (UID: \"6b9089a4-8c83-4a85-a025-04d41faac9cc\") " pod="openstack/ovn-controller-ps27t-config-qxbdx" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.615974 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6b9089a4-8c83-4a85-a025-04d41faac9cc-additional-scripts\") pod 
\"ovn-controller-ps27t-config-qxbdx\" (UID: \"6b9089a4-8c83-4a85-a025-04d41faac9cc\") " pod="openstack/ovn-controller-ps27t-config-qxbdx" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.655413 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcgrp\" (UniqueName: \"kubernetes.io/projected/6b9089a4-8c83-4a85-a025-04d41faac9cc-kube-api-access-kcgrp\") pod \"ovn-controller-ps27t-config-qxbdx\" (UID: \"6b9089a4-8c83-4a85-a025-04d41faac9cc\") " pod="openstack/ovn-controller-ps27t-config-qxbdx" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.659173 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-v7bq9"] Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.665865 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.676426 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.687879 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-v7bq9"] Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.819007 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-v7bq9\" (UID: \"78be811e-7bfb-400f-9e75-b2853dc051bd\") " pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.819348 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-config\") pod \"dnsmasq-dns-77585f5f8c-v7bq9\" (UID: \"78be811e-7bfb-400f-9e75-b2853dc051bd\") " pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.819712 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-v7bq9\" (UID: \"78be811e-7bfb-400f-9e75-b2853dc051bd\") " pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.820167 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-v7bq9\" (UID: \"78be811e-7bfb-400f-9e75-b2853dc051bd\") " pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.820223 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-v7bq9\" (UID: \"78be811e-7bfb-400f-9e75-b2853dc051bd\") " pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.820386 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8xhh\" (UniqueName: \"kubernetes.io/projected/78be811e-7bfb-400f-9e75-b2853dc051bd-kube-api-access-h8xhh\") pod 
\"dnsmasq-dns-77585f5f8c-v7bq9\" (UID: \"78be811e-7bfb-400f-9e75-b2853dc051bd\") " pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.847120 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ps27t-config-qxbdx" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.922985 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-config\") pod \"dnsmasq-dns-77585f5f8c-v7bq9\" (UID: \"78be811e-7bfb-400f-9e75-b2853dc051bd\") " pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.923110 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-v7bq9\" (UID: \"78be811e-7bfb-400f-9e75-b2853dc051bd\") " pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.923205 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-v7bq9\" (UID: \"78be811e-7bfb-400f-9e75-b2853dc051bd\") " pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.923237 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-v7bq9\" (UID: \"78be811e-7bfb-400f-9e75-b2853dc051bd\") " pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.923318 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8xhh\" (UniqueName: \"kubernetes.io/projected/78be811e-7bfb-400f-9e75-b2853dc051bd-kube-api-access-h8xhh\") pod \"dnsmasq-dns-77585f5f8c-v7bq9\" (UID: \"78be811e-7bfb-400f-9e75-b2853dc051bd\") " pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.923359 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-v7bq9\" (UID: \"78be811e-7bfb-400f-9e75-b2853dc051bd\") " pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.926483 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-v7bq9\" (UID: \"78be811e-7bfb-400f-9e75-b2853dc051bd\") " pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.926549 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-config\") pod \"dnsmasq-dns-77585f5f8c-v7bq9\" (UID: \"78be811e-7bfb-400f-9e75-b2853dc051bd\") " pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.937039 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-v7bq9\" (UID: \"78be811e-7bfb-400f-9e75-b2853dc051bd\") " pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.937473 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-v7bq9\" (UID: \"78be811e-7bfb-400f-9e75-b2853dc051bd\") " pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.938056 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-v7bq9\" (UID: \"78be811e-7bfb-400f-9e75-b2853dc051bd\") " pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" Jan 31 09:23:21 crc kubenswrapper[4830]: I0131 09:23:21.962745 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8xhh\" (UniqueName: \"kubernetes.io/projected/78be811e-7bfb-400f-9e75-b2853dc051bd-kube-api-access-h8xhh\") pod \"dnsmasq-dns-77585f5f8c-v7bq9\" (UID: \"78be811e-7bfb-400f-9e75-b2853dc051bd\") " pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" Jan 31 09:23:22 crc kubenswrapper[4830]: I0131 09:23:22.034300 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" Jan 31 09:23:22 crc kubenswrapper[4830]: I0131 09:23:22.227864 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7b3b4d1e-8963-469f-abe7-204392275c48","Type":"ContainerStarted","Data":"51c43fb85166fe506c5aa4e03341ce9a4d4aefc9d0b19be8b8c75fb289049eb0"} Jan 31 09:23:22 crc kubenswrapper[4830]: I0131 09:23:22.279143 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="781d272e-1da3-4dea-a516-a0156b7110e3" path="/var/lib/kubelet/pods/781d272e-1da3-4dea-a516-a0156b7110e3/volumes" Jan 31 09:23:22 crc kubenswrapper[4830]: I0131 09:23:22.516055 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ps27t-config-qxbdx"] Jan 31 09:23:22 crc kubenswrapper[4830]: I0131 09:23:22.757005 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-v7bq9"] Jan 31 09:23:23 crc kubenswrapper[4830]: I0131 09:23:23.347195 4830 generic.go:334] "Generic (PLEG): container finished" podID="78be811e-7bfb-400f-9e75-b2853dc051bd" containerID="9ad64aa0976e8d861bc684c8ab460f86d03eb1ae0e1dc4fe39e49e703048b4b8" exitCode=0 Jan 31 09:23:23 crc kubenswrapper[4830]: I0131 09:23:23.347628 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" event={"ID":"78be811e-7bfb-400f-9e75-b2853dc051bd","Type":"ContainerDied","Data":"9ad64aa0976e8d861bc684c8ab460f86d03eb1ae0e1dc4fe39e49e703048b4b8"} Jan 31 09:23:23 crc kubenswrapper[4830]: I0131 09:23:23.347664 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" event={"ID":"78be811e-7bfb-400f-9e75-b2853dc051bd","Type":"ContainerStarted","Data":"a0eaaa87ac3f539f94453eae7e1519c3d57257cf4cec2c117d948deae1dc7619"} Jan 31 09:23:23 crc kubenswrapper[4830]: I0131 09:23:23.362069 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ps27t-config-qxbdx" 
event={"ID":"6b9089a4-8c83-4a85-a025-04d41faac9cc","Type":"ContainerStarted","Data":"687f0bb00bc26d9a9a27626a9d07e5ffacb0e5e031c3732b898d3b85f3fbe4a0"} Jan 31 09:23:23 crc kubenswrapper[4830]: I0131 09:23:23.362137 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ps27t-config-qxbdx" event={"ID":"6b9089a4-8c83-4a85-a025-04d41faac9cc","Type":"ContainerStarted","Data":"6a872fce81c67c9affd84b58c865ee9afbde879bf8fc97b264ddead34876c815"} Jan 31 09:23:24 crc kubenswrapper[4830]: I0131 09:23:24.374967 4830 generic.go:334] "Generic (PLEG): container finished" podID="6b9089a4-8c83-4a85-a025-04d41faac9cc" containerID="687f0bb00bc26d9a9a27626a9d07e5ffacb0e5e031c3732b898d3b85f3fbe4a0" exitCode=0 Jan 31 09:23:24 crc kubenswrapper[4830]: I0131 09:23:24.375156 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ps27t-config-qxbdx" event={"ID":"6b9089a4-8c83-4a85-a025-04d41faac9cc","Type":"ContainerDied","Data":"687f0bb00bc26d9a9a27626a9d07e5ffacb0e5e031c3732b898d3b85f3fbe4a0"} Jan 31 09:23:24 crc kubenswrapper[4830]: I0131 09:23:24.379291 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" event={"ID":"78be811e-7bfb-400f-9e75-b2853dc051bd","Type":"ContainerStarted","Data":"a431904c615c8eab3f850504c49e2d9ad100a3fa1f1f5f56c5c038f7f2641a8f"} Jan 31 09:23:24 crc kubenswrapper[4830]: I0131 09:23:24.380379 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" Jan 31 09:23:24 crc kubenswrapper[4830]: I0131 09:23:24.443794 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" podStartSLOduration=3.44376583 podStartE2EDuration="3.44376583s" podCreationTimestamp="2026-01-31 09:23:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:23:24.427429334 +0000 UTC m=+1348.920791776" watchObservedRunningTime="2026-01-31 09:23:24.44376583 +0000 UTC m=+1348.937128262" Jan 31 09:23:25 crc kubenswrapper[4830]: I0131 09:23:25.396345 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7b3b4d1e-8963-469f-abe7-204392275c48","Type":"ContainerStarted","Data":"632d2b939e20247d0a6ecc7f549d51286caca5147eabc633baa30f954bf60b54"} Jan 31 09:23:25 crc kubenswrapper[4830]: I0131 09:23:25.592272 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="f60eed79-badf-4909-869b-edbfdfb774ac" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Jan 31 09:23:25 crc kubenswrapper[4830]: I0131 09:23:25.661588 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="759f3f02-a9de-4e01-97f9-a97424c592a6" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Jan 31 09:23:25 crc kubenswrapper[4830]: I0131 09:23:25.871100 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="8e40a106-74cd-45ea-a936-c34daaf9ce6e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Jan 31 09:23:26 crc kubenswrapper[4830]: I0131 09:23:26.239555 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" 
podUID="18af810d-9de4-4822-86d2-bb7e8a8a449b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Jan 31 09:23:32 crc kubenswrapper[4830]: I0131 09:23:32.037100 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" Jan 31 09:23:32 crc kubenswrapper[4830]: I0131 09:23:32.124366 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wmdpv"] Jan 31 09:23:32 crc kubenswrapper[4830]: I0131 09:23:32.133372 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-wmdpv" podUID="0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d" containerName="dnsmasq-dns" containerID="cri-o://4472a48a2935c8db74155633a3dcdb219db2cf8c39b94d354056988bd895c681" gracePeriod=10 Jan 31 09:23:32 crc kubenswrapper[4830]: I0131 09:23:32.491437 4830 generic.go:334] "Generic (PLEG): container finished" podID="0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d" containerID="4472a48a2935c8db74155633a3dcdb219db2cf8c39b94d354056988bd895c681" exitCode=0 Jan 31 09:23:32 crc kubenswrapper[4830]: I0131 09:23:32.491526 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wmdpv" event={"ID":"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d","Type":"ContainerDied","Data":"4472a48a2935c8db74155633a3dcdb219db2cf8c39b94d354056988bd895c681"} Jan 31 09:23:32 crc kubenswrapper[4830]: I0131 09:23:32.494316 4830 generic.go:334] "Generic (PLEG): container finished" podID="7b3b4d1e-8963-469f-abe7-204392275c48" containerID="632d2b939e20247d0a6ecc7f549d51286caca5147eabc633baa30f954bf60b54" exitCode=0 Jan 31 09:23:32 crc kubenswrapper[4830]: I0131 09:23:32.494404 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7b3b4d1e-8963-469f-abe7-204392275c48","Type":"ContainerDied","Data":"632d2b939e20247d0a6ecc7f549d51286caca5147eabc633baa30f954bf60b54"} Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.068461 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ps27t-config-qxbdx" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.182749 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b9089a4-8c83-4a85-a025-04d41faac9cc-scripts\") pod \"6b9089a4-8c83-4a85-a025-04d41faac9cc\" (UID: \"6b9089a4-8c83-4a85-a025-04d41faac9cc\") " Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.183300 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcgrp\" (UniqueName: \"kubernetes.io/projected/6b9089a4-8c83-4a85-a025-04d41faac9cc-kube-api-access-kcgrp\") pod \"6b9089a4-8c83-4a85-a025-04d41faac9cc\" (UID: \"6b9089a4-8c83-4a85-a025-04d41faac9cc\") " Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.183389 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6b9089a4-8c83-4a85-a025-04d41faac9cc-additional-scripts\") pod \"6b9089a4-8c83-4a85-a025-04d41faac9cc\" (UID: \"6b9089a4-8c83-4a85-a025-04d41faac9cc\") " Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.183434 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b9089a4-8c83-4a85-a025-04d41faac9cc-var-run-ovn\") pod \"6b9089a4-8c83-4a85-a025-04d41faac9cc\" (UID: \"6b9089a4-8c83-4a85-a025-04d41faac9cc\") " Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.183478 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6b9089a4-8c83-4a85-a025-04d41faac9cc-var-log-ovn\") pod \"6b9089a4-8c83-4a85-a025-04d41faac9cc\" (UID: \"6b9089a4-8c83-4a85-a025-04d41faac9cc\") " Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.183511 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6b9089a4-8c83-4a85-a025-04d41faac9cc-var-run\") pod \"6b9089a4-8c83-4a85-a025-04d41faac9cc\" (UID: \"6b9089a4-8c83-4a85-a025-04d41faac9cc\") " Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.184063 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b9089a4-8c83-4a85-a025-04d41faac9cc-scripts" (OuterVolumeSpecName: "scripts") pod "6b9089a4-8c83-4a85-a025-04d41faac9cc" (UID: "6b9089a4-8c83-4a85-a025-04d41faac9cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.184197 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b9089a4-8c83-4a85-a025-04d41faac9cc-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "6b9089a4-8c83-4a85-a025-04d41faac9cc" (UID: "6b9089a4-8c83-4a85-a025-04d41faac9cc"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.184238 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b9089a4-8c83-4a85-a025-04d41faac9cc-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "6b9089a4-8c83-4a85-a025-04d41faac9cc" (UID: "6b9089a4-8c83-4a85-a025-04d41faac9cc"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.184263 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b9089a4-8c83-4a85-a025-04d41faac9cc-var-run" (OuterVolumeSpecName: "var-run") pod "6b9089a4-8c83-4a85-a025-04d41faac9cc" (UID: "6b9089a4-8c83-4a85-a025-04d41faac9cc"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.185028 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b9089a4-8c83-4a85-a025-04d41faac9cc-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "6b9089a4-8c83-4a85-a025-04d41faac9cc" (UID: "6b9089a4-8c83-4a85-a025-04d41faac9cc"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.188393 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b9089a4-8c83-4a85-a025-04d41faac9cc-kube-api-access-kcgrp" (OuterVolumeSpecName: "kube-api-access-kcgrp") pod "6b9089a4-8c83-4a85-a025-04d41faac9cc" (UID: "6b9089a4-8c83-4a85-a025-04d41faac9cc"). InnerVolumeSpecName "kube-api-access-kcgrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.287007 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b9089a4-8c83-4a85-a025-04d41faac9cc-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.287054 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcgrp\" (UniqueName: \"kubernetes.io/projected/6b9089a4-8c83-4a85-a025-04d41faac9cc-kube-api-access-kcgrp\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.287069 4830 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6b9089a4-8c83-4a85-a025-04d41faac9cc-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.287081 4830 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b9089a4-8c83-4a85-a025-04d41faac9cc-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.287096 4830 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6b9089a4-8c83-4a85-a025-04d41faac9cc-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.287108 4830 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6b9089a4-8c83-4a85-a025-04d41faac9cc-var-run\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.313693 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wmdpv" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.495654 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72ksn\" (UniqueName: \"kubernetes.io/projected/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-kube-api-access-72ksn\") pod \"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d\" (UID: \"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d\") " Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.495751 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-config\") pod \"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d\" (UID: \"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d\") " Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.495811 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-ovsdbserver-sb\") pod \"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d\" (UID: \"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d\") " Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.495867 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-dns-svc\") pod \"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d\" (UID: \"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d\") " Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.495983 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-ovsdbserver-nb\") pod \"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d\" (UID: \"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d\") " Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.505942 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-kube-api-access-72ksn" (OuterVolumeSpecName: "kube-api-access-72ksn") pod "0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d" (UID: "0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d"). InnerVolumeSpecName "kube-api-access-72ksn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.564302 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wmdpv" event={"ID":"0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d","Type":"ContainerDied","Data":"26bcc215a4ade8767a1389a9f600fd1bd72f1c942dff001fa3f4cdd72d57aee4"} Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.564790 4830 scope.go:117] "RemoveContainer" containerID="4472a48a2935c8db74155633a3dcdb219db2cf8c39b94d354056988bd895c681" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.564811 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wmdpv" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.577863 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ps27t-config-qxbdx" event={"ID":"6b9089a4-8c83-4a85-a025-04d41faac9cc","Type":"ContainerDied","Data":"6a872fce81c67c9affd84b58c865ee9afbde879bf8fc97b264ddead34876c815"} Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.577919 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a872fce81c67c9affd84b58c865ee9afbde879bf8fc97b264ddead34876c815" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.578170 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ps27t-config-qxbdx" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.589796 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d" (UID: "0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.596359 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-config" (OuterVolumeSpecName: "config") pod "0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d" (UID: "0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.600633 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.600677 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.600693 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72ksn\" (UniqueName: \"kubernetes.io/projected/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-kube-api-access-72ksn\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.601699 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7b3b4d1e-8963-469f-abe7-204392275c48","Type":"ContainerStarted","Data":"17474667dca8c54d375ef5b9b3f31a4c8d18f36e50819cb3aeed96e173071fec"} Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.613799 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d" (UID: "0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.617205 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d" (UID: "0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.618179 4830 scope.go:117] "RemoveContainer" containerID="98b81f451efacf26232ff028069eaa9d83f52e01303ee9156b0af10ee6e28bf4" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.703509 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.703552 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.909202 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wmdpv"] Jan 31 09:23:34 crc kubenswrapper[4830]: I0131 09:23:34.921592 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wmdpv"] Jan 31 09:23:35 crc kubenswrapper[4830]: I0131 09:23:35.201156 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ps27t-config-qxbdx"] Jan 31 09:23:35 crc kubenswrapper[4830]: I0131 09:23:35.212098 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ps27t-config-qxbdx"] Jan 31 09:23:35 crc kubenswrapper[4830]: I0131 09:23:35.593130 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 31 09:23:35 crc kubenswrapper[4830]: I0131 09:23:35.614543 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bktdp" event={"ID":"42eafeb6-68c0-479b-bc77-62967566390e","Type":"ContainerStarted","Data":"6f37797019de65423359308de85954a8c167fc047dac50a8bb217196a6d744b8"} Jan 31 09:23:35 crc kubenswrapper[4830]: I0131 09:23:35.641463 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-bktdp" podStartSLOduration=3.272572061 podStartE2EDuration="26.641439674s" podCreationTimestamp="2026-01-31 09:23:09 +0000 UTC" firstStartedPulling="2026-01-31 09:23:10.7684533 +0000 UTC m=+1335.261815742" lastFinishedPulling="2026-01-31 09:23:34.137320913 +0000 UTC m=+1358.630683355" observedRunningTime="2026-01-31 09:23:35.63814928 +0000 UTC m=+1360.131511732" watchObservedRunningTime="2026-01-31 09:23:35.641439674 +0000 UTC m=+1360.134802126" Jan 31 09:23:35 crc kubenswrapper[4830]: I0131 09:23:35.655007 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 31 09:23:35 crc kubenswrapper[4830]: I0131 09:23:35.868117 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Jan 31 09:23:36 crc kubenswrapper[4830]: I0131 09:23:36.228943 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:23:36 crc kubenswrapper[4830]: I0131 09:23:36.283965 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d" path="/var/lib/kubelet/pods/0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d/volumes" Jan 31 09:23:36 crc kubenswrapper[4830]: I0131 09:23:36.284925 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b9089a4-8c83-4a85-a025-04d41faac9cc" 
path="/var/lib/kubelet/pods/6b9089a4-8c83-4a85-a025-04d41faac9cc/volumes" Jan 31 09:23:39 crc kubenswrapper[4830]: I0131 09:23:39.661285 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7b3b4d1e-8963-469f-abe7-204392275c48","Type":"ContainerStarted","Data":"def852bf59ab3db4940cb82c79f5cbf387b4a8be63957c659a73a1c8be445d92"} Jan 31 09:23:39 crc kubenswrapper[4830]: I0131 09:23:39.663251 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7b3b4d1e-8963-469f-abe7-204392275c48","Type":"ContainerStarted","Data":"c5dbd4207f789610394b8fcf9116b5d8254b5baf4381d04eb032909a62aee6ab"} Jan 31 09:23:39 crc kubenswrapper[4830]: I0131 09:23:39.695799 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.695775089 podStartE2EDuration="20.695775089s" podCreationTimestamp="2026-01-31 09:23:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:23:39.694449591 +0000 UTC m=+1364.187812043" watchObservedRunningTime="2026-01-31 09:23:39.695775089 +0000 UTC m=+1364.189137531" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.316165 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-7jb5m"] Jan 31 09:23:40 crc kubenswrapper[4830]: E0131 09:23:40.316608 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b9089a4-8c83-4a85-a025-04d41faac9cc" containerName="ovn-config" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.316626 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b9089a4-8c83-4a85-a025-04d41faac9cc" containerName="ovn-config" Jan 31 09:23:40 crc kubenswrapper[4830]: E0131 09:23:40.316650 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d" containerName="dnsmasq-dns" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.316658 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d" containerName="dnsmasq-dns" Jan 31 09:23:40 crc kubenswrapper[4830]: E0131 09:23:40.316698 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d" containerName="init" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.316705 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d" containerName="init" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.316893 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="0db9fdf8-5944-4eac-b0fe-9ca72f89ea5d" containerName="dnsmasq-dns" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.316919 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b9089a4-8c83-4a85-a025-04d41faac9cc" containerName="ovn-config" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.317678 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-7jb5m" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.335779 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7jb5m"] Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.436707 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-3e38-account-create-update-4vpgq"] Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.438916 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3e38-account-create-update-4vpgq" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.440859 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.450282 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-cn5jd"] Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.452369 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-cn5jd" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.460134 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3e38-account-create-update-4vpgq"] Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.462674 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce165a30-da01-4e57-996c-de05fbe74498-operator-scripts\") pod \"cinder-db-create-7jb5m\" (UID: \"ce165a30-da01-4e57-996c-de05fbe74498\") " pod="openstack/cinder-db-create-7jb5m" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.462901 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnds2\" (UniqueName: \"kubernetes.io/projected/ce165a30-da01-4e57-996c-de05fbe74498-kube-api-access-lnds2\") pod \"cinder-db-create-7jb5m\" (UID: \"ce165a30-da01-4e57-996c-de05fbe74498\") " pod="openstack/cinder-db-create-7jb5m" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.482760 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-cn5jd"] Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.514795 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.549881 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-87e0-account-create-update-6vbtx"] Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.552176 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-87e0-account-create-update-6vbtx" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.557822 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.565571 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce165a30-da01-4e57-996c-de05fbe74498-operator-scripts\") pod \"cinder-db-create-7jb5m\" (UID: \"ce165a30-da01-4e57-996c-de05fbe74498\") " pod="openstack/cinder-db-create-7jb5m" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.565705 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2af2731d-2c7c-46c2-abcc-4846583de531-operator-scripts\") pod \"cinder-3e38-account-create-update-4vpgq\" (UID: \"2af2731d-2c7c-46c2-abcc-4846583de531\") " pod="openstack/cinder-3e38-account-create-update-4vpgq" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.565806 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnds2\" (UniqueName: \"kubernetes.io/projected/ce165a30-da01-4e57-996c-de05fbe74498-kube-api-access-lnds2\") pod \"cinder-db-create-7jb5m\" (UID: \"ce165a30-da01-4e57-996c-de05fbe74498\") " pod="openstack/cinder-db-create-7jb5m" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.565832 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1444b15-29b5-4433-8ea5-4b533b54f08a-operator-scripts\") pod \"barbican-db-create-cn5jd\" (UID: \"e1444b15-29b5-4433-8ea5-4b533b54f08a\") " pod="openstack/barbican-db-create-cn5jd" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.565869 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z55s\" (UniqueName: \"kubernetes.io/projected/e1444b15-29b5-4433-8ea5-4b533b54f08a-kube-api-access-5z55s\") pod \"barbican-db-create-cn5jd\" (UID: \"e1444b15-29b5-4433-8ea5-4b533b54f08a\") " pod="openstack/barbican-db-create-cn5jd" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.565936 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt8rw\" (UniqueName: \"kubernetes.io/projected/2af2731d-2c7c-46c2-abcc-4846583de531-kube-api-access-mt8rw\") pod \"cinder-3e38-account-create-update-4vpgq\" (UID: \"2af2731d-2c7c-46c2-abcc-4846583de531\") " pod="openstack/cinder-3e38-account-create-update-4vpgq" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.567009 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce165a30-da01-4e57-996c-de05fbe74498-operator-scripts\") pod \"cinder-db-create-7jb5m\" (UID: \"ce165a30-da01-4e57-996c-de05fbe74498\") " pod="openstack/cinder-db-create-7jb5m" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.571234 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-87e0-account-create-update-6vbtx"] Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.625276 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnds2\" (UniqueName: \"kubernetes.io/projected/ce165a30-da01-4e57-996c-de05fbe74498-kube-api-access-lnds2\") pod 
\"cinder-db-create-7jb5m\" (UID: \"ce165a30-da01-4e57-996c-de05fbe74498\") " pod="openstack/cinder-db-create-7jb5m" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.630353 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-w9h2w"] Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.632067 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-w9h2w" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.638861 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7jb5m" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.660528 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-w9h2w"] Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.668128 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctgr2\" (UniqueName: \"kubernetes.io/projected/4140bbd2-fcdd-482d-9224-5248d75e4317-kube-api-access-ctgr2\") pod \"barbican-87e0-account-create-update-6vbtx\" (UID: \"4140bbd2-fcdd-482d-9224-5248d75e4317\") " pod="openstack/barbican-87e0-account-create-update-6vbtx" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.670991 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt8rw\" (UniqueName: \"kubernetes.io/projected/2af2731d-2c7c-46c2-abcc-4846583de531-kube-api-access-mt8rw\") pod \"cinder-3e38-account-create-update-4vpgq\" (UID: \"2af2731d-2c7c-46c2-abcc-4846583de531\") " pod="openstack/cinder-3e38-account-create-update-4vpgq" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.671314 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2af2731d-2c7c-46c2-abcc-4846583de531-operator-scripts\") pod \"cinder-3e38-account-create-update-4vpgq\" (UID: \"2af2731d-2c7c-46c2-abcc-4846583de531\") " pod="openstack/cinder-3e38-account-create-update-4vpgq" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.671535 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1444b15-29b5-4433-8ea5-4b533b54f08a-operator-scripts\") pod \"barbican-db-create-cn5jd\" (UID: \"e1444b15-29b5-4433-8ea5-4b533b54f08a\") " pod="openstack/barbican-db-create-cn5jd" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.671620 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4140bbd2-fcdd-482d-9224-5248d75e4317-operator-scripts\") pod \"barbican-87e0-account-create-update-6vbtx\" (UID: \"4140bbd2-fcdd-482d-9224-5248d75e4317\") " pod="openstack/barbican-87e0-account-create-update-6vbtx" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.671660 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z55s\" (UniqueName: \"kubernetes.io/projected/e1444b15-29b5-4433-8ea5-4b533b54f08a-kube-api-access-5z55s\") pod \"barbican-db-create-cn5jd\" (UID: \"e1444b15-29b5-4433-8ea5-4b533b54f08a\") " pod="openstack/barbican-db-create-cn5jd" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.672199 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2af2731d-2c7c-46c2-abcc-4846583de531-operator-scripts\") pod 
\"cinder-3e38-account-create-update-4vpgq\" (UID: \"2af2731d-2c7c-46c2-abcc-4846583de531\") " pod="openstack/cinder-3e38-account-create-update-4vpgq" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.672426 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1444b15-29b5-4433-8ea5-4b533b54f08a-operator-scripts\") pod \"barbican-db-create-cn5jd\" (UID: \"e1444b15-29b5-4433-8ea5-4b533b54f08a\") " pod="openstack/barbican-db-create-cn5jd" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.702759 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt8rw\" (UniqueName: \"kubernetes.io/projected/2af2731d-2c7c-46c2-abcc-4846583de531-kube-api-access-mt8rw\") pod \"cinder-3e38-account-create-update-4vpgq\" (UID: \"2af2731d-2c7c-46c2-abcc-4846583de531\") " pod="openstack/cinder-3e38-account-create-update-4vpgq" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.722248 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z55s\" (UniqueName: \"kubernetes.io/projected/e1444b15-29b5-4433-8ea5-4b533b54f08a-kube-api-access-5z55s\") pod \"barbican-db-create-cn5jd\" (UID: \"e1444b15-29b5-4433-8ea5-4b533b54f08a\") " pod="openstack/barbican-db-create-cn5jd" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.726851 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-hccz6"] Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.733498 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-hccz6" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.754521 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.754776 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r84d8" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.755189 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.755408 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.766845 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-3e38-account-create-update-4vpgq" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.783289 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52fdb459-dc6a-4e56-8a6b-379d4c74ce62-operator-scripts\") pod \"heat-db-create-w9h2w\" (UID: \"52fdb459-dc6a-4e56-8a6b-379d4c74ce62\") " pod="openstack/heat-db-create-w9h2w" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.784126 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4140bbd2-fcdd-482d-9224-5248d75e4317-operator-scripts\") pod \"barbican-87e0-account-create-update-6vbtx\" (UID: \"4140bbd2-fcdd-482d-9224-5248d75e4317\") " pod="openstack/barbican-87e0-account-create-update-6vbtx" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.784396 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn8dm\" (UniqueName: \"kubernetes.io/projected/52fdb459-dc6a-4e56-8a6b-379d4c74ce62-kube-api-access-jn8dm\") pod \"heat-db-create-w9h2w\" (UID: \"52fdb459-dc6a-4e56-8a6b-379d4c74ce62\") " pod="openstack/heat-db-create-w9h2w" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.784577 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctgr2\" (UniqueName: \"kubernetes.io/projected/4140bbd2-fcdd-482d-9224-5248d75e4317-kube-api-access-ctgr2\") pod \"barbican-87e0-account-create-update-6vbtx\" (UID: \"4140bbd2-fcdd-482d-9224-5248d75e4317\") " pod="openstack/barbican-87e0-account-create-update-6vbtx" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.788692 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4140bbd2-fcdd-482d-9224-5248d75e4317-operator-scripts\") pod \"barbican-87e0-account-create-update-6vbtx\" (UID: \"4140bbd2-fcdd-482d-9224-5248d75e4317\") " pod="openstack/barbican-87e0-account-create-update-6vbtx" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.794124 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-hccz6"] Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.796664 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-cn5jd" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.816189 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctgr2\" (UniqueName: \"kubernetes.io/projected/4140bbd2-fcdd-482d-9224-5248d75e4317-kube-api-access-ctgr2\") pod \"barbican-87e0-account-create-update-6vbtx\" (UID: \"4140bbd2-fcdd-482d-9224-5248d75e4317\") " pod="openstack/barbican-87e0-account-create-update-6vbtx" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.848040 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-541b-account-create-update-pssj9"] Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.850431 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-541b-account-create-update-pssj9" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.853192 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.890097 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhsbn\" (UniqueName: \"kubernetes.io/projected/f97b7b49-f0d6-4f7c-a8ed-792cbfa32504-kube-api-access-dhsbn\") pod \"keystone-db-sync-hccz6\" (UID: \"f97b7b49-f0d6-4f7c-a8ed-792cbfa32504\") " pod="openstack/keystone-db-sync-hccz6" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.890740 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f97b7b49-f0d6-4f7c-a8ed-792cbfa32504-combined-ca-bundle\") pod \"keystone-db-sync-hccz6\" (UID: \"f97b7b49-f0d6-4f7c-a8ed-792cbfa32504\") " pod="openstack/keystone-db-sync-hccz6" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.890769 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f97b7b49-f0d6-4f7c-a8ed-792cbfa32504-config-data\") pod \"keystone-db-sync-hccz6\" (UID: \"f97b7b49-f0d6-4f7c-a8ed-792cbfa32504\") " pod="openstack/keystone-db-sync-hccz6" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.891011 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn8dm\" (UniqueName: \"kubernetes.io/projected/52fdb459-dc6a-4e56-8a6b-379d4c74ce62-kube-api-access-jn8dm\") pod \"heat-db-create-w9h2w\" (UID: \"52fdb459-dc6a-4e56-8a6b-379d4c74ce62\") " pod="openstack/heat-db-create-w9h2w" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.891246 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52fdb459-dc6a-4e56-8a6b-379d4c74ce62-operator-scripts\") pod \"heat-db-create-w9h2w\" (UID: \"52fdb459-dc6a-4e56-8a6b-379d4c74ce62\") " pod="openstack/heat-db-create-w9h2w" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.894654 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-87e0-account-create-update-6vbtx" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.910068 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-w7rt2"] Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.915249 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52fdb459-dc6a-4e56-8a6b-379d4c74ce62-operator-scripts\") pod \"heat-db-create-w9h2w\" (UID: \"52fdb459-dc6a-4e56-8a6b-379d4c74ce62\") " pod="openstack/heat-db-create-w9h2w" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.920302 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-w7rt2" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.922251 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn8dm\" (UniqueName: \"kubernetes.io/projected/52fdb459-dc6a-4e56-8a6b-379d4c74ce62-kube-api-access-jn8dm\") pod \"heat-db-create-w9h2w\" (UID: \"52fdb459-dc6a-4e56-8a6b-379d4c74ce62\") " pod="openstack/heat-db-create-w9h2w" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.925223 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-541b-account-create-update-pssj9"] Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.936032 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-w9h2w" Jan 31 09:23:40 crc kubenswrapper[4830]: I0131 09:23:40.988811 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-w7rt2"] Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.037709 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs7sj\" (UniqueName: \"kubernetes.io/projected/1d8e473a-4e99-400b-be95-bd490bd2228b-kube-api-access-hs7sj\") pod \"neutron-541b-account-create-update-pssj9\" (UID: \"1d8e473a-4e99-400b-be95-bd490bd2228b\") " pod="openstack/neutron-541b-account-create-update-pssj9" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.039168 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd7c8\" (UniqueName: \"kubernetes.io/projected/8e66d083-dfc7-41d1-b955-752fdc14a3c2-kube-api-access-pd7c8\") pod \"neutron-db-create-w7rt2\" (UID: \"8e66d083-dfc7-41d1-b955-752fdc14a3c2\") " pod="openstack/neutron-db-create-w7rt2" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.039241 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhsbn\" (UniqueName: \"kubernetes.io/projected/f97b7b49-f0d6-4f7c-a8ed-792cbfa32504-kube-api-access-dhsbn\") pod \"keystone-db-sync-hccz6\" (UID: \"f97b7b49-f0d6-4f7c-a8ed-792cbfa32504\") " pod="openstack/keystone-db-sync-hccz6" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.039288 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f97b7b49-f0d6-4f7c-a8ed-792cbfa32504-combined-ca-bundle\") pod \"keystone-db-sync-hccz6\" (UID: \"f97b7b49-f0d6-4f7c-a8ed-792cbfa32504\") " pod="openstack/keystone-db-sync-hccz6" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.039340 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f97b7b49-f0d6-4f7c-a8ed-792cbfa32504-config-data\") pod \"keystone-db-sync-hccz6\" (UID: \"f97b7b49-f0d6-4f7c-a8ed-792cbfa32504\") " pod="openstack/keystone-db-sync-hccz6" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.039389 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e66d083-dfc7-41d1-b955-752fdc14a3c2-operator-scripts\") pod \"neutron-db-create-w7rt2\" (UID: \"8e66d083-dfc7-41d1-b955-752fdc14a3c2\") " pod="openstack/neutron-db-create-w7rt2" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.043452 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1d8e473a-4e99-400b-be95-bd490bd2228b-operator-scripts\") pod \"neutron-541b-account-create-update-pssj9\" (UID: \"1d8e473a-4e99-400b-be95-bd490bd2228b\") " pod="openstack/neutron-541b-account-create-update-pssj9" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.050165 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f97b7b49-f0d6-4f7c-a8ed-792cbfa32504-config-data\") pod \"keystone-db-sync-hccz6\" (UID: \"f97b7b49-f0d6-4f7c-a8ed-792cbfa32504\") " pod="openstack/keystone-db-sync-hccz6" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.066388 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f97b7b49-f0d6-4f7c-a8ed-792cbfa32504-combined-ca-bundle\") pod \"keystone-db-sync-hccz6\" (UID: \"f97b7b49-f0d6-4f7c-a8ed-792cbfa32504\") " pod="openstack/keystone-db-sync-hccz6" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.075982 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-b610-account-create-update-c8ck9"] Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.078954 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-b610-account-create-update-c8ck9" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.085873 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.138376 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhsbn\" (UniqueName: \"kubernetes.io/projected/f97b7b49-f0d6-4f7c-a8ed-792cbfa32504-kube-api-access-dhsbn\") pod \"keystone-db-sync-hccz6\" (UID: \"f97b7b49-f0d6-4f7c-a8ed-792cbfa32504\") " pod="openstack/keystone-db-sync-hccz6" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.149095 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-b610-account-create-update-c8ck9"] Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.158181 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e66d083-dfc7-41d1-b955-752fdc14a3c2-operator-scripts\") pod \"neutron-db-create-w7rt2\" (UID: \"8e66d083-dfc7-41d1-b955-752fdc14a3c2\") " pod="openstack/neutron-db-create-w7rt2" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.159005 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d8e473a-4e99-400b-be95-bd490bd2228b-operator-scripts\") pod \"neutron-541b-account-create-update-pssj9\" (UID: \"1d8e473a-4e99-400b-be95-bd490bd2228b\") " pod="openstack/neutron-541b-account-create-update-pssj9" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.160255 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs7sj\" (UniqueName: \"kubernetes.io/projected/1d8e473a-4e99-400b-be95-bd490bd2228b-kube-api-access-hs7sj\") pod \"neutron-541b-account-create-update-pssj9\" (UID: \"1d8e473a-4e99-400b-be95-bd490bd2228b\") " pod="openstack/neutron-541b-account-create-update-pssj9" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.160439 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd7c8\" (UniqueName: \"kubernetes.io/projected/8e66d083-dfc7-41d1-b955-752fdc14a3c2-kube-api-access-pd7c8\") pod 
\"neutron-db-create-w7rt2\" (UID: \"8e66d083-dfc7-41d1-b955-752fdc14a3c2\") " pod="openstack/neutron-db-create-w7rt2" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.164592 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e66d083-dfc7-41d1-b955-752fdc14a3c2-operator-scripts\") pod \"neutron-db-create-w7rt2\" (UID: \"8e66d083-dfc7-41d1-b955-752fdc14a3c2\") " pod="openstack/neutron-db-create-w7rt2" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.171156 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d8e473a-4e99-400b-be95-bd490bd2228b-operator-scripts\") pod \"neutron-541b-account-create-update-pssj9\" (UID: \"1d8e473a-4e99-400b-be95-bd490bd2228b\") " pod="openstack/neutron-541b-account-create-update-pssj9" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.193280 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd7c8\" (UniqueName: \"kubernetes.io/projected/8e66d083-dfc7-41d1-b955-752fdc14a3c2-kube-api-access-pd7c8\") pod \"neutron-db-create-w7rt2\" (UID: \"8e66d083-dfc7-41d1-b955-752fdc14a3c2\") " pod="openstack/neutron-db-create-w7rt2" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.201684 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs7sj\" (UniqueName: \"kubernetes.io/projected/1d8e473a-4e99-400b-be95-bd490bd2228b-kube-api-access-hs7sj\") pod \"neutron-541b-account-create-update-pssj9\" (UID: \"1d8e473a-4e99-400b-be95-bd490bd2228b\") " pod="openstack/neutron-541b-account-create-update-pssj9" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.253191 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-hccz6" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.264784 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxd9f\" (UniqueName: \"kubernetes.io/projected/528add2c-0e7d-4050-a900-0970487688f3-kube-api-access-mxd9f\") pod \"heat-b610-account-create-update-c8ck9\" (UID: \"528add2c-0e7d-4050-a900-0970487688f3\") " pod="openstack/heat-b610-account-create-update-c8ck9" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.280975 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/528add2c-0e7d-4050-a900-0970487688f3-operator-scripts\") pod \"heat-b610-account-create-update-c8ck9\" (UID: \"528add2c-0e7d-4050-a900-0970487688f3\") " pod="openstack/heat-b610-account-create-update-c8ck9" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.281755 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-541b-account-create-update-pssj9" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.291329 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-w7rt2" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.386043 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/528add2c-0e7d-4050-a900-0970487688f3-operator-scripts\") pod \"heat-b610-account-create-update-c8ck9\" (UID: \"528add2c-0e7d-4050-a900-0970487688f3\") " pod="openstack/heat-b610-account-create-update-c8ck9" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.386214 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxd9f\" (UniqueName: \"kubernetes.io/projected/528add2c-0e7d-4050-a900-0970487688f3-kube-api-access-mxd9f\") pod \"heat-b610-account-create-update-c8ck9\" (UID: \"528add2c-0e7d-4050-a900-0970487688f3\") " pod="openstack/heat-b610-account-create-update-c8ck9" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.387787 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/528add2c-0e7d-4050-a900-0970487688f3-operator-scripts\") pod \"heat-b610-account-create-update-c8ck9\" (UID: \"528add2c-0e7d-4050-a900-0970487688f3\") " pod="openstack/heat-b610-account-create-update-c8ck9" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.422910 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxd9f\" (UniqueName: \"kubernetes.io/projected/528add2c-0e7d-4050-a900-0970487688f3-kube-api-access-mxd9f\") pod \"heat-b610-account-create-update-c8ck9\" (UID: \"528add2c-0e7d-4050-a900-0970487688f3\") " pod="openstack/heat-b610-account-create-update-c8ck9" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.454843 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7jb5m"] Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.490466 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-b610-account-create-update-c8ck9" Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.578698 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3e38-account-create-update-4vpgq"] Jan 31 09:23:41 crc kubenswrapper[4830]: W0131 09:23:41.592281 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2af2731d_2c7c_46c2_abcc_4846583de531.slice/crio-31e37041bcaae28ed8dfb307f3d5bc2dfd46fcf43009664d63514a67ce917b71 WatchSource:0}: Error finding container 31e37041bcaae28ed8dfb307f3d5bc2dfd46fcf43009664d63514a67ce917b71: Status 404 returned error can't find the container with id 31e37041bcaae28ed8dfb307f3d5bc2dfd46fcf43009664d63514a67ce917b71 Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.733107 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7jb5m" event={"ID":"ce165a30-da01-4e57-996c-de05fbe74498","Type":"ContainerStarted","Data":"df5f2d851a5f38d049523002c4bf334d5a9b9d332dd0a491534af8fcb5f15f7f"} Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.738578 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-cn5jd"] Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.740819 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3e38-account-create-update-4vpgq" event={"ID":"2af2731d-2c7c-46c2-abcc-4846583de531","Type":"ContainerStarted","Data":"31e37041bcaae28ed8dfb307f3d5bc2dfd46fcf43009664d63514a67ce917b71"} Jan 31 09:23:41 crc kubenswrapper[4830]: I0131 09:23:41.965195 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-w9h2w"] Jan 31 09:23:42 crc kubenswrapper[4830]: W0131 09:23:42.001606 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52fdb459_dc6a_4e56_8a6b_379d4c74ce62.slice/crio-d2898fda51a505889b6d31f0230e7b14ac2ee83e9fe5f16500149b1bb9ffcef6 WatchSource:0}: Error finding container d2898fda51a505889b6d31f0230e7b14ac2ee83e9fe5f16500149b1bb9ffcef6: Status 404 returned error can't find the container with id d2898fda51a505889b6d31f0230e7b14ac2ee83e9fe5f16500149b1bb9ffcef6 Jan 31 09:23:42 crc kubenswrapper[4830]: I0131 09:23:42.152820 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-hccz6"] Jan 31 09:23:42 crc kubenswrapper[4830]: I0131 09:23:42.164132 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-87e0-account-create-update-6vbtx"] Jan 31 09:23:42 crc kubenswrapper[4830]: I0131 09:23:42.291461 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-b610-account-create-update-c8ck9"] Jan 31 09:23:42 crc kubenswrapper[4830]: W0131 09:23:42.320408 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod528add2c_0e7d_4050_a900_0970487688f3.slice/crio-bb034e030dadd2a6a2267910a4160bf201801b47d979753ef8d0b9e1accd7494 WatchSource:0}: Error finding container bb034e030dadd2a6a2267910a4160bf201801b47d979753ef8d0b9e1accd7494: Status 404 returned error can't find the container with id bb034e030dadd2a6a2267910a4160bf201801b47d979753ef8d0b9e1accd7494 Jan 31 09:23:42 crc kubenswrapper[4830]: I0131 09:23:42.492813 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-w7rt2"] Jan 31 09:23:42 crc kubenswrapper[4830]: I0131 
09:23:42.562837 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-541b-account-create-update-pssj9"] Jan 31 09:23:42 crc kubenswrapper[4830]: I0131 09:23:42.777886 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-w9h2w" event={"ID":"52fdb459-dc6a-4e56-8a6b-379d4c74ce62","Type":"ContainerStarted","Data":"cd09f39c8b606e0207ce294d7fcfea1783f3cddd44a037ff3ed316fd176521a6"} Jan 31 09:23:42 crc kubenswrapper[4830]: I0131 09:23:42.778375 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-w9h2w" event={"ID":"52fdb459-dc6a-4e56-8a6b-379d4c74ce62","Type":"ContainerStarted","Data":"d2898fda51a505889b6d31f0230e7b14ac2ee83e9fe5f16500149b1bb9ffcef6"} Jan 31 09:23:42 crc kubenswrapper[4830]: I0131 09:23:42.817584 4830 generic.go:334] "Generic (PLEG): container finished" podID="2af2731d-2c7c-46c2-abcc-4846583de531" containerID="2425a143d49eee8c420681009c13f58ef81bd146f09c31561f96fc2adad60cab" exitCode=0 Jan 31 09:23:42 crc kubenswrapper[4830]: I0131 09:23:42.817699 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3e38-account-create-update-4vpgq" event={"ID":"2af2731d-2c7c-46c2-abcc-4846583de531","Type":"ContainerDied","Data":"2425a143d49eee8c420681009c13f58ef81bd146f09c31561f96fc2adad60cab"} Jan 31 09:23:42 crc kubenswrapper[4830]: I0131 09:23:42.829507 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-w7rt2" event={"ID":"8e66d083-dfc7-41d1-b955-752fdc14a3c2","Type":"ContainerStarted","Data":"388b602c22398d15901d9e72f83870d1898a97a2016a148a7be4836915195384"} Jan 31 09:23:42 crc kubenswrapper[4830]: I0131 09:23:42.838789 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-cn5jd" event={"ID":"e1444b15-29b5-4433-8ea5-4b533b54f08a","Type":"ContainerStarted","Data":"b44444dadc89a2a407b63173c75f695b101f9bbf2eafcb9d8f114430787f4991"} Jan 31 09:23:42 crc kubenswrapper[4830]: I0131 09:23:42.838850 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-cn5jd" event={"ID":"e1444b15-29b5-4433-8ea5-4b533b54f08a","Type":"ContainerStarted","Data":"8656308559e4b282ea8c542b7609aa986f2501173b389c73f1857a18229a2ef4"} Jan 31 09:23:42 crc kubenswrapper[4830]: I0131 09:23:42.841403 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-541b-account-create-update-pssj9" event={"ID":"1d8e473a-4e99-400b-be95-bd490bd2228b","Type":"ContainerStarted","Data":"957d9b611c86cdedb706ffa9651f0af9700ce81192cb12f5be3dc037228f14d2"} Jan 31 09:23:42 crc kubenswrapper[4830]: I0131 09:23:42.842745 4830 generic.go:334] "Generic (PLEG): container finished" podID="ce165a30-da01-4e57-996c-de05fbe74498" containerID="cbe92bf9a4067d1c05c9f7af4a36a6499a7ce9ba145c65a23dc54f836a20bb44" exitCode=0 Jan 31 09:23:42 crc kubenswrapper[4830]: I0131 09:23:42.842797 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7jb5m" event={"ID":"ce165a30-da01-4e57-996c-de05fbe74498","Type":"ContainerDied","Data":"cbe92bf9a4067d1c05c9f7af4a36a6499a7ce9ba145c65a23dc54f836a20bb44"} Jan 31 09:23:42 crc kubenswrapper[4830]: I0131 09:23:42.844033 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hccz6" event={"ID":"f97b7b49-f0d6-4f7c-a8ed-792cbfa32504","Type":"ContainerStarted","Data":"8a1d3dfec006e608c43bd1ba34f56de1b51c948c99dae36fc984199c646d18f5"} Jan 31 09:23:42 crc kubenswrapper[4830]: I0131 09:23:42.850373 4830 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/heat-b610-account-create-update-c8ck9" event={"ID":"528add2c-0e7d-4050-a900-0970487688f3","Type":"ContainerStarted","Data":"bb034e030dadd2a6a2267910a4160bf201801b47d979753ef8d0b9e1accd7494"} Jan 31 09:23:42 crc kubenswrapper[4830]: I0131 09:23:42.859752 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-w9h2w" podStartSLOduration=2.85971563 podStartE2EDuration="2.85971563s" podCreationTimestamp="2026-01-31 09:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:23:42.826886827 +0000 UTC m=+1367.320249269" watchObservedRunningTime="2026-01-31 09:23:42.85971563 +0000 UTC m=+1367.353078072" Jan 31 09:23:42 crc kubenswrapper[4830]: I0131 09:23:42.867415 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-87e0-account-create-update-6vbtx" event={"ID":"4140bbd2-fcdd-482d-9224-5248d75e4317","Type":"ContainerStarted","Data":"92ba01ee4673a32e19630a1774c6b842ef9680ec14c9c3a47683549e748e3cf0"} Jan 31 09:23:42 crc kubenswrapper[4830]: I0131 09:23:42.867481 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-87e0-account-create-update-6vbtx" event={"ID":"4140bbd2-fcdd-482d-9224-5248d75e4317","Type":"ContainerStarted","Data":"526bb401f2a491ae57e49ea4ecf87e3e9e02b31607d3e4ef832303922a6e1792"} Jan 31 09:23:42 crc kubenswrapper[4830]: I0131 09:23:42.912189 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-cn5jd" podStartSLOduration=2.912154031 podStartE2EDuration="2.912154031s" podCreationTimestamp="2026-01-31 09:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:23:42.90226401 +0000 UTC m=+1367.395626452" watchObservedRunningTime="2026-01-31 09:23:42.912154031 +0000 UTC m=+1367.405516493" Jan 31 09:23:42 crc kubenswrapper[4830]: I0131 09:23:42.942080 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-87e0-account-create-update-6vbtx" podStartSLOduration=2.942050531 podStartE2EDuration="2.942050531s" podCreationTimestamp="2026-01-31 09:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:23:42.931154381 +0000 UTC m=+1367.424516823" watchObservedRunningTime="2026-01-31 09:23:42.942050531 +0000 UTC m=+1367.435413253" Jan 31 09:23:43 crc kubenswrapper[4830]: I0131 09:23:43.882126 4830 generic.go:334] "Generic (PLEG): container finished" podID="e1444b15-29b5-4433-8ea5-4b533b54f08a" containerID="b44444dadc89a2a407b63173c75f695b101f9bbf2eafcb9d8f114430787f4991" exitCode=0 Jan 31 09:23:43 crc kubenswrapper[4830]: I0131 09:23:43.882289 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-cn5jd" event={"ID":"e1444b15-29b5-4433-8ea5-4b533b54f08a","Type":"ContainerDied","Data":"b44444dadc89a2a407b63173c75f695b101f9bbf2eafcb9d8f114430787f4991"} Jan 31 09:23:43 crc kubenswrapper[4830]: I0131 09:23:43.884546 4830 generic.go:334] "Generic (PLEG): container finished" podID="528add2c-0e7d-4050-a900-0970487688f3" containerID="a9fea2162be47a2617675b01ee25975cc0f969ac4085fe30a352a60229108deb" exitCode=0 Jan 31 09:23:43 crc kubenswrapper[4830]: I0131 09:23:43.884607 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-b610-account-create-update-c8ck9" event={"ID":"528add2c-0e7d-4050-a900-0970487688f3","Type":"ContainerDied","Data":"a9fea2162be47a2617675b01ee25975cc0f969ac4085fe30a352a60229108deb"} Jan 31 09:23:43 crc kubenswrapper[4830]: I0131 09:23:43.895470 4830 generic.go:334] "Generic (PLEG): container finished" podID="4140bbd2-fcdd-482d-9224-5248d75e4317" containerID="92ba01ee4673a32e19630a1774c6b842ef9680ec14c9c3a47683549e748e3cf0" exitCode=0 Jan 31 09:23:43 crc kubenswrapper[4830]: I0131 09:23:43.896214 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-87e0-account-create-update-6vbtx" event={"ID":"4140bbd2-fcdd-482d-9224-5248d75e4317","Type":"ContainerDied","Data":"92ba01ee4673a32e19630a1774c6b842ef9680ec14c9c3a47683549e748e3cf0"} Jan 31 09:23:43 crc kubenswrapper[4830]: I0131 09:23:43.914997 4830 generic.go:334] "Generic (PLEG): container finished" podID="52fdb459-dc6a-4e56-8a6b-379d4c74ce62" containerID="cd09f39c8b606e0207ce294d7fcfea1783f3cddd44a037ff3ed316fd176521a6" exitCode=0 Jan 31 09:23:43 crc kubenswrapper[4830]: I0131 09:23:43.915084 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-w9h2w" event={"ID":"52fdb459-dc6a-4e56-8a6b-379d4c74ce62","Type":"ContainerDied","Data":"cd09f39c8b606e0207ce294d7fcfea1783f3cddd44a037ff3ed316fd176521a6"} Jan 31 09:23:43 crc kubenswrapper[4830]: I0131 09:23:43.918555 4830 generic.go:334] "Generic (PLEG): container finished" podID="1d8e473a-4e99-400b-be95-bd490bd2228b" containerID="c11a8c154f7fd5ec5bd847b50555ad9c581e8302954c26162b88dd6d00ba2007" exitCode=0 Jan 31 09:23:43 crc kubenswrapper[4830]: I0131 09:23:43.918636 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-541b-account-create-update-pssj9" event={"ID":"1d8e473a-4e99-400b-be95-bd490bd2228b","Type":"ContainerDied","Data":"c11a8c154f7fd5ec5bd847b50555ad9c581e8302954c26162b88dd6d00ba2007"} Jan 31 09:23:43 crc kubenswrapper[4830]: I0131 09:23:43.937508 4830 generic.go:334] "Generic (PLEG): container finished" podID="8e66d083-dfc7-41d1-b955-752fdc14a3c2" containerID="948d37a0adf9cf63fe9c284791563ef7e989cc67ae703351cf0e15dccc7ba20e" exitCode=0 Jan 31 09:23:43 crc kubenswrapper[4830]: I0131 09:23:43.937814 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-w7rt2" event={"ID":"8e66d083-dfc7-41d1-b955-752fdc14a3c2","Type":"ContainerDied","Data":"948d37a0adf9cf63fe9c284791563ef7e989cc67ae703351cf0e15dccc7ba20e"} Jan 31 09:23:44 crc kubenswrapper[4830]: I0131 09:23:44.544399 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7jb5m" Jan 31 09:23:44 crc kubenswrapper[4830]: I0131 09:23:44.547913 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-3e38-account-create-update-4vpgq" Jan 31 09:23:44 crc kubenswrapper[4830]: I0131 09:23:44.636446 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnds2\" (UniqueName: \"kubernetes.io/projected/ce165a30-da01-4e57-996c-de05fbe74498-kube-api-access-lnds2\") pod \"ce165a30-da01-4e57-996c-de05fbe74498\" (UID: \"ce165a30-da01-4e57-996c-de05fbe74498\") " Jan 31 09:23:44 crc kubenswrapper[4830]: I0131 09:23:44.636643 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt8rw\" (UniqueName: \"kubernetes.io/projected/2af2731d-2c7c-46c2-abcc-4846583de531-kube-api-access-mt8rw\") pod \"2af2731d-2c7c-46c2-abcc-4846583de531\" (UID: \"2af2731d-2c7c-46c2-abcc-4846583de531\") " Jan 31 09:23:44 crc kubenswrapper[4830]: I0131 09:23:44.636831 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2af2731d-2c7c-46c2-abcc-4846583de531-operator-scripts\") pod \"2af2731d-2c7c-46c2-abcc-4846583de531\" (UID: \"2af2731d-2c7c-46c2-abcc-4846583de531\") " Jan 31 09:23:44 crc kubenswrapper[4830]: I0131 09:23:44.636889 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce165a30-da01-4e57-996c-de05fbe74498-operator-scripts\") pod \"ce165a30-da01-4e57-996c-de05fbe74498\" (UID: \"ce165a30-da01-4e57-996c-de05fbe74498\") " Jan 31 09:23:44 crc kubenswrapper[4830]: I0131 09:23:44.638236 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2af2731d-2c7c-46c2-abcc-4846583de531-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2af2731d-2c7c-46c2-abcc-4846583de531" (UID: "2af2731d-2c7c-46c2-abcc-4846583de531"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:44 crc kubenswrapper[4830]: I0131 09:23:44.638918 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce165a30-da01-4e57-996c-de05fbe74498-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce165a30-da01-4e57-996c-de05fbe74498" (UID: "ce165a30-da01-4e57-996c-de05fbe74498"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:44 crc kubenswrapper[4830]: I0131 09:23:44.650214 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2af2731d-2c7c-46c2-abcc-4846583de531-kube-api-access-mt8rw" (OuterVolumeSpecName: "kube-api-access-mt8rw") pod "2af2731d-2c7c-46c2-abcc-4846583de531" (UID: "2af2731d-2c7c-46c2-abcc-4846583de531"). InnerVolumeSpecName "kube-api-access-mt8rw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:44 crc kubenswrapper[4830]: I0131 09:23:44.666176 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce165a30-da01-4e57-996c-de05fbe74498-kube-api-access-lnds2" (OuterVolumeSpecName: "kube-api-access-lnds2") pod "ce165a30-da01-4e57-996c-de05fbe74498" (UID: "ce165a30-da01-4e57-996c-de05fbe74498"). InnerVolumeSpecName "kube-api-access-lnds2". 
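
The unmount entries here follow the volume manager's fixed three-step pattern: reconciler_common.go:159 logs "operationExecutor.UnmountVolume started", operation_generator.go:803 logs "UnmountVolume.TearDown succeeded", and reconciler_common.go:293 closes the loop with "Volume detached". A teardown that stalls never reaches the third step, so pairing the first and third messages is a quick health check on a capture like this one. Below is a minimal, stdlib-only Go sketch of that audit; the kubelet.log path is a hypothetical capture of this journal, and keying by volume name alone (rather than pod UID plus name) is a simplifying assumption, since names like "operator-scripts" recur across pods.

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

var (
	// the journal escapes quotes inside klog messages, hence \" in both patterns
	started  = regexp.MustCompile(`UnmountVolume started for volume \\"([^"\\]+)\\"`)
	detached = regexp.MustCompile(`Volume detached for volume \\"([^"\\]+)\\"`)
)

func main() {
	raw, err := os.ReadFile("kubelet.log") // hypothetical capture of this journal
	if err != nil {
		panic(err)
	}
	// single entries arrive wrapped across physical lines in this capture,
	// so flatten the text before matching
	text := strings.ReplaceAll(string(raw), "\n", " ")

	pending := map[string]int{} // volume name -> unmounts with no detach yet
	for _, m := range started.FindAllStringSubmatch(text, -1) {
		pending[m[1]]++
	}
	for _, m := range detached.FindAllStringSubmatch(text, -1) {
		pending[m[1]]--
	}
	for vol, n := range pending {
		switch {
		case n > 0:
			fmt.Printf("volume %q: %d unmount(s) without a matching detach\n", vol, n)
		case n < 0:
			// the matching start fell before this capture window; harmless
			fmt.Printf("volume %q: detach whose start predates the capture\n", vol)
		}
	}
}
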
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:44 crc kubenswrapper[4830]: I0131 09:23:44.740039 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnds2\" (UniqueName: \"kubernetes.io/projected/ce165a30-da01-4e57-996c-de05fbe74498-kube-api-access-lnds2\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:44 crc kubenswrapper[4830]: I0131 09:23:44.740077 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt8rw\" (UniqueName: \"kubernetes.io/projected/2af2731d-2c7c-46c2-abcc-4846583de531-kube-api-access-mt8rw\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:44 crc kubenswrapper[4830]: I0131 09:23:44.740087 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2af2731d-2c7c-46c2-abcc-4846583de531-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:44 crc kubenswrapper[4830]: I0131 09:23:44.740096 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce165a30-da01-4e57-996c-de05fbe74498-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:44 crc kubenswrapper[4830]: I0131 09:23:44.952629 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7jb5m" Jan 31 09:23:44 crc kubenswrapper[4830]: I0131 09:23:44.952685 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7jb5m" event={"ID":"ce165a30-da01-4e57-996c-de05fbe74498","Type":"ContainerDied","Data":"df5f2d851a5f38d049523002c4bf334d5a9b9d332dd0a491534af8fcb5f15f7f"} Jan 31 09:23:44 crc kubenswrapper[4830]: I0131 09:23:44.952770 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df5f2d851a5f38d049523002c4bf334d5a9b9d332dd0a491534af8fcb5f15f7f" Jan 31 09:23:44 crc kubenswrapper[4830]: I0131 09:23:44.956010 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3e38-account-create-update-4vpgq" event={"ID":"2af2731d-2c7c-46c2-abcc-4846583de531","Type":"ContainerDied","Data":"31e37041bcaae28ed8dfb307f3d5bc2dfd46fcf43009664d63514a67ce917b71"} Jan 31 09:23:44 crc kubenswrapper[4830]: I0131 09:23:44.956078 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31e37041bcaae28ed8dfb307f3d5bc2dfd46fcf43009664d63514a67ce917b71" Jan 31 09:23:44 crc kubenswrapper[4830]: I0131 09:23:44.956172 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3e38-account-create-update-4vpgq" Jan 31 09:23:45 crc kubenswrapper[4830]: I0131 09:23:45.586797 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-w9h2w" Jan 31 09:23:45 crc kubenswrapper[4830]: I0131 09:23:45.768476 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52fdb459-dc6a-4e56-8a6b-379d4c74ce62-operator-scripts\") pod \"52fdb459-dc6a-4e56-8a6b-379d4c74ce62\" (UID: \"52fdb459-dc6a-4e56-8a6b-379d4c74ce62\") " Jan 31 09:23:45 crc kubenswrapper[4830]: I0131 09:23:45.768611 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jn8dm\" (UniqueName: \"kubernetes.io/projected/52fdb459-dc6a-4e56-8a6b-379d4c74ce62-kube-api-access-jn8dm\") pod \"52fdb459-dc6a-4e56-8a6b-379d4c74ce62\" (UID: \"52fdb459-dc6a-4e56-8a6b-379d4c74ce62\") " Jan 31 09:23:45 crc kubenswrapper[4830]: I0131 09:23:45.770182 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52fdb459-dc6a-4e56-8a6b-379d4c74ce62-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "52fdb459-dc6a-4e56-8a6b-379d4c74ce62" (UID: "52fdb459-dc6a-4e56-8a6b-379d4c74ce62"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:45 crc kubenswrapper[4830]: I0131 09:23:45.789047 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52fdb459-dc6a-4e56-8a6b-379d4c74ce62-kube-api-access-jn8dm" (OuterVolumeSpecName: "kube-api-access-jn8dm") pod "52fdb459-dc6a-4e56-8a6b-379d4c74ce62" (UID: "52fdb459-dc6a-4e56-8a6b-379d4c74ce62"). InnerVolumeSpecName "kube-api-access-jn8dm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:45 crc kubenswrapper[4830]: I0131 09:23:45.872506 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52fdb459-dc6a-4e56-8a6b-379d4c74ce62-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:45 crc kubenswrapper[4830]: I0131 09:23:45.872877 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jn8dm\" (UniqueName: \"kubernetes.io/projected/52fdb459-dc6a-4e56-8a6b-379d4c74ce62-kube-api-access-jn8dm\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:45 crc kubenswrapper[4830]: I0131 09:23:45.964376 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-cn5jd" Jan 31 09:23:45 crc kubenswrapper[4830]: I0131 09:23:45.974020 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-cn5jd" event={"ID":"e1444b15-29b5-4433-8ea5-4b533b54f08a","Type":"ContainerDied","Data":"8656308559e4b282ea8c542b7609aa986f2501173b389c73f1857a18229a2ef4"} Jan 31 09:23:45 crc kubenswrapper[4830]: I0131 09:23:45.974074 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8656308559e4b282ea8c542b7609aa986f2501173b389c73f1857a18229a2ef4" Jan 31 09:23:45 crc kubenswrapper[4830]: I0131 09:23:45.974150 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-cn5jd" Jan 31 09:23:45 crc kubenswrapper[4830]: I0131 09:23:45.990119 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-w9h2w" event={"ID":"52fdb459-dc6a-4e56-8a6b-379d4c74ce62","Type":"ContainerDied","Data":"d2898fda51a505889b6d31f0230e7b14ac2ee83e9fe5f16500149b1bb9ffcef6"} Jan 31 09:23:45 crc kubenswrapper[4830]: I0131 09:23:45.990180 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2898fda51a505889b6d31f0230e7b14ac2ee83e9fe5f16500149b1bb9ffcef6" Jan 31 09:23:45 crc kubenswrapper[4830]: I0131 09:23:45.990259 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-w9h2w" Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.084385 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1444b15-29b5-4433-8ea5-4b533b54f08a-operator-scripts\") pod \"e1444b15-29b5-4433-8ea5-4b533b54f08a\" (UID: \"e1444b15-29b5-4433-8ea5-4b533b54f08a\") " Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.084601 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z55s\" (UniqueName: \"kubernetes.io/projected/e1444b15-29b5-4433-8ea5-4b533b54f08a-kube-api-access-5z55s\") pod \"e1444b15-29b5-4433-8ea5-4b533b54f08a\" (UID: \"e1444b15-29b5-4433-8ea5-4b533b54f08a\") " Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.085144 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1444b15-29b5-4433-8ea5-4b533b54f08a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1444b15-29b5-4433-8ea5-4b533b54f08a" (UID: "e1444b15-29b5-4433-8ea5-4b533b54f08a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.085665 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1444b15-29b5-4433-8ea5-4b533b54f08a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.092432 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1444b15-29b5-4433-8ea5-4b533b54f08a-kube-api-access-5z55s" (OuterVolumeSpecName: "kube-api-access-5z55s") pod "e1444b15-29b5-4433-8ea5-4b533b54f08a" (UID: "e1444b15-29b5-4433-8ea5-4b533b54f08a"). InnerVolumeSpecName "kube-api-access-5z55s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.099734 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-87e0-account-create-update-6vbtx" Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.135692 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-541b-account-create-update-pssj9" Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.145390 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-b610-account-create-update-c8ck9" Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.152198 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-w7rt2" Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.187303 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4140bbd2-fcdd-482d-9224-5248d75e4317-operator-scripts\") pod \"4140bbd2-fcdd-482d-9224-5248d75e4317\" (UID: \"4140bbd2-fcdd-482d-9224-5248d75e4317\") " Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.187623 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctgr2\" (UniqueName: \"kubernetes.io/projected/4140bbd2-fcdd-482d-9224-5248d75e4317-kube-api-access-ctgr2\") pod \"4140bbd2-fcdd-482d-9224-5248d75e4317\" (UID: \"4140bbd2-fcdd-482d-9224-5248d75e4317\") " Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.188713 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z55s\" (UniqueName: \"kubernetes.io/projected/e1444b15-29b5-4433-8ea5-4b533b54f08a-kube-api-access-5z55s\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.198718 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4140bbd2-fcdd-482d-9224-5248d75e4317-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4140bbd2-fcdd-482d-9224-5248d75e4317" (UID: "4140bbd2-fcdd-482d-9224-5248d75e4317"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.201819 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4140bbd2-fcdd-482d-9224-5248d75e4317-kube-api-access-ctgr2" (OuterVolumeSpecName: "kube-api-access-ctgr2") pod "4140bbd2-fcdd-482d-9224-5248d75e4317" (UID: "4140bbd2-fcdd-482d-9224-5248d75e4317"). InnerVolumeSpecName "kube-api-access-ctgr2". 
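
The repeated util.go:48 "No ready sandbox for pod can be found. Need to start a new one" messages in this stretch are the kubelet noticing that each finished db-create and account-create job pod has a dead sandbox; for run-to-completion jobs that is routine, whereas the same message recurring for a long-running pod would point at sandbox churn. Counting occurrences per pod separates the two cases. A sketch under the same assumptions as above (hypothetical kubelet.log capture, flattened before matching because entries wrap across physical lines):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	raw, err := os.ReadFile("kubelet.log") // hypothetical capture of this journal
	if err != nil {
		panic(err)
	}
	text := strings.ReplaceAll(string(raw), "\n", " ") // undo line wrapping

	// covers both the util.go:48 ("No ready sandbox") and util.go:30
	// ("No sandbox") variants seen in this capture
	re := regexp.MustCompile(`No (?:ready )?sandbox for pod can be found\.\s+Need to start a new one"\s+pod="([^"]+)"`)
	counts := map[string]int{}
	for _, m := range re.FindAllStringSubmatch(text, -1) {
		counts[m[1]]++
	}
	for pod, n := range counts {
		fmt.Printf("%-55s sandbox (re)created %d time(s)\n", pod, n)
	}
}
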
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.290382 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e66d083-dfc7-41d1-b955-752fdc14a3c2-operator-scripts\") pod \"8e66d083-dfc7-41d1-b955-752fdc14a3c2\" (UID: \"8e66d083-dfc7-41d1-b955-752fdc14a3c2\") " Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.290620 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d8e473a-4e99-400b-be95-bd490bd2228b-operator-scripts\") pod \"1d8e473a-4e99-400b-be95-bd490bd2228b\" (UID: \"1d8e473a-4e99-400b-be95-bd490bd2228b\") " Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.290678 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxd9f\" (UniqueName: \"kubernetes.io/projected/528add2c-0e7d-4050-a900-0970487688f3-kube-api-access-mxd9f\") pod \"528add2c-0e7d-4050-a900-0970487688f3\" (UID: \"528add2c-0e7d-4050-a900-0970487688f3\") " Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.290776 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs7sj\" (UniqueName: \"kubernetes.io/projected/1d8e473a-4e99-400b-be95-bd490bd2228b-kube-api-access-hs7sj\") pod \"1d8e473a-4e99-400b-be95-bd490bd2228b\" (UID: \"1d8e473a-4e99-400b-be95-bd490bd2228b\") " Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.290862 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/528add2c-0e7d-4050-a900-0970487688f3-operator-scripts\") pod \"528add2c-0e7d-4050-a900-0970487688f3\" (UID: \"528add2c-0e7d-4050-a900-0970487688f3\") " Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.290920 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pd7c8\" (UniqueName: \"kubernetes.io/projected/8e66d083-dfc7-41d1-b955-752fdc14a3c2-kube-api-access-pd7c8\") pod \"8e66d083-dfc7-41d1-b955-752fdc14a3c2\" (UID: \"8e66d083-dfc7-41d1-b955-752fdc14a3c2\") " Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.291423 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e66d083-dfc7-41d1-b955-752fdc14a3c2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e66d083-dfc7-41d1-b955-752fdc14a3c2" (UID: "8e66d083-dfc7-41d1-b955-752fdc14a3c2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.291500 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctgr2\" (UniqueName: \"kubernetes.io/projected/4140bbd2-fcdd-482d-9224-5248d75e4317-kube-api-access-ctgr2\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.291516 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4140bbd2-fcdd-482d-9224-5248d75e4317-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.292045 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/528add2c-0e7d-4050-a900-0970487688f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "528add2c-0e7d-4050-a900-0970487688f3" (UID: "528add2c-0e7d-4050-a900-0970487688f3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.292503 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d8e473a-4e99-400b-be95-bd490bd2228b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d8e473a-4e99-400b-be95-bd490bd2228b" (UID: "1d8e473a-4e99-400b-be95-bd490bd2228b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.301359 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d8e473a-4e99-400b-be95-bd490bd2228b-kube-api-access-hs7sj" (OuterVolumeSpecName: "kube-api-access-hs7sj") pod "1d8e473a-4e99-400b-be95-bd490bd2228b" (UID: "1d8e473a-4e99-400b-be95-bd490bd2228b"). InnerVolumeSpecName "kube-api-access-hs7sj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.303069 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/528add2c-0e7d-4050-a900-0970487688f3-kube-api-access-mxd9f" (OuterVolumeSpecName: "kube-api-access-mxd9f") pod "528add2c-0e7d-4050-a900-0970487688f3" (UID: "528add2c-0e7d-4050-a900-0970487688f3"). InnerVolumeSpecName "kube-api-access-mxd9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.303495 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e66d083-dfc7-41d1-b955-752fdc14a3c2-kube-api-access-pd7c8" (OuterVolumeSpecName: "kube-api-access-pd7c8") pod "8e66d083-dfc7-41d1-b955-752fdc14a3c2" (UID: "8e66d083-dfc7-41d1-b955-752fdc14a3c2"). InnerVolumeSpecName "kube-api-access-pd7c8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.395951 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d8e473a-4e99-400b-be95-bd490bd2228b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.396001 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxd9f\" (UniqueName: \"kubernetes.io/projected/528add2c-0e7d-4050-a900-0970487688f3-kube-api-access-mxd9f\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.396016 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs7sj\" (UniqueName: \"kubernetes.io/projected/1d8e473a-4e99-400b-be95-bd490bd2228b-kube-api-access-hs7sj\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.396030 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/528add2c-0e7d-4050-a900-0970487688f3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.396047 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pd7c8\" (UniqueName: \"kubernetes.io/projected/8e66d083-dfc7-41d1-b955-752fdc14a3c2-kube-api-access-pd7c8\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:46 crc kubenswrapper[4830]: I0131 09:23:46.396060 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e66d083-dfc7-41d1-b955-752fdc14a3c2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:47 crc kubenswrapper[4830]: I0131 09:23:47.002657 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-b610-account-create-update-c8ck9" event={"ID":"528add2c-0e7d-4050-a900-0970487688f3","Type":"ContainerDied","Data":"bb034e030dadd2a6a2267910a4160bf201801b47d979753ef8d0b9e1accd7494"} Jan 31 09:23:47 crc kubenswrapper[4830]: I0131 09:23:47.002710 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb034e030dadd2a6a2267910a4160bf201801b47d979753ef8d0b9e1accd7494" Jan 31 09:23:47 crc kubenswrapper[4830]: I0131 09:23:47.002779 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-b610-account-create-update-c8ck9" Jan 31 09:23:47 crc kubenswrapper[4830]: I0131 09:23:47.006695 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-87e0-account-create-update-6vbtx" event={"ID":"4140bbd2-fcdd-482d-9224-5248d75e4317","Type":"ContainerDied","Data":"526bb401f2a491ae57e49ea4ecf87e3e9e02b31607d3e4ef832303922a6e1792"} Jan 31 09:23:47 crc kubenswrapper[4830]: I0131 09:23:47.006718 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="526bb401f2a491ae57e49ea4ecf87e3e9e02b31607d3e4ef832303922a6e1792" Jan 31 09:23:47 crc kubenswrapper[4830]: I0131 09:23:47.006742 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-87e0-account-create-update-6vbtx" Jan 31 09:23:47 crc kubenswrapper[4830]: I0131 09:23:47.010610 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-541b-account-create-update-pssj9" Jan 31 09:23:47 crc kubenswrapper[4830]: I0131 09:23:47.010581 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-541b-account-create-update-pssj9" event={"ID":"1d8e473a-4e99-400b-be95-bd490bd2228b","Type":"ContainerDied","Data":"957d9b611c86cdedb706ffa9651f0af9700ce81192cb12f5be3dc037228f14d2"} Jan 31 09:23:47 crc kubenswrapper[4830]: I0131 09:23:47.010788 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="957d9b611c86cdedb706ffa9651f0af9700ce81192cb12f5be3dc037228f14d2" Jan 31 09:23:47 crc kubenswrapper[4830]: I0131 09:23:47.014337 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-w7rt2" event={"ID":"8e66d083-dfc7-41d1-b955-752fdc14a3c2","Type":"ContainerDied","Data":"388b602c22398d15901d9e72f83870d1898a97a2016a148a7be4836915195384"} Jan 31 09:23:47 crc kubenswrapper[4830]: I0131 09:23:47.014401 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="388b602c22398d15901d9e72f83870d1898a97a2016a148a7be4836915195384" Jan 31 09:23:47 crc kubenswrapper[4830]: I0131 09:23:47.014489 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-w7rt2" Jan 31 09:23:48 crc kubenswrapper[4830]: E0131 09:23:48.295557 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42eafeb6_68c0_479b_bc77_62967566390e.slice/crio-6f37797019de65423359308de85954a8c167fc047dac50a8bb217196a6d744b8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42eafeb6_68c0_479b_bc77_62967566390e.slice/crio-conmon-6f37797019de65423359308de85954a8c167fc047dac50a8bb217196a6d744b8.scope\": RecentStats: unable to find data in memory cache]" Jan 31 09:23:48 crc kubenswrapper[4830]: E0131 09:23:48.296451 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42eafeb6_68c0_479b_bc77_62967566390e.slice/crio-conmon-6f37797019de65423359308de85954a8c167fc047dac50a8bb217196a6d744b8.scope\": RecentStats: unable to find data in memory cache]" Jan 31 09:23:49 crc kubenswrapper[4830]: I0131 09:23:49.038594 4830 generic.go:334] "Generic (PLEG): container finished" podID="42eafeb6-68c0-479b-bc77-62967566390e" containerID="6f37797019de65423359308de85954a8c167fc047dac50a8bb217196a6d744b8" exitCode=0 Jan 31 09:23:49 crc kubenswrapper[4830]: I0131 09:23:49.038653 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bktdp" event={"ID":"42eafeb6-68c0-479b-bc77-62967566390e","Type":"ContainerDied","Data":"6f37797019de65423359308de85954a8c167fc047dac50a8bb217196a6d744b8"} Jan 31 09:23:50 crc kubenswrapper[4830]: I0131 09:23:50.053074 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hccz6" event={"ID":"f97b7b49-f0d6-4f7c-a8ed-792cbfa32504","Type":"ContainerStarted","Data":"81ad4efb4d6ce18fe32a6ba05e07e2b4ad4998b237f01d1422dedc8b209c0f22"} Jan 31 09:23:50 crc kubenswrapper[4830]: I0131 09:23:50.096987 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-hccz6" podStartSLOduration=3.155865631 podStartE2EDuration="10.096960107s" 
podCreationTimestamp="2026-01-31 09:23:40 +0000 UTC" firstStartedPulling="2026-01-31 09:23:42.187061387 +0000 UTC m=+1366.680423829" lastFinishedPulling="2026-01-31 09:23:49.128155863 +0000 UTC m=+1373.621518305" observedRunningTime="2026-01-31 09:23:50.078259445 +0000 UTC m=+1374.571621887" watchObservedRunningTime="2026-01-31 09:23:50.096960107 +0000 UTC m=+1374.590322549" Jan 31 09:23:50 crc kubenswrapper[4830]: I0131 09:23:50.514217 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:50 crc kubenswrapper[4830]: I0131 09:23:50.521943 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:50 crc kubenswrapper[4830]: I0131 09:23:50.616050 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bktdp" Jan 31 09:23:50 crc kubenswrapper[4830]: I0131 09:23:50.724001 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht6rb\" (UniqueName: \"kubernetes.io/projected/42eafeb6-68c0-479b-bc77-62967566390e-kube-api-access-ht6rb\") pod \"42eafeb6-68c0-479b-bc77-62967566390e\" (UID: \"42eafeb6-68c0-479b-bc77-62967566390e\") " Jan 31 09:23:50 crc kubenswrapper[4830]: I0131 09:23:50.724305 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42eafeb6-68c0-479b-bc77-62967566390e-combined-ca-bundle\") pod \"42eafeb6-68c0-479b-bc77-62967566390e\" (UID: \"42eafeb6-68c0-479b-bc77-62967566390e\") " Jan 31 09:23:50 crc kubenswrapper[4830]: I0131 09:23:50.724331 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42eafeb6-68c0-479b-bc77-62967566390e-db-sync-config-data\") pod \"42eafeb6-68c0-479b-bc77-62967566390e\" (UID: \"42eafeb6-68c0-479b-bc77-62967566390e\") " Jan 31 09:23:50 crc kubenswrapper[4830]: I0131 09:23:50.724375 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42eafeb6-68c0-479b-bc77-62967566390e-config-data\") pod \"42eafeb6-68c0-479b-bc77-62967566390e\" (UID: \"42eafeb6-68c0-479b-bc77-62967566390e\") " Jan 31 09:23:50 crc kubenswrapper[4830]: I0131 09:23:50.731312 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42eafeb6-68c0-479b-bc77-62967566390e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "42eafeb6-68c0-479b-bc77-62967566390e" (UID: "42eafeb6-68c0-479b-bc77-62967566390e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:23:50 crc kubenswrapper[4830]: I0131 09:23:50.734099 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42eafeb6-68c0-479b-bc77-62967566390e-kube-api-access-ht6rb" (OuterVolumeSpecName: "kube-api-access-ht6rb") pod "42eafeb6-68c0-479b-bc77-62967566390e" (UID: "42eafeb6-68c0-479b-bc77-62967566390e"). InnerVolumeSpecName "kube-api-access-ht6rb". 
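
The pod_startup_latency_tracker.go:104 entries such as the keystone-db-sync-hccz6 one report two durations: podStartE2EDuration, and podStartSLOduration, which per the kubelet's startup-latency SLI excludes time spent pulling images. For keystone-db-sync-hccz6 the gap is 10.097s minus 3.156s, about 6.94s, exactly the span between firstStartedPulling (09:23:42.187) and lastFinishedPulling (09:23:49.128); for the db-create jobs earlier the two durations are equal because nothing was pulled (both pull timestamps are the zero value 0001-01-01). A sketch that does this subtraction across a capture, under the same hypothetical kubelet.log assumption as above:

package main

import (
	"fmt"
	"os"
	"regexp"
	"strconv"
	"strings"
	"time"
)

func main() {
	raw, err := os.ReadFile("kubelet.log") // hypothetical capture of this journal
	if err != nil {
		panic(err)
	}
	text := strings.ReplaceAll(string(raw), "\n", " ") // undo line wrapping

	re := regexp.MustCompile(`pod="([^"]+)"\s+podStartSLOduration=([0-9.]+)\s+podStartE2EDuration="([^"]+)"`)
	for _, m := range re.FindAllStringSubmatch(text, -1) {
		slo, err1 := strconv.ParseFloat(m[2], 64) // bare float, in seconds
		e2e, err2 := time.ParseDuration(m[3])     // quoted, carries a unit suffix
		if err1 != nil || err2 != nil {
			continue
		}
		fmt.Printf("%-55s e2e=%-14s imagePull≈%.2fs\n", m[1], e2e, e2e.Seconds()-slo)
	}
}
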
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:50 crc kubenswrapper[4830]: I0131 09:23:50.762455 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42eafeb6-68c0-479b-bc77-62967566390e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42eafeb6-68c0-479b-bc77-62967566390e" (UID: "42eafeb6-68c0-479b-bc77-62967566390e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:23:50 crc kubenswrapper[4830]: I0131 09:23:50.783819 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42eafeb6-68c0-479b-bc77-62967566390e-config-data" (OuterVolumeSpecName: "config-data") pod "42eafeb6-68c0-479b-bc77-62967566390e" (UID: "42eafeb6-68c0-479b-bc77-62967566390e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:23:50 crc kubenswrapper[4830]: I0131 09:23:50.827184 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht6rb\" (UniqueName: \"kubernetes.io/projected/42eafeb6-68c0-479b-bc77-62967566390e-kube-api-access-ht6rb\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:50 crc kubenswrapper[4830]: I0131 09:23:50.827255 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42eafeb6-68c0-479b-bc77-62967566390e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:50 crc kubenswrapper[4830]: I0131 09:23:50.827265 4830 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42eafeb6-68c0-479b-bc77-62967566390e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:50 crc kubenswrapper[4830]: I0131 09:23:50.827280 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42eafeb6-68c0-479b-bc77-62967566390e-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.065743 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bktdp" event={"ID":"42eafeb6-68c0-479b-bc77-62967566390e","Type":"ContainerDied","Data":"4792b3501233d808deb263ee9da287d71ea8e3134c6c978497a515c8cf5247be"} Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.066156 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4792b3501233d808deb263ee9da287d71ea8e3134c6c978497a515c8cf5247be" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.065823 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bktdp" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.072895 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.600893 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-j5dc2"] Jan 31 09:23:51 crc kubenswrapper[4830]: E0131 09:23:51.601536 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52fdb459-dc6a-4e56-8a6b-379d4c74ce62" containerName="mariadb-database-create" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.601558 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="52fdb459-dc6a-4e56-8a6b-379d4c74ce62" containerName="mariadb-database-create" Jan 31 09:23:51 crc kubenswrapper[4830]: E0131 09:23:51.601593 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e66d083-dfc7-41d1-b955-752fdc14a3c2" containerName="mariadb-database-create" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.601600 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e66d083-dfc7-41d1-b955-752fdc14a3c2" containerName="mariadb-database-create" Jan 31 09:23:51 crc kubenswrapper[4830]: E0131 09:23:51.601609 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="528add2c-0e7d-4050-a900-0970487688f3" containerName="mariadb-account-create-update" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.601616 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="528add2c-0e7d-4050-a900-0970487688f3" containerName="mariadb-account-create-update" Jan 31 09:23:51 crc kubenswrapper[4830]: E0131 09:23:51.601627 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42eafeb6-68c0-479b-bc77-62967566390e" containerName="glance-db-sync" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.601633 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="42eafeb6-68c0-479b-bc77-62967566390e" containerName="glance-db-sync" Jan 31 09:23:51 crc kubenswrapper[4830]: E0131 09:23:51.601645 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1444b15-29b5-4433-8ea5-4b533b54f08a" containerName="mariadb-database-create" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.601652 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1444b15-29b5-4433-8ea5-4b533b54f08a" containerName="mariadb-database-create" Jan 31 09:23:51 crc kubenswrapper[4830]: E0131 09:23:51.601659 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2af2731d-2c7c-46c2-abcc-4846583de531" containerName="mariadb-account-create-update" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.601665 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2af2731d-2c7c-46c2-abcc-4846583de531" containerName="mariadb-account-create-update" Jan 31 09:23:51 crc kubenswrapper[4830]: E0131 09:23:51.601685 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce165a30-da01-4e57-996c-de05fbe74498" containerName="mariadb-database-create" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.601691 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce165a30-da01-4e57-996c-de05fbe74498" containerName="mariadb-database-create" Jan 31 09:23:51 crc kubenswrapper[4830]: E0131 09:23:51.601703 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8e473a-4e99-400b-be95-bd490bd2228b" containerName="mariadb-account-create-update" Jan 31 09:23:51 crc 
kubenswrapper[4830]: I0131 09:23:51.601709 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8e473a-4e99-400b-be95-bd490bd2228b" containerName="mariadb-account-create-update" Jan 31 09:23:51 crc kubenswrapper[4830]: E0131 09:23:51.601742 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4140bbd2-fcdd-482d-9224-5248d75e4317" containerName="mariadb-account-create-update" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.601749 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="4140bbd2-fcdd-482d-9224-5248d75e4317" containerName="mariadb-account-create-update" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.602008 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="4140bbd2-fcdd-482d-9224-5248d75e4317" containerName="mariadb-account-create-update" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.602026 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d8e473a-4e99-400b-be95-bd490bd2228b" containerName="mariadb-account-create-update" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.602038 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e66d083-dfc7-41d1-b955-752fdc14a3c2" containerName="mariadb-database-create" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.602048 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1444b15-29b5-4433-8ea5-4b533b54f08a" containerName="mariadb-database-create" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.602059 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2af2731d-2c7c-46c2-abcc-4846583de531" containerName="mariadb-account-create-update" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.602068 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="52fdb459-dc6a-4e56-8a6b-379d4c74ce62" containerName="mariadb-database-create" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.602073 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="528add2c-0e7d-4050-a900-0970487688f3" containerName="mariadb-account-create-update" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.602082 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="42eafeb6-68c0-479b-bc77-62967566390e" containerName="glance-db-sync" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.602092 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce165a30-da01-4e57-996c-de05fbe74498" containerName="mariadb-database-create" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.603522 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.637228 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-j5dc2"] Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.758045 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-j5dc2\" (UID: \"cc653318-d8c5-4663-90ef-38b8f4b19275\") " pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.758110 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-config\") pod \"dnsmasq-dns-7ff5475cc9-j5dc2\" (UID: \"cc653318-d8c5-4663-90ef-38b8f4b19275\") " pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.758155 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-j5dc2\" (UID: \"cc653318-d8c5-4663-90ef-38b8f4b19275\") " pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.758312 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-j5dc2\" (UID: \"cc653318-d8c5-4663-90ef-38b8f4b19275\") " pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.758341 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjlmz\" (UniqueName: \"kubernetes.io/projected/cc653318-d8c5-4663-90ef-38b8f4b19275-kube-api-access-cjlmz\") pod \"dnsmasq-dns-7ff5475cc9-j5dc2\" (UID: \"cc653318-d8c5-4663-90ef-38b8f4b19275\") " pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.758497 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-j5dc2\" (UID: \"cc653318-d8c5-4663-90ef-38b8f4b19275\") " pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.860939 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-j5dc2\" (UID: \"cc653318-d8c5-4663-90ef-38b8f4b19275\") " pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.861427 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-config\") pod \"dnsmasq-dns-7ff5475cc9-j5dc2\" (UID: \"cc653318-d8c5-4663-90ef-38b8f4b19275\") " pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.861462 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-j5dc2\" (UID: \"cc653318-d8c5-4663-90ef-38b8f4b19275\") " pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.861531 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-j5dc2\" (UID: \"cc653318-d8c5-4663-90ef-38b8f4b19275\") " pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.861557 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjlmz\" (UniqueName: \"kubernetes.io/projected/cc653318-d8c5-4663-90ef-38b8f4b19275-kube-api-access-cjlmz\") pod \"dnsmasq-dns-7ff5475cc9-j5dc2\" (UID: \"cc653318-d8c5-4663-90ef-38b8f4b19275\") " pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.861650 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-j5dc2\" (UID: \"cc653318-d8c5-4663-90ef-38b8f4b19275\") " pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.863514 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-j5dc2\" (UID: \"cc653318-d8c5-4663-90ef-38b8f4b19275\") " pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.863655 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-j5dc2\" (UID: \"cc653318-d8c5-4663-90ef-38b8f4b19275\") " pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.864269 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-config\") pod \"dnsmasq-dns-7ff5475cc9-j5dc2\" (UID: \"cc653318-d8c5-4663-90ef-38b8f4b19275\") " pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.864337 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-j5dc2\" (UID: \"cc653318-d8c5-4663-90ef-38b8f4b19275\") " pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.866557 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-j5dc2\" (UID: \"cc653318-d8c5-4663-90ef-38b8f4b19275\") " pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.890354 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjlmz\" (UniqueName: 
\"kubernetes.io/projected/cc653318-d8c5-4663-90ef-38b8f4b19275-kube-api-access-cjlmz\") pod \"dnsmasq-dns-7ff5475cc9-j5dc2\" (UID: \"cc653318-d8c5-4663-90ef-38b8f4b19275\") " pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" Jan 31 09:23:51 crc kubenswrapper[4830]: I0131 09:23:51.925669 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" Jan 31 09:23:52 crc kubenswrapper[4830]: I0131 09:23:52.467201 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-j5dc2"] Jan 31 09:23:53 crc kubenswrapper[4830]: I0131 09:23:53.105565 4830 generic.go:334] "Generic (PLEG): container finished" podID="cc653318-d8c5-4663-90ef-38b8f4b19275" containerID="33cae71d971a098a127b212b048279f41d15396ccf35f3ddc0013f5c9d3c6fbe" exitCode=0 Jan 31 09:23:53 crc kubenswrapper[4830]: I0131 09:23:53.105637 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" event={"ID":"cc653318-d8c5-4663-90ef-38b8f4b19275","Type":"ContainerDied","Data":"33cae71d971a098a127b212b048279f41d15396ccf35f3ddc0013f5c9d3c6fbe"} Jan 31 09:23:53 crc kubenswrapper[4830]: I0131 09:23:53.106087 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" event={"ID":"cc653318-d8c5-4663-90ef-38b8f4b19275","Type":"ContainerStarted","Data":"765b637d85bba8cd56653ccb257ed1c31e714fb731f0e8282ad086dc1a54c81a"} Jan 31 09:23:54 crc kubenswrapper[4830]: E0131 09:23:54.020358 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf97b7b49_f0d6_4f7c_a8ed_792cbfa32504.slice/crio-81ad4efb4d6ce18fe32a6ba05e07e2b4ad4998b237f01d1422dedc8b209c0f22.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf97b7b49_f0d6_4f7c_a8ed_792cbfa32504.slice/crio-conmon-81ad4efb4d6ce18fe32a6ba05e07e2b4ad4998b237f01d1422dedc8b209c0f22.scope\": RecentStats: unable to find data in memory cache]" Jan 31 09:23:54 crc kubenswrapper[4830]: I0131 09:23:54.119683 4830 generic.go:334] "Generic (PLEG): container finished" podID="f97b7b49-f0d6-4f7c-a8ed-792cbfa32504" containerID="81ad4efb4d6ce18fe32a6ba05e07e2b4ad4998b237f01d1422dedc8b209c0f22" exitCode=0 Jan 31 09:23:54 crc kubenswrapper[4830]: I0131 09:23:54.119771 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hccz6" event={"ID":"f97b7b49-f0d6-4f7c-a8ed-792cbfa32504","Type":"ContainerDied","Data":"81ad4efb4d6ce18fe32a6ba05e07e2b4ad4998b237f01d1422dedc8b209c0f22"} Jan 31 09:23:54 crc kubenswrapper[4830]: I0131 09:23:54.123188 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" event={"ID":"cc653318-d8c5-4663-90ef-38b8f4b19275","Type":"ContainerStarted","Data":"c5e00d42d4a86dfe091d8277956f521ee05a78845d63433c8924aa95a212cc99"} Jan 31 09:23:54 crc kubenswrapper[4830]: I0131 09:23:54.123473 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" Jan 31 09:23:54 crc kubenswrapper[4830]: I0131 09:23:54.165659 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" podStartSLOduration=3.16563606 podStartE2EDuration="3.16563606s" podCreationTimestamp="2026-01-31 09:23:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:23:54.162897652 +0000 UTC m=+1378.656260114" watchObservedRunningTime="2026-01-31 09:23:54.16563606 +0000 UTC m=+1378.658998502" Jan 31 09:23:55 crc kubenswrapper[4830]: I0131 09:23:55.672967 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-hccz6" Jan 31 09:23:55 crc kubenswrapper[4830]: I0131 09:23:55.769099 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f97b7b49-f0d6-4f7c-a8ed-792cbfa32504-combined-ca-bundle\") pod \"f97b7b49-f0d6-4f7c-a8ed-792cbfa32504\" (UID: \"f97b7b49-f0d6-4f7c-a8ed-792cbfa32504\") " Jan 31 09:23:55 crc kubenswrapper[4830]: I0131 09:23:55.769296 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f97b7b49-f0d6-4f7c-a8ed-792cbfa32504-config-data\") pod \"f97b7b49-f0d6-4f7c-a8ed-792cbfa32504\" (UID: \"f97b7b49-f0d6-4f7c-a8ed-792cbfa32504\") " Jan 31 09:23:55 crc kubenswrapper[4830]: I0131 09:23:55.769456 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhsbn\" (UniqueName: \"kubernetes.io/projected/f97b7b49-f0d6-4f7c-a8ed-792cbfa32504-kube-api-access-dhsbn\") pod \"f97b7b49-f0d6-4f7c-a8ed-792cbfa32504\" (UID: \"f97b7b49-f0d6-4f7c-a8ed-792cbfa32504\") " Jan 31 09:23:55 crc kubenswrapper[4830]: I0131 09:23:55.785130 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f97b7b49-f0d6-4f7c-a8ed-792cbfa32504-kube-api-access-dhsbn" (OuterVolumeSpecName: "kube-api-access-dhsbn") pod "f97b7b49-f0d6-4f7c-a8ed-792cbfa32504" (UID: "f97b7b49-f0d6-4f7c-a8ed-792cbfa32504"). InnerVolumeSpecName "kube-api-access-dhsbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:55 crc kubenswrapper[4830]: I0131 09:23:55.853603 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f97b7b49-f0d6-4f7c-a8ed-792cbfa32504-config-data" (OuterVolumeSpecName: "config-data") pod "f97b7b49-f0d6-4f7c-a8ed-792cbfa32504" (UID: "f97b7b49-f0d6-4f7c-a8ed-792cbfa32504"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:23:55 crc kubenswrapper[4830]: I0131 09:23:55.854191 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f97b7b49-f0d6-4f7c-a8ed-792cbfa32504-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f97b7b49-f0d6-4f7c-a8ed-792cbfa32504" (UID: "f97b7b49-f0d6-4f7c-a8ed-792cbfa32504"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:23:55 crc kubenswrapper[4830]: I0131 09:23:55.872384 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f97b7b49-f0d6-4f7c-a8ed-792cbfa32504-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:55 crc kubenswrapper[4830]: I0131 09:23:55.872438 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f97b7b49-f0d6-4f7c-a8ed-792cbfa32504-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:55 crc kubenswrapper[4830]: I0131 09:23:55.872450 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhsbn\" (UniqueName: \"kubernetes.io/projected/f97b7b49-f0d6-4f7c-a8ed-792cbfa32504-kube-api-access-dhsbn\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.149146 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hccz6" event={"ID":"f97b7b49-f0d6-4f7c-a8ed-792cbfa32504","Type":"ContainerDied","Data":"8a1d3dfec006e608c43bd1ba34f56de1b51c948c99dae36fc984199c646d18f5"} Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.149204 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a1d3dfec006e608c43bd1ba34f56de1b51c948c99dae36fc984199c646d18f5" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.149274 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-hccz6" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.411807 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-j5dc2"] Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.412524 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" podUID="cc653318-d8c5-4663-90ef-38b8f4b19275" containerName="dnsmasq-dns" containerID="cri-o://c5e00d42d4a86dfe091d8277956f521ee05a78845d63433c8924aa95a212cc99" gracePeriod=10 Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.447820 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-blgpz"] Jan 31 09:23:56 crc kubenswrapper[4830]: E0131 09:23:56.448455 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f97b7b49-f0d6-4f7c-a8ed-792cbfa32504" containerName="keystone-db-sync" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.448480 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97b7b49-f0d6-4f7c-a8ed-792cbfa32504" containerName="keystone-db-sync" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.448711 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f97b7b49-f0d6-4f7c-a8ed-792cbfa32504" containerName="keystone-db-sync" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.450037 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-blgpz" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.458390 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.458706 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.458978 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.459133 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r84d8" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.459254 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.466852 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-blgpz"] Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.497421 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2786z\" (UniqueName: \"kubernetes.io/projected/29262d41-4dc9-4d3e-9d2d-411076ab11c6-kube-api-access-2786z\") pod \"keystone-bootstrap-blgpz\" (UID: \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\") " pod="openstack/keystone-bootstrap-blgpz" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.497476 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-scripts\") pod \"keystone-bootstrap-blgpz\" (UID: \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\") " pod="openstack/keystone-bootstrap-blgpz" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.497508 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-config-data\") pod \"keystone-bootstrap-blgpz\" (UID: \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\") " pod="openstack/keystone-bootstrap-blgpz" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.497531 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-fernet-keys\") pod \"keystone-bootstrap-blgpz\" (UID: \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\") " pod="openstack/keystone-bootstrap-blgpz" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.497554 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-credential-keys\") pod \"keystone-bootstrap-blgpz\" (UID: \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\") " pod="openstack/keystone-bootstrap-blgpz" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.497677 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-combined-ca-bundle\") pod \"keystone-bootstrap-blgpz\" (UID: \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\") " pod="openstack/keystone-bootstrap-blgpz" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.566225 4830 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt"] Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.571119 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.584890 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt"] Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.603703 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-combined-ca-bundle\") pod \"keystone-bootstrap-blgpz\" (UID: \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\") " pod="openstack/keystone-bootstrap-blgpz" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.603920 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2786z\" (UniqueName: \"kubernetes.io/projected/29262d41-4dc9-4d3e-9d2d-411076ab11c6-kube-api-access-2786z\") pod \"keystone-bootstrap-blgpz\" (UID: \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\") " pod="openstack/keystone-bootstrap-blgpz" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.603954 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-scripts\") pod \"keystone-bootstrap-blgpz\" (UID: \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\") " pod="openstack/keystone-bootstrap-blgpz" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.603988 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-config-data\") pod \"keystone-bootstrap-blgpz\" (UID: \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\") " pod="openstack/keystone-bootstrap-blgpz" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.604015 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-fernet-keys\") pod \"keystone-bootstrap-blgpz\" (UID: \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\") " pod="openstack/keystone-bootstrap-blgpz" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.604038 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-credential-keys\") pod \"keystone-bootstrap-blgpz\" (UID: \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\") " pod="openstack/keystone-bootstrap-blgpz" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.623428 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-fernet-keys\") pod \"keystone-bootstrap-blgpz\" (UID: \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\") " pod="openstack/keystone-bootstrap-blgpz" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.623800 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-scripts\") pod \"keystone-bootstrap-blgpz\" (UID: \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\") " pod="openstack/keystone-bootstrap-blgpz" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.624386 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-combined-ca-bundle\") pod \"keystone-bootstrap-blgpz\" (UID: \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\") " pod="openstack/keystone-bootstrap-blgpz" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.674595 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-credential-keys\") pod \"keystone-bootstrap-blgpz\" (UID: \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\") " pod="openstack/keystone-bootstrap-blgpz" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.676917 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-config-data\") pod \"keystone-bootstrap-blgpz\" (UID: \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\") " pod="openstack/keystone-bootstrap-blgpz" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.733335 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-v8fbt\" (UID: \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.735004 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-v8fbt\" (UID: \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.735166 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc5jd\" (UniqueName: \"kubernetes.io/projected/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-kube-api-access-cc5jd\") pod \"dnsmasq-dns-5c5cc7c5ff-v8fbt\" (UID: \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.735227 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-v8fbt\" (UID: \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.735403 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-v8fbt\" (UID: \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.735480 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-config\") pod \"dnsmasq-dns-5c5cc7c5ff-v8fbt\" (UID: \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.741922 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2786z\" (UniqueName: \"kubernetes.io/projected/29262d41-4dc9-4d3e-9d2d-411076ab11c6-kube-api-access-2786z\") pod \"keystone-bootstrap-blgpz\" (UID: \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\") " pod="openstack/keystone-bootstrap-blgpz" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.782825 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-hh79w"] Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.787413 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-blgpz" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.846509 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-hh79w" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.853247 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-scjhm" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.853449 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.857049 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-v8fbt\" (UID: \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.857151 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-v8fbt\" (UID: \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.857192 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc5jd\" (UniqueName: \"kubernetes.io/projected/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-kube-api-access-cc5jd\") pod \"dnsmasq-dns-5c5cc7c5ff-v8fbt\" (UID: \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.857225 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-v8fbt\" (UID: \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.857284 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-v8fbt\" (UID: \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.857323 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-config\") pod \"dnsmasq-dns-5c5cc7c5ff-v8fbt\" (UID: \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.858574 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-config\") pod \"dnsmasq-dns-5c5cc7c5ff-v8fbt\" (UID: \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.861527 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-v8fbt\" (UID: \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.866862 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-v8fbt\" (UID: \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.867432 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-v8fbt\" (UID: \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.876410 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-v8fbt\" (UID: \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.918936 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-hh79w"] Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.952432 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc5jd\" (UniqueName: \"kubernetes.io/projected/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-kube-api-access-cc5jd\") pod \"dnsmasq-dns-5c5cc7c5ff-v8fbt\" (UID: \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.974666 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-pr2kp"] Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.974910 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6324b6ba-4288-44f4-bf87-1a4356c1a9f0-config-data\") pod \"heat-db-sync-hh79w\" (UID: \"6324b6ba-4288-44f4-bf87-1a4356c1a9f0\") " pod="openstack/heat-db-sync-hh79w" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.975076 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6324b6ba-4288-44f4-bf87-1a4356c1a9f0-combined-ca-bundle\") pod \"heat-db-sync-hh79w\" (UID: \"6324b6ba-4288-44f4-bf87-1a4356c1a9f0\") " pod="openstack/heat-db-sync-hh79w" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.975499 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh7dj\" (UniqueName: \"kubernetes.io/projected/6324b6ba-4288-44f4-bf87-1a4356c1a9f0-kube-api-access-bh7dj\") pod \"heat-db-sync-hh79w\" (UID: 
\"6324b6ba-4288-44f4-bf87-1a4356c1a9f0\") " pod="openstack/heat-db-sync-hh79w" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.987331 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pr2kp" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.993660 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5m8j5" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.994074 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 31 09:23:56 crc kubenswrapper[4830]: I0131 09:23:56.997164 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.072364 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-pr2kp"] Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.077394 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6324b6ba-4288-44f4-bf87-1a4356c1a9f0-combined-ca-bundle\") pod \"heat-db-sync-hh79w\" (UID: \"6324b6ba-4288-44f4-bf87-1a4356c1a9f0\") " pod="openstack/heat-db-sync-hh79w" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.077533 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh7dj\" (UniqueName: \"kubernetes.io/projected/6324b6ba-4288-44f4-bf87-1a4356c1a9f0-kube-api-access-bh7dj\") pod \"heat-db-sync-hh79w\" (UID: \"6324b6ba-4288-44f4-bf87-1a4356c1a9f0\") " pod="openstack/heat-db-sync-hh79w" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.077640 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6324b6ba-4288-44f4-bf87-1a4356c1a9f0-config-data\") pod \"heat-db-sync-hh79w\" (UID: \"6324b6ba-4288-44f4-bf87-1a4356c1a9f0\") " pod="openstack/heat-db-sync-hh79w" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.093900 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-ztgnf"] Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.095697 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6324b6ba-4288-44f4-bf87-1a4356c1a9f0-config-data\") pod \"heat-db-sync-hh79w\" (UID: \"6324b6ba-4288-44f4-bf87-1a4356c1a9f0\") " pod="openstack/heat-db-sync-hh79w" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.102785 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6324b6ba-4288-44f4-bf87-1a4356c1a9f0-combined-ca-bundle\") pod \"heat-db-sync-hh79w\" (UID: \"6324b6ba-4288-44f4-bf87-1a4356c1a9f0\") " pod="openstack/heat-db-sync-hh79w" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.104028 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-ztgnf" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.113631 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.113920 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.114165 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-brn7t" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.141539 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh7dj\" (UniqueName: \"kubernetes.io/projected/6324b6ba-4288-44f4-bf87-1a4356c1a9f0-kube-api-access-bh7dj\") pod \"heat-db-sync-hh79w\" (UID: \"6324b6ba-4288-44f4-bf87-1a4356c1a9f0\") " pod="openstack/heat-db-sync-hh79w" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.182457 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb9aed03-7e56-43de-92fc-3ac6352194af-combined-ca-bundle\") pod \"barbican-db-sync-pr2kp\" (UID: \"bb9aed03-7e56-43de-92fc-3ac6352194af\") " pod="openstack/barbican-db-sync-pr2kp" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.184504 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bb9aed03-7e56-43de-92fc-3ac6352194af-db-sync-config-data\") pod \"barbican-db-sync-pr2kp\" (UID: \"bb9aed03-7e56-43de-92fc-3ac6352194af\") " pod="openstack/barbican-db-sync-pr2kp" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.184542 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb5hp\" (UniqueName: \"kubernetes.io/projected/bb9aed03-7e56-43de-92fc-3ac6352194af-kube-api-access-zb5hp\") pod \"barbican-db-sync-pr2kp\" (UID: \"bb9aed03-7e56-43de-92fc-3ac6352194af\") " pod="openstack/barbican-db-sync-pr2kp" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.194941 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-ztgnf"] Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.210681 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-w6kxz"] Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.213003 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-w6kxz" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.217338 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.217578 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.217781 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-lrffb" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.234343 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-t2klw"] Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.237081 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-t2klw" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.254213 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-vg9sh" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.254531 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.255087 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.261313 4830 generic.go:334] "Generic (PLEG): container finished" podID="cc653318-d8c5-4663-90ef-38b8f4b19275" containerID="c5e00d42d4a86dfe091d8277956f521ee05a78845d63433c8924aa95a212cc99" exitCode=0 Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.261396 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" event={"ID":"cc653318-d8c5-4663-90ef-38b8f4b19275","Type":"ContainerDied","Data":"c5e00d42d4a86dfe091d8277956f521ee05a78845d63433c8924aa95a212cc99"} Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.272661 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-w6kxz"] Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.286455 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0617092f-40a9-4d3d-b472-f284a2b24000-combined-ca-bundle\") pod \"cinder-db-sync-w6kxz\" (UID: \"0617092f-40a9-4d3d-b472-f284a2b24000\") " pod="openstack/cinder-db-sync-w6kxz" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.286569 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0617092f-40a9-4d3d-b472-f284a2b24000-config-data\") pod \"cinder-db-sync-w6kxz\" (UID: \"0617092f-40a9-4d3d-b472-f284a2b24000\") " pod="openstack/cinder-db-sync-w6kxz" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.286633 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0617092f-40a9-4d3d-b472-f284a2b24000-etc-machine-id\") pod \"cinder-db-sync-w6kxz\" (UID: \"0617092f-40a9-4d3d-b472-f284a2b24000\") " pod="openstack/cinder-db-sync-w6kxz" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.286905 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce550202-087a-49b1-8796-10f03f0ab9be-combined-ca-bundle\") pod \"neutron-db-sync-ztgnf\" (UID: \"ce550202-087a-49b1-8796-10f03f0ab9be\") " pod="openstack/neutron-db-sync-ztgnf" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.286971 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5w5b\" (UniqueName: \"kubernetes.io/projected/0617092f-40a9-4d3d-b472-f284a2b24000-kube-api-access-q5w5b\") pod \"cinder-db-sync-w6kxz\" (UID: \"0617092f-40a9-4d3d-b472-f284a2b24000\") " pod="openstack/cinder-db-sync-w6kxz" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.286996 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8de8318-1eda-43cc-b522-86d6492c6376-scripts\") pod 
\"placement-db-sync-t2klw\" (UID: \"b8de8318-1eda-43cc-b522-86d6492c6376\") " pod="openstack/placement-db-sync-t2klw" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.287019 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0617092f-40a9-4d3d-b472-f284a2b24000-scripts\") pod \"cinder-db-sync-w6kxz\" (UID: \"0617092f-40a9-4d3d-b472-f284a2b24000\") " pod="openstack/cinder-db-sync-w6kxz" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.287051 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8de8318-1eda-43cc-b522-86d6492c6376-combined-ca-bundle\") pod \"placement-db-sync-t2klw\" (UID: \"b8de8318-1eda-43cc-b522-86d6492c6376\") " pod="openstack/placement-db-sync-t2klw" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.287109 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b28lz\" (UniqueName: \"kubernetes.io/projected/b8de8318-1eda-43cc-b522-86d6492c6376-kube-api-access-b28lz\") pod \"placement-db-sync-t2klw\" (UID: \"b8de8318-1eda-43cc-b522-86d6492c6376\") " pod="openstack/placement-db-sync-t2klw" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.287139 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bb9aed03-7e56-43de-92fc-3ac6352194af-db-sync-config-data\") pod \"barbican-db-sync-pr2kp\" (UID: \"bb9aed03-7e56-43de-92fc-3ac6352194af\") " pod="openstack/barbican-db-sync-pr2kp" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.287176 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb5hp\" (UniqueName: \"kubernetes.io/projected/bb9aed03-7e56-43de-92fc-3ac6352194af-kube-api-access-zb5hp\") pod \"barbican-db-sync-pr2kp\" (UID: \"bb9aed03-7e56-43de-92fc-3ac6352194af\") " pod="openstack/barbican-db-sync-pr2kp" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.287195 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbzp5\" (UniqueName: \"kubernetes.io/projected/ce550202-087a-49b1-8796-10f03f0ab9be-kube-api-access-mbzp5\") pod \"neutron-db-sync-ztgnf\" (UID: \"ce550202-087a-49b1-8796-10f03f0ab9be\") " pod="openstack/neutron-db-sync-ztgnf" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.287247 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8de8318-1eda-43cc-b522-86d6492c6376-config-data\") pod \"placement-db-sync-t2klw\" (UID: \"b8de8318-1eda-43cc-b522-86d6492c6376\") " pod="openstack/placement-db-sync-t2klw" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.287272 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ce550202-087a-49b1-8796-10f03f0ab9be-config\") pod \"neutron-db-sync-ztgnf\" (UID: \"ce550202-087a-49b1-8796-10f03f0ab9be\") " pod="openstack/neutron-db-sync-ztgnf" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.287301 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb9aed03-7e56-43de-92fc-3ac6352194af-combined-ca-bundle\") pod \"barbican-db-sync-pr2kp\" (UID: 
\"bb9aed03-7e56-43de-92fc-3ac6352194af\") " pod="openstack/barbican-db-sync-pr2kp" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.287319 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0617092f-40a9-4d3d-b472-f284a2b24000-db-sync-config-data\") pod \"cinder-db-sync-w6kxz\" (UID: \"0617092f-40a9-4d3d-b472-f284a2b24000\") " pod="openstack/cinder-db-sync-w6kxz" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.287346 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8de8318-1eda-43cc-b522-86d6492c6376-logs\") pod \"placement-db-sync-t2klw\" (UID: \"b8de8318-1eda-43cc-b522-86d6492c6376\") " pod="openstack/placement-db-sync-t2klw" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.296370 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bb9aed03-7e56-43de-92fc-3ac6352194af-db-sync-config-data\") pod \"barbican-db-sync-pr2kp\" (UID: \"bb9aed03-7e56-43de-92fc-3ac6352194af\") " pod="openstack/barbican-db-sync-pr2kp" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.298023 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb9aed03-7e56-43de-92fc-3ac6352194af-combined-ca-bundle\") pod \"barbican-db-sync-pr2kp\" (UID: \"bb9aed03-7e56-43de-92fc-3ac6352194af\") " pod="openstack/barbican-db-sync-pr2kp" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.323844 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb5hp\" (UniqueName: \"kubernetes.io/projected/bb9aed03-7e56-43de-92fc-3ac6352194af-kube-api-access-zb5hp\") pod \"barbican-db-sync-pr2kp\" (UID: \"bb9aed03-7e56-43de-92fc-3ac6352194af\") " pod="openstack/barbican-db-sync-pr2kp" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.329709 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt"] Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.337587 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-hh79w" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.383469 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-pr2kp" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.389512 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0617092f-40a9-4d3d-b472-f284a2b24000-combined-ca-bundle\") pod \"cinder-db-sync-w6kxz\" (UID: \"0617092f-40a9-4d3d-b472-f284a2b24000\") " pod="openstack/cinder-db-sync-w6kxz" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.389591 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0617092f-40a9-4d3d-b472-f284a2b24000-config-data\") pod \"cinder-db-sync-w6kxz\" (UID: \"0617092f-40a9-4d3d-b472-f284a2b24000\") " pod="openstack/cinder-db-sync-w6kxz" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.389860 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0617092f-40a9-4d3d-b472-f284a2b24000-etc-machine-id\") pod \"cinder-db-sync-w6kxz\" (UID: \"0617092f-40a9-4d3d-b472-f284a2b24000\") " pod="openstack/cinder-db-sync-w6kxz" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.389917 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce550202-087a-49b1-8796-10f03f0ab9be-combined-ca-bundle\") pod \"neutron-db-sync-ztgnf\" (UID: \"ce550202-087a-49b1-8796-10f03f0ab9be\") " pod="openstack/neutron-db-sync-ztgnf" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.389972 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5w5b\" (UniqueName: \"kubernetes.io/projected/0617092f-40a9-4d3d-b472-f284a2b24000-kube-api-access-q5w5b\") pod \"cinder-db-sync-w6kxz\" (UID: \"0617092f-40a9-4d3d-b472-f284a2b24000\") " pod="openstack/cinder-db-sync-w6kxz" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.390002 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8de8318-1eda-43cc-b522-86d6492c6376-scripts\") pod \"placement-db-sync-t2klw\" (UID: \"b8de8318-1eda-43cc-b522-86d6492c6376\") " pod="openstack/placement-db-sync-t2klw" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.390041 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0617092f-40a9-4d3d-b472-f284a2b24000-scripts\") pod \"cinder-db-sync-w6kxz\" (UID: \"0617092f-40a9-4d3d-b472-f284a2b24000\") " pod="openstack/cinder-db-sync-w6kxz" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.390088 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8de8318-1eda-43cc-b522-86d6492c6376-combined-ca-bundle\") pod \"placement-db-sync-t2klw\" (UID: \"b8de8318-1eda-43cc-b522-86d6492c6376\") " pod="openstack/placement-db-sync-t2klw" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.390188 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b28lz\" (UniqueName: \"kubernetes.io/projected/b8de8318-1eda-43cc-b522-86d6492c6376-kube-api-access-b28lz\") pod \"placement-db-sync-t2klw\" (UID: \"b8de8318-1eda-43cc-b522-86d6492c6376\") " pod="openstack/placement-db-sync-t2klw" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.390232 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mbzp5\" (UniqueName: \"kubernetes.io/projected/ce550202-087a-49b1-8796-10f03f0ab9be-kube-api-access-mbzp5\") pod \"neutron-db-sync-ztgnf\" (UID: \"ce550202-087a-49b1-8796-10f03f0ab9be\") " pod="openstack/neutron-db-sync-ztgnf" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.390291 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8de8318-1eda-43cc-b522-86d6492c6376-config-data\") pod \"placement-db-sync-t2klw\" (UID: \"b8de8318-1eda-43cc-b522-86d6492c6376\") " pod="openstack/placement-db-sync-t2klw" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.390325 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ce550202-087a-49b1-8796-10f03f0ab9be-config\") pod \"neutron-db-sync-ztgnf\" (UID: \"ce550202-087a-49b1-8796-10f03f0ab9be\") " pod="openstack/neutron-db-sync-ztgnf" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.390364 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0617092f-40a9-4d3d-b472-f284a2b24000-db-sync-config-data\") pod \"cinder-db-sync-w6kxz\" (UID: \"0617092f-40a9-4d3d-b472-f284a2b24000\") " pod="openstack/cinder-db-sync-w6kxz" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.390391 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8de8318-1eda-43cc-b522-86d6492c6376-logs\") pod \"placement-db-sync-t2klw\" (UID: \"b8de8318-1eda-43cc-b522-86d6492c6376\") " pod="openstack/placement-db-sync-t2klw" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.391670 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8de8318-1eda-43cc-b522-86d6492c6376-logs\") pod \"placement-db-sync-t2klw\" (UID: \"b8de8318-1eda-43cc-b522-86d6492c6376\") " pod="openstack/placement-db-sync-t2klw" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.392096 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0617092f-40a9-4d3d-b472-f284a2b24000-etc-machine-id\") pod \"cinder-db-sync-w6kxz\" (UID: \"0617092f-40a9-4d3d-b472-f284a2b24000\") " pod="openstack/cinder-db-sync-w6kxz" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.418947 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0617092f-40a9-4d3d-b472-f284a2b24000-db-sync-config-data\") pod \"cinder-db-sync-w6kxz\" (UID: \"0617092f-40a9-4d3d-b472-f284a2b24000\") " pod="openstack/cinder-db-sync-w6kxz" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.425035 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8de8318-1eda-43cc-b522-86d6492c6376-config-data\") pod \"placement-db-sync-t2klw\" (UID: \"b8de8318-1eda-43cc-b522-86d6492c6376\") " pod="openstack/placement-db-sync-t2klw" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.430841 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce550202-087a-49b1-8796-10f03f0ab9be-combined-ca-bundle\") pod \"neutron-db-sync-ztgnf\" (UID: \"ce550202-087a-49b1-8796-10f03f0ab9be\") " 
pod="openstack/neutron-db-sync-ztgnf" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.440091 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0617092f-40a9-4d3d-b472-f284a2b24000-combined-ca-bundle\") pod \"cinder-db-sync-w6kxz\" (UID: \"0617092f-40a9-4d3d-b472-f284a2b24000\") " pod="openstack/cinder-db-sync-w6kxz" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.441634 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbzp5\" (UniqueName: \"kubernetes.io/projected/ce550202-087a-49b1-8796-10f03f0ab9be-kube-api-access-mbzp5\") pod \"neutron-db-sync-ztgnf\" (UID: \"ce550202-087a-49b1-8796-10f03f0ab9be\") " pod="openstack/neutron-db-sync-ztgnf" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.441538 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b28lz\" (UniqueName: \"kubernetes.io/projected/b8de8318-1eda-43cc-b522-86d6492c6376-kube-api-access-b28lz\") pod \"placement-db-sync-t2klw\" (UID: \"b8de8318-1eda-43cc-b522-86d6492c6376\") " pod="openstack/placement-db-sync-t2klw" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.442222 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0617092f-40a9-4d3d-b472-f284a2b24000-scripts\") pod \"cinder-db-sync-w6kxz\" (UID: \"0617092f-40a9-4d3d-b472-f284a2b24000\") " pod="openstack/cinder-db-sync-w6kxz" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.444393 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0617092f-40a9-4d3d-b472-f284a2b24000-config-data\") pod \"cinder-db-sync-w6kxz\" (UID: \"0617092f-40a9-4d3d-b472-f284a2b24000\") " pod="openstack/cinder-db-sync-w6kxz" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.449781 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8de8318-1eda-43cc-b522-86d6492c6376-combined-ca-bundle\") pod \"placement-db-sync-t2klw\" (UID: \"b8de8318-1eda-43cc-b522-86d6492c6376\") " pod="openstack/placement-db-sync-t2klw" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.453334 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-t2klw"] Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.456864 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8de8318-1eda-43cc-b522-86d6492c6376-scripts\") pod \"placement-db-sync-t2klw\" (UID: \"b8de8318-1eda-43cc-b522-86d6492c6376\") " pod="openstack/placement-db-sync-t2klw" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.464666 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ce550202-087a-49b1-8796-10f03f0ab9be-config\") pod \"neutron-db-sync-ztgnf\" (UID: \"ce550202-087a-49b1-8796-10f03f0ab9be\") " pod="openstack/neutron-db-sync-ztgnf" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.472113 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5w5b\" (UniqueName: \"kubernetes.io/projected/0617092f-40a9-4d3d-b472-f284a2b24000-kube-api-access-q5w5b\") pod \"cinder-db-sync-w6kxz\" (UID: \"0617092f-40a9-4d3d-b472-f284a2b24000\") " pod="openstack/cinder-db-sync-w6kxz" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 
09:23:57.516715 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ztgnf" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.523594 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-7w87z"] Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.526186 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.539999 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.546597 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-7w87z"] Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.565837 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-w6kxz" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.595503 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjlmz\" (UniqueName: \"kubernetes.io/projected/cc653318-d8c5-4663-90ef-38b8f4b19275-kube-api-access-cjlmz\") pod \"cc653318-d8c5-4663-90ef-38b8f4b19275\" (UID: \"cc653318-d8c5-4663-90ef-38b8f4b19275\") " Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.595584 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-dns-swift-storage-0\") pod \"cc653318-d8c5-4663-90ef-38b8f4b19275\" (UID: \"cc653318-d8c5-4663-90ef-38b8f4b19275\") " Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.595634 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-dns-svc\") pod \"cc653318-d8c5-4663-90ef-38b8f4b19275\" (UID: \"cc653318-d8c5-4663-90ef-38b8f4b19275\") " Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.597218 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-ovsdbserver-nb\") pod \"cc653318-d8c5-4663-90ef-38b8f4b19275\" (UID: \"cc653318-d8c5-4663-90ef-38b8f4b19275\") " Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.597442 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-ovsdbserver-sb\") pod \"cc653318-d8c5-4663-90ef-38b8f4b19275\" (UID: \"cc653318-d8c5-4663-90ef-38b8f4b19275\") " Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.597522 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-config\") pod \"cc653318-d8c5-4663-90ef-38b8f4b19275\" (UID: \"cc653318-d8c5-4663-90ef-38b8f4b19275\") " Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.603515 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc653318-d8c5-4663-90ef-38b8f4b19275-kube-api-access-cjlmz" (OuterVolumeSpecName: "kube-api-access-cjlmz") pod "cc653318-d8c5-4663-90ef-38b8f4b19275" (UID: "cc653318-d8c5-4663-90ef-38b8f4b19275"). InnerVolumeSpecName "kube-api-access-cjlmz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.607764 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-7w87z\" (UID: \"6d941bfc-a5bd-4764-8e53-a77414f25a21\") " pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.607868 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-7w87z\" (UID: \"6d941bfc-a5bd-4764-8e53-a77414f25a21\") " pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.608105 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-7w87z\" (UID: \"6d941bfc-a5bd-4764-8e53-a77414f25a21\") " pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.608411 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-7w87z\" (UID: \"6d941bfc-a5bd-4764-8e53-a77414f25a21\") " pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.608452 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-config\") pod \"dnsmasq-dns-8b5c85b87-7w87z\" (UID: \"6d941bfc-a5bd-4764-8e53-a77414f25a21\") " pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.608500 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dpkx\" (UniqueName: \"kubernetes.io/projected/6d941bfc-a5bd-4764-8e53-a77414f25a21-kube-api-access-7dpkx\") pod \"dnsmasq-dns-8b5c85b87-7w87z\" (UID: \"6d941bfc-a5bd-4764-8e53-a77414f25a21\") " pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.608696 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjlmz\" (UniqueName: \"kubernetes.io/projected/cc653318-d8c5-4663-90ef-38b8f4b19275-kube-api-access-cjlmz\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.617555 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-t2klw" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.632566 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:23:57 crc kubenswrapper[4830]: E0131 09:23:57.635022 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc653318-d8c5-4663-90ef-38b8f4b19275" containerName="dnsmasq-dns" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.635149 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc653318-d8c5-4663-90ef-38b8f4b19275" containerName="dnsmasq-dns" Jan 31 09:23:57 crc kubenswrapper[4830]: E0131 09:23:57.635282 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc653318-d8c5-4663-90ef-38b8f4b19275" containerName="init" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.635451 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc653318-d8c5-4663-90ef-38b8f4b19275" containerName="init" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.635826 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc653318-d8c5-4663-90ef-38b8f4b19275" containerName="dnsmasq-dns" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.639021 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.650374 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.660860 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.674929 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.708098 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cc653318-d8c5-4663-90ef-38b8f4b19275" (UID: "cc653318-d8c5-4663-90ef-38b8f4b19275"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.820544 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39688f84-c227-4658-aee1-ce5e5d450ca1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") " pod="openstack/ceilometer-0" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.821113 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-7w87z\" (UID: \"6d941bfc-a5bd-4764-8e53-a77414f25a21\") " pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.821271 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwkxh\" (UniqueName: \"kubernetes.io/projected/39688f84-c227-4658-aee1-ce5e5d450ca1-kube-api-access-zwkxh\") pod \"ceilometer-0\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") " pod="openstack/ceilometer-0" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.821323 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39688f84-c227-4658-aee1-ce5e5d450ca1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") " pod="openstack/ceilometer-0" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.821429 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39688f84-c227-4658-aee1-ce5e5d450ca1-run-httpd\") pod \"ceilometer-0\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") " pod="openstack/ceilometer-0" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.821499 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-7w87z\" (UID: \"6d941bfc-a5bd-4764-8e53-a77414f25a21\") " pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.821528 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-config\") pod \"dnsmasq-dns-8b5c85b87-7w87z\" (UID: \"6d941bfc-a5bd-4764-8e53-a77414f25a21\") " pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.821574 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dpkx\" (UniqueName: \"kubernetes.io/projected/6d941bfc-a5bd-4764-8e53-a77414f25a21-kube-api-access-7dpkx\") pod \"dnsmasq-dns-8b5c85b87-7w87z\" (UID: \"6d941bfc-a5bd-4764-8e53-a77414f25a21\") " pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.821681 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39688f84-c227-4658-aee1-ce5e5d450ca1-config-data\") pod \"ceilometer-0\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") " pod="openstack/ceilometer-0" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.821774 4830 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39688f84-c227-4658-aee1-ce5e5d450ca1-log-httpd\") pod \"ceilometer-0\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") " pod="openstack/ceilometer-0" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.821799 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-7w87z\" (UID: \"6d941bfc-a5bd-4764-8e53-a77414f25a21\") " pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.821854 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-7w87z\" (UID: \"6d941bfc-a5bd-4764-8e53-a77414f25a21\") " pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.821894 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39688f84-c227-4658-aee1-ce5e5d450ca1-scripts\") pod \"ceilometer-0\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") " pod="openstack/ceilometer-0" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.821973 4830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.822873 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-7w87z\" (UID: \"6d941bfc-a5bd-4764-8e53-a77414f25a21\") " pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.823407 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-7w87z\" (UID: \"6d941bfc-a5bd-4764-8e53-a77414f25a21\") " pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.824681 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-7w87z\" (UID: \"6d941bfc-a5bd-4764-8e53-a77414f25a21\") " pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.825272 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-7w87z\" (UID: \"6d941bfc-a5bd-4764-8e53-a77414f25a21\") " pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.825692 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-config" (OuterVolumeSpecName: "config") pod "cc653318-d8c5-4663-90ef-38b8f4b19275" (UID: "cc653318-d8c5-4663-90ef-38b8f4b19275"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.857396 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-config\") pod \"dnsmasq-dns-8b5c85b87-7w87z\" (UID: \"6d941bfc-a5bd-4764-8e53-a77414f25a21\") " pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.891328 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dpkx\" (UniqueName: \"kubernetes.io/projected/6d941bfc-a5bd-4764-8e53-a77414f25a21-kube-api-access-7dpkx\") pod \"dnsmasq-dns-8b5c85b87-7w87z\" (UID: \"6d941bfc-a5bd-4764-8e53-a77414f25a21\") " pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.927553 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39688f84-c227-4658-aee1-ce5e5d450ca1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") " pod="openstack/ceilometer-0" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.927677 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwkxh\" (UniqueName: \"kubernetes.io/projected/39688f84-c227-4658-aee1-ce5e5d450ca1-kube-api-access-zwkxh\") pod \"ceilometer-0\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") " pod="openstack/ceilometer-0" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.927705 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39688f84-c227-4658-aee1-ce5e5d450ca1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") " pod="openstack/ceilometer-0" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.927760 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39688f84-c227-4658-aee1-ce5e5d450ca1-run-httpd\") pod \"ceilometer-0\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") " pod="openstack/ceilometer-0" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.927826 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39688f84-c227-4658-aee1-ce5e5d450ca1-config-data\") pod \"ceilometer-0\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") " pod="openstack/ceilometer-0" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.935637 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39688f84-c227-4658-aee1-ce5e5d450ca1-log-httpd\") pod \"ceilometer-0\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") " pod="openstack/ceilometer-0" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.935788 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39688f84-c227-4658-aee1-ce5e5d450ca1-scripts\") pod \"ceilometer-0\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") " pod="openstack/ceilometer-0" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.936114 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-config\") on node \"crc\" 
DevicePath \"\"" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.942900 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39688f84-c227-4658-aee1-ce5e5d450ca1-log-httpd\") pod \"ceilometer-0\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") " pod="openstack/ceilometer-0" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.948493 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39688f84-c227-4658-aee1-ce5e5d450ca1-run-httpd\") pod \"ceilometer-0\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") " pod="openstack/ceilometer-0" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.967667 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39688f84-c227-4658-aee1-ce5e5d450ca1-config-data\") pod \"ceilometer-0\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") " pod="openstack/ceilometer-0" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.974014 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39688f84-c227-4658-aee1-ce5e5d450ca1-scripts\") pod \"ceilometer-0\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") " pod="openstack/ceilometer-0" Jan 31 09:23:57 crc kubenswrapper[4830]: I0131 09:23:57.974413 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39688f84-c227-4658-aee1-ce5e5d450ca1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") " pod="openstack/ceilometer-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.022061 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.027746 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.043832 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.063854 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cc653318-d8c5-4663-90ef-38b8f4b19275" (UID: "cc653318-d8c5-4663-90ef-38b8f4b19275"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.064251 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.064483 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.064627 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-8sfkk" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.064498 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.064970 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39688f84-c227-4658-aee1-ce5e5d450ca1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") " pod="openstack/ceilometer-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.090998 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.096337 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwkxh\" (UniqueName: \"kubernetes.io/projected/39688f84-c227-4658-aee1-ce5e5d450ca1-kube-api-access-zwkxh\") pod \"ceilometer-0\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") " pod="openstack/ceilometer-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.108672 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.200050 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.212837 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.221872 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.247897 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.248248 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-config-data\") pod \"glance-default-external-api-0\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") " pod="openstack/glance-default-external-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.248508 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") " pod="openstack/glance-default-external-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.248548 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") " pod="openstack/glance-default-external-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.248600 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-534498b2-d616-470f-a82d-6fd5620e2438\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-534498b2-d616-470f-a82d-6fd5620e2438\") pod \"glance-default-external-api-0\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") " pod="openstack/glance-default-external-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.248967 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-scripts\") pod \"glance-default-external-api-0\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") " pod="openstack/glance-default-external-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.249100 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-logs\") pod \"glance-default-external-api-0\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") " pod="openstack/glance-default-external-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.249191 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") " pod="openstack/glance-default-external-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.249585 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7txf7\" (UniqueName: 
\"kubernetes.io/projected/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-kube-api-access-7txf7\") pod \"glance-default-external-api-0\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") " pod="openstack/glance-default-external-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.261381 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cc653318-d8c5-4663-90ef-38b8f4b19275" (UID: "cc653318-d8c5-4663-90ef-38b8f4b19275"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.278863 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.298505 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cc653318-d8c5-4663-90ef-38b8f4b19275" (UID: "cc653318-d8c5-4663-90ef-38b8f4b19275"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.373020 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.384084 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\") pod \"glance-default-internal-api-0\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.384165 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-logs\") pod \"glance-default-external-api-0\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") " pod="openstack/glance-default-external-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.384264 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") " pod="openstack/glance-default-external-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.384319 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d72ba4-adcb-41a9-b840-4996715f2cc1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.384372 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7txf7\" (UniqueName: \"kubernetes.io/projected/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-kube-api-access-7txf7\") pod \"glance-default-external-api-0\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") " pod="openstack/glance-default-external-api-0" Jan 31 
09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.384410 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2d72ba4-adcb-41a9-b840-4996715f2cc1-logs\") pod \"glance-default-internal-api-0\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.384574 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2d72ba4-adcb-41a9-b840-4996715f2cc1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.384642 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-config-data\") pod \"glance-default-external-api-0\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") " pod="openstack/glance-default-external-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.384670 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p6zk\" (UniqueName: \"kubernetes.io/projected/d2d72ba4-adcb-41a9-b840-4996715f2cc1-kube-api-access-6p6zk\") pod \"glance-default-internal-api-0\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.384770 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2d72ba4-adcb-41a9-b840-4996715f2cc1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.384815 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d2d72ba4-adcb-41a9-b840-4996715f2cc1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.384872 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") " pod="openstack/glance-default-external-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.384905 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") " pod="openstack/glance-default-external-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.384964 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2d72ba4-adcb-41a9-b840-4996715f2cc1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " 
pod="openstack/glance-default-internal-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.384995 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-534498b2-d616-470f-a82d-6fd5620e2438\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-534498b2-d616-470f-a82d-6fd5620e2438\") pod \"glance-default-external-api-0\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") " pod="openstack/glance-default-external-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.385040 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-scripts\") pod \"glance-default-external-api-0\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") " pod="openstack/glance-default-external-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.386688 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") " pod="openstack/glance-default-external-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.387604 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-logs\") pod \"glance-default-external-api-0\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") " pod="openstack/glance-default-external-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.388798 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc653318-d8c5-4663-90ef-38b8f4b19275-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.390825 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.417273 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7txf7\" (UniqueName: \"kubernetes.io/projected/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-kube-api-access-7txf7\") pod \"glance-default-external-api-0\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") " pod="openstack/glance-default-external-api-0" Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.420572 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.420644 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-534498b2-d616-470f-a82d-6fd5620e2438\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-534498b2-d616-470f-a82d-6fd5620e2438\") pod \"glance-default-external-api-0\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b2b5f65c3ddebbe1a693d29972c24b4f4a39793430c5c2cc47acd10e0b700ef0/globalmount\"" pod="openstack/glance-default-external-api-0"
Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.426562 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") " pod="openstack/glance-default-external-api-0"
Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.430521 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-config-data\") pod \"glance-default-external-api-0\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") " pod="openstack/glance-default-external-api-0"
Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.438049 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-scripts\") pod \"glance-default-external-api-0\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") " pod="openstack/glance-default-external-api-0"
Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.443058 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") " pod="openstack/glance-default-external-api-0"
Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.491999 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d72ba4-adcb-41a9-b840-4996715f2cc1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.492067 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2d72ba4-adcb-41a9-b840-4996715f2cc1-logs\") pod \"glance-default-internal-api-0\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.492190 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2d72ba4-adcb-41a9-b840-4996715f2cc1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.492233 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p6zk\" (UniqueName: \"kubernetes.io/projected/d2d72ba4-adcb-41a9-b840-4996715f2cc1-kube-api-access-6p6zk\") pod \"glance-default-internal-api-0\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.492313 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2d72ba4-adcb-41a9-b840-4996715f2cc1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.492337 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d2d72ba4-adcb-41a9-b840-4996715f2cc1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.492393 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2d72ba4-adcb-41a9-b840-4996715f2cc1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.492473 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\") pod \"glance-default-internal-api-0\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.503943 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d72ba4-adcb-41a9-b840-4996715f2cc1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.507891 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.507949 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\") pod \"glance-default-internal-api-0\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c32ecbe26e9667ad28a7d3f49252f55c097486de0a04fe8536b2f1b0061aa335/globalmount\"" pod="openstack/glance-default-internal-api-0"
Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.529789 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d2d72ba4-adcb-41a9-b840-4996715f2cc1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.530295 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2d72ba4-adcb-41a9-b840-4996715f2cc1-logs\") pod \"glance-default-internal-api-0\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.559542 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2d72ba4-adcb-41a9-b840-4996715f2cc1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.576571 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p6zk\" (UniqueName: \"kubernetes.io/projected/d2d72ba4-adcb-41a9-b840-4996715f2cc1-kube-api-access-6p6zk\") pod \"glance-default-internal-api-0\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.587017 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2d72ba4-adcb-41a9-b840-4996715f2cc1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.604006 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-534498b2-d616-470f-a82d-6fd5620e2438\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-534498b2-d616-470f-a82d-6fd5620e2438\") pod \"glance-default-external-api-0\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") " pod="openstack/glance-default-external-api-0"
Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.604587 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2d72ba4-adcb-41a9-b840-4996715f2cc1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.764386 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 31 09:23:58 crc kubenswrapper[4830]: I0131 09:23:58.924709 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\") pod \"glance-default-internal-api-0\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:23:59 crc kubenswrapper[4830]: I0131 09:23:59.190448 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 31 09:23:59 crc kubenswrapper[4830]: I0131 09:23:59.191057 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" event={"ID":"cc653318-d8c5-4663-90ef-38b8f4b19275","Type":"ContainerDied","Data":"765b637d85bba8cd56653ccb257ed1c31e714fb731f0e8282ad086dc1a54c81a"}
Jan 31 09:23:59 crc kubenswrapper[4830]: I0131 09:23:59.191106 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-blgpz" event={"ID":"29262d41-4dc9-4d3e-9d2d-411076ab11c6","Type":"ContainerStarted","Data":"15afe595d7a6c43fa8ff8fb7860a492ad92ac26c8a052d95f7dca31b089d702e"}
Jan 31 09:23:59 crc kubenswrapper[4830]: I0131 09:23:59.191134 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-blgpz"]
Jan 31 09:23:59 crc kubenswrapper[4830]: I0131 09:23:59.191167 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt"]
Jan 31 09:23:59 crc kubenswrapper[4830]: I0131 09:23:59.191199 4830 scope.go:117] "RemoveContainer" containerID="c5e00d42d4a86dfe091d8277956f521ee05a78845d63433c8924aa95a212cc99"
Jan 31 09:23:59 crc kubenswrapper[4830]: I0131 09:23:59.227059 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-pr2kp"]
Jan 31 09:23:59 crc kubenswrapper[4830]: I0131 09:23:59.289559 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-hh79w"]
Jan 31 09:23:59 crc kubenswrapper[4830]: I0131 09:23:59.431879 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-ztgnf"]
Jan 31 09:23:59 crc kubenswrapper[4830]: W0131 09:23:59.438200 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce550202_087a_49b1_8796_10f03f0ab9be.slice/crio-4320145e44df1fe2533a87c4c9a6e7205107bda4b8ad5460e91ed39ba5678570 WatchSource:0}: Error finding container 4320145e44df1fe2533a87c4c9a6e7205107bda4b8ad5460e91ed39ba5678570: Status 404 returned error can't find the container with id 4320145e44df1fe2533a87c4c9a6e7205107bda4b8ad5460e91ed39ba5678570
Jan 31 09:23:59 crc kubenswrapper[4830]: I0131 09:23:59.478351 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 31 09:23:59 crc kubenswrapper[4830]: I0131 09:23:59.497672 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-w6kxz"]
Jan 31 09:23:59 crc kubenswrapper[4830]: I0131 09:23:59.505065 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-hh79w" event={"ID":"6324b6ba-4288-44f4-bf87-1a4356c1a9f0","Type":"ContainerStarted","Data":"ab181e61759699af586fd212a367c4ff5c4ea24b4ce13c2c9846f71f1f0cae7b"}
Jan 31 09:23:59 crc kubenswrapper[4830]: W0131 09:23:59.509971 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0617092f_40a9_4d3d_b472_f284a2b24000.slice/crio-5c6e2dd7d3e5fcbec9754bdd3b47a9158084fa1e19f804470ed0b49c0684bd9e WatchSource:0}: Error finding container 5c6e2dd7d3e5fcbec9754bdd3b47a9158084fa1e19f804470ed0b49c0684bd9e: Status 404 returned error can't find the container with id 5c6e2dd7d3e5fcbec9754bdd3b47a9158084fa1e19f804470ed0b49c0684bd9e
Jan 31 09:23:59 crc kubenswrapper[4830]: I0131 09:23:59.520052 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pr2kp" event={"ID":"bb9aed03-7e56-43de-92fc-3ac6352194af","Type":"ContainerStarted","Data":"779065af83fb8314a2f0526b7cfddc5ba70a742532c637053ee88f496065f6dd"}
Jan 31 09:23:59 crc kubenswrapper[4830]: I0131 09:23:59.527017 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt" event={"ID":"92faa0b1-7bae-4446-b6a1-52ea0d77aa52","Type":"ContainerStarted","Data":"e8a5afc0ad01731f46c3ccf1ffb2b2050653588807f54a1e6669f38c92ba4690"}
Jan 31 09:23:59 crc kubenswrapper[4830]: I0131 09:23:59.590983 4830 scope.go:117] "RemoveContainer" containerID="33cae71d971a098a127b212b048279f41d15396ccf35f3ddc0013f5c9d3c6fbe"
Jan 31 09:23:59 crc kubenswrapper[4830]: I0131 09:23:59.954936 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 09:23:59 crc kubenswrapper[4830]: I0131 09:23:59.987474 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-t2klw"]
Jan 31 09:24:00 crc kubenswrapper[4830]: I0131 09:24:00.016115 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-7w87z"]
Jan 31 09:24:00 crc kubenswrapper[4830]: I0131 09:24:00.158906 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 31 09:24:00 crc kubenswrapper[4830]: I0131 09:24:00.337463 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 31 09:24:00 crc kubenswrapper[4830]: I0131 09:24:00.698633 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 31 09:24:00 crc kubenswrapper[4830]: I0131 09:24:00.806458 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" event={"ID":"6d941bfc-a5bd-4764-8e53-a77414f25a21","Type":"ContainerStarted","Data":"bdaea899f23c9c78e7578be191ab2a37d833affa561fa837992f99403c99e05f"}
Jan 31 09:24:00 crc kubenswrapper[4830]: I0131 09:24:00.847301 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-blgpz" event={"ID":"29262d41-4dc9-4d3e-9d2d-411076ab11c6","Type":"ContainerStarted","Data":"fdb1043ccf73c9d37bcc827f69f1b9499e832f9028fa7da41f5a19e3692877ea"}
Jan 31 09:24:00 crc kubenswrapper[4830]: I0131 09:24:00.908874 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t2klw" event={"ID":"b8de8318-1eda-43cc-b522-86d6492c6376","Type":"ContainerStarted","Data":"52da16470151e23cc9ab10f0269d502dde6f3a04cb9e4692af08610763cec729"}
Jan 31 09:24:00 crc kubenswrapper[4830]: I0131 09:24:00.928816 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 31 09:24:00 crc kubenswrapper[4830]: I0131 09:24:00.940143 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d2d72ba4-adcb-41a9-b840-4996715f2cc1","Type":"ContainerStarted","Data":"eb90b2f232a09e7e29a45bdcb8d1270bf328ea944c8c33d058697a69cdb86ec9"}
Jan 31 09:24:00 crc kubenswrapper[4830]: I0131 09:24:00.961256 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-blgpz" podStartSLOduration=4.961233051 podStartE2EDuration="4.961233051s" podCreationTimestamp="2026-01-31 09:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:24:00.91863942 +0000 UTC m=+1385.412001862" watchObservedRunningTime="2026-01-31 09:24:00.961233051 +0000 UTC m=+1385.454595493"
Jan 31 09:24:00 crc kubenswrapper[4830]: I0131 09:24:00.976226 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ztgnf" event={"ID":"ce550202-087a-49b1-8796-10f03f0ab9be","Type":"ContainerStarted","Data":"c9fb7d799f1d6dd9a5876fd3363ab7922287e7e766c564c9787e2b2952eb9668"}
Jan 31 09:24:00 crc kubenswrapper[4830]: I0131 09:24:00.976282 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ztgnf" event={"ID":"ce550202-087a-49b1-8796-10f03f0ab9be","Type":"ContainerStarted","Data":"4320145e44df1fe2533a87c4c9a6e7205107bda4b8ad5460e91ed39ba5678570"}
Jan 31 09:24:01 crc kubenswrapper[4830]: I0131 09:24:01.038363 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-w6kxz" event={"ID":"0617092f-40a9-4d3d-b472-f284a2b24000","Type":"ContainerStarted","Data":"5c6e2dd7d3e5fcbec9754bdd3b47a9158084fa1e19f804470ed0b49c0684bd9e"}
Jan 31 09:24:01 crc kubenswrapper[4830]: I0131 09:24:01.073669 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 09:24:01 crc kubenswrapper[4830]: I0131 09:24:01.105838 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-ztgnf" podStartSLOduration=5.105805861 podStartE2EDuration="5.105805861s" podCreationTimestamp="2026-01-31 09:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:24:01.039307491 +0000 UTC m=+1385.532669933" watchObservedRunningTime="2026-01-31 09:24:01.105805861 +0000 UTC m=+1385.599168303"
Jan 31 09:24:01 crc kubenswrapper[4830]: I0131 09:24:01.125032 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b","Type":"ContainerStarted","Data":"a84e5a28453621c923b3f55ffe6aea29835377cfd584075487dcb0e0c29be37a"}
Jan 31 09:24:01 crc kubenswrapper[4830]: I0131 09:24:01.173303 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39688f84-c227-4658-aee1-ce5e5d450ca1","Type":"ContainerStarted","Data":"ee4289ea302093429d5df627640be772463a6449d5b3c652786a1c1df47a36e1"}
Jan 31 09:24:01 crc kubenswrapper[4830]: I0131 09:24:01.241871 4830 generic.go:334] "Generic (PLEG): container finished" podID="92faa0b1-7bae-4446-b6a1-52ea0d77aa52" containerID="9da23614188063fb12e08f693beade668a207d079e25498447189e25ffd731f1" exitCode=0
Jan 31 09:24:01 crc kubenswrapper[4830]: I0131 09:24:01.241931 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt" event={"ID":"92faa0b1-7bae-4446-b6a1-52ea0d77aa52","Type":"ContainerDied","Data":"9da23614188063fb12e08f693beade668a207d079e25498447189e25ffd731f1"}
Jan 31 09:24:02 crc kubenswrapper[4830]: I0131 09:24:02.208403 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt"
Jan 31 09:24:02 crc kubenswrapper[4830]: I0131 09:24:02.263380 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt"
Jan 31 09:24:02 crc kubenswrapper[4830]: I0131 09:24:02.277090 4830 generic.go:334] "Generic (PLEG): container finished" podID="6d941bfc-a5bd-4764-8e53-a77414f25a21" containerID="138c3c0e08a8105a6e1cae80a2cf9fc21dcf54e1d9169135a0e3b2b82e6fd73e" exitCode=0
Jan 31 09:24:02 crc kubenswrapper[4830]: I0131 09:24:02.293636 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt" event={"ID":"92faa0b1-7bae-4446-b6a1-52ea0d77aa52","Type":"ContainerDied","Data":"e8a5afc0ad01731f46c3ccf1ffb2b2050653588807f54a1e6669f38c92ba4690"}
Jan 31 09:24:02 crc kubenswrapper[4830]: I0131 09:24:02.293694 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" event={"ID":"6d941bfc-a5bd-4764-8e53-a77414f25a21","Type":"ContainerDied","Data":"138c3c0e08a8105a6e1cae80a2cf9fc21dcf54e1d9169135a0e3b2b82e6fd73e"}
Jan 31 09:24:02 crc kubenswrapper[4830]: I0131 09:24:02.293742 4830 scope.go:117] "RemoveContainer" containerID="9da23614188063fb12e08f693beade668a207d079e25498447189e25ffd731f1"
Jan 31 09:24:02 crc kubenswrapper[4830]: I0131 09:24:02.329619 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-ovsdbserver-nb\") pod \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\" (UID: \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\") "
Jan 31 09:24:02 crc kubenswrapper[4830]: I0131 09:24:02.330638 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-dns-swift-storage-0\") pod \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\" (UID: \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\") "
Jan 31 09:24:02 crc kubenswrapper[4830]: I0131 09:24:02.330698 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-ovsdbserver-sb\") pod \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\" (UID: \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\") "
Jan 31 09:24:02 crc kubenswrapper[4830]: I0131 09:24:02.330764 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-dns-svc\") pod \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\" (UID: \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\") "
Jan 31 09:24:02 crc kubenswrapper[4830]: I0131 09:24:02.330881 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc5jd\" (UniqueName: \"kubernetes.io/projected/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-kube-api-access-cc5jd\") pod \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\" (UID: \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\") "
Jan 31 09:24:02 crc kubenswrapper[4830]: I0131 09:24:02.331004 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-config\") pod \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\" (UID: \"92faa0b1-7bae-4446-b6a1-52ea0d77aa52\") "
Jan 31 09:24:02 crc kubenswrapper[4830]: I0131 09:24:02.341275 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-kube-api-access-cc5jd" (OuterVolumeSpecName: "kube-api-access-cc5jd") pod "92faa0b1-7bae-4446-b6a1-52ea0d77aa52" (UID: "92faa0b1-7bae-4446-b6a1-52ea0d77aa52"). InnerVolumeSpecName "kube-api-access-cc5jd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:24:02 crc kubenswrapper[4830]: I0131 09:24:02.444376 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc5jd\" (UniqueName: \"kubernetes.io/projected/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-kube-api-access-cc5jd\") on node \"crc\" DevicePath \"\""
Jan 31 09:24:02 crc kubenswrapper[4830]: I0131 09:24:02.445603 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "92faa0b1-7bae-4446-b6a1-52ea0d77aa52" (UID: "92faa0b1-7bae-4446-b6a1-52ea0d77aa52"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:24:02 crc kubenswrapper[4830]: I0131 09:24:02.505479 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "92faa0b1-7bae-4446-b6a1-52ea0d77aa52" (UID: "92faa0b1-7bae-4446-b6a1-52ea0d77aa52"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:24:02 crc kubenswrapper[4830]: I0131 09:24:02.521512 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-config" (OuterVolumeSpecName: "config") pod "92faa0b1-7bae-4446-b6a1-52ea0d77aa52" (UID: "92faa0b1-7bae-4446-b6a1-52ea0d77aa52"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:24:02 crc kubenswrapper[4830]: I0131 09:24:02.523271 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "92faa0b1-7bae-4446-b6a1-52ea0d77aa52" (UID: "92faa0b1-7bae-4446-b6a1-52ea0d77aa52"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:24:02 crc kubenswrapper[4830]: I0131 09:24:02.548421 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-config\") on node \"crc\" DevicePath \"\""
Jan 31 09:24:02 crc kubenswrapper[4830]: I0131 09:24:02.548821 4830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 31 09:24:02 crc kubenswrapper[4830]: I0131 09:24:02.548832 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 31 09:24:02 crc kubenswrapper[4830]: I0131 09:24:02.548841 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 31 09:24:02 crc kubenswrapper[4830]: I0131 09:24:02.567920 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "92faa0b1-7bae-4446-b6a1-52ea0d77aa52" (UID: "92faa0b1-7bae-4446-b6a1-52ea0d77aa52"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:24:02 crc kubenswrapper[4830]: I0131 09:24:02.664984 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92faa0b1-7bae-4446-b6a1-52ea0d77aa52-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 31 09:24:02 crc kubenswrapper[4830]: I0131 09:24:02.708801 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt"]
Jan 31 09:24:02 crc kubenswrapper[4830]: I0131 09:24:02.716764 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-v8fbt"]
Jan 31 09:24:03 crc kubenswrapper[4830]: I0131 09:24:03.399157 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" event={"ID":"6d941bfc-a5bd-4764-8e53-a77414f25a21","Type":"ContainerStarted","Data":"d84a1c23794a60f4c621178ff37b1c7344b9bb8cb7c28fc154e40f0e512c6728"}
Jan 31 09:24:03 crc kubenswrapper[4830]: I0131 09:24:03.399676 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-7w87z"
Jan 31 09:24:03 crc kubenswrapper[4830]: I0131 09:24:03.403387 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b","Type":"ContainerStarted","Data":"07fe93a34ba1ce78aab3f7fb664f3a9620d952a74febad7f37dd577d916dc82f"}
Jan 31 09:24:03 crc kubenswrapper[4830]: I0131 09:24:03.419833 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d2d72ba4-adcb-41a9-b840-4996715f2cc1","Type":"ContainerStarted","Data":"5ae40eff30442b5c5780e452596934db197d847ee4785925d6a706ebb8ecf683"}
Jan 31 09:24:04 crc kubenswrapper[4830]: I0131 09:24:04.328969 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92faa0b1-7bae-4446-b6a1-52ea0d77aa52" path="/var/lib/kubelet/pods/92faa0b1-7bae-4446-b6a1-52ea0d77aa52/volumes"
Jan 31 09:24:04 crc kubenswrapper[4830]: I0131 09:24:04.539126 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b","Type":"ContainerStarted","Data":"8ee34c3a93461eb63dfa313d9ba98638f648a695aa93909eb7f49dda5872c10d"}
Jan 31 09:24:04 crc kubenswrapper[4830]: I0131 09:24:04.540254 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b" containerName="glance-log" containerID="cri-o://07fe93a34ba1ce78aab3f7fb664f3a9620d952a74febad7f37dd577d916dc82f" gracePeriod=30
Jan 31 09:24:04 crc kubenswrapper[4830]: I0131 09:24:04.540996 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b" containerName="glance-httpd" containerID="cri-o://8ee34c3a93461eb63dfa313d9ba98638f648a695aa93909eb7f49dda5872c10d" gracePeriod=30
Jan 31 09:24:04 crc kubenswrapper[4830]: I0131 09:24:04.588145 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" podStartSLOduration=7.588115204 podStartE2EDuration="7.588115204s" podCreationTimestamp="2026-01-31 09:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:24:03.441008961 +0000 UTC m=+1387.934371423" watchObservedRunningTime="2026-01-31 09:24:04.588115204 +0000 UTC m=+1389.081477646"
Jan 31 09:24:04 crc kubenswrapper[4830]: I0131 09:24:04.599544 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.599517038 podStartE2EDuration="8.599517038s" podCreationTimestamp="2026-01-31 09:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:24:04.595627998 +0000 UTC m=+1389.088990440" watchObservedRunningTime="2026-01-31 09:24:04.599517038 +0000 UTC m=+1389.092879480"
Jan 31 09:24:04 crc kubenswrapper[4830]: I0131 09:24:04.641261 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d2d72ba4-adcb-41a9-b840-4996715f2cc1" containerName="glance-log" containerID="cri-o://5ae40eff30442b5c5780e452596934db197d847ee4785925d6a706ebb8ecf683" gracePeriod=30
Jan 31 09:24:04 crc kubenswrapper[4830]: I0131 09:24:04.641631 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d2d72ba4-adcb-41a9-b840-4996715f2cc1","Type":"ContainerStarted","Data":"34039caa4f38af2072d9be7ebba86aca6b716fcb5dbfc034cc96e173e19f1da5"}
Jan 31 09:24:04 crc kubenswrapper[4830]: I0131 09:24:04.642005 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d2d72ba4-adcb-41a9-b840-4996715f2cc1" containerName="glance-httpd" containerID="cri-o://34039caa4f38af2072d9be7ebba86aca6b716fcb5dbfc034cc96e173e19f1da5" gracePeriod=30
Jan 31 09:24:04 crc kubenswrapper[4830]: I0131 09:24:04.731821 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.731796629 podStartE2EDuration="8.731796629s" podCreationTimestamp="2026-01-31 09:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:24:04.717711639 +0000 UTC m=+1389.211074091" watchObservedRunningTime="2026-01-31 09:24:04.731796629 +0000 UTC m=+1389.225159071"
Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.569763 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.681431 4830 generic.go:334] "Generic (PLEG): container finished" podID="2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b" containerID="8ee34c3a93461eb63dfa313d9ba98638f648a695aa93909eb7f49dda5872c10d" exitCode=143
Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.681476 4830 generic.go:334] "Generic (PLEG): container finished" podID="2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b" containerID="07fe93a34ba1ce78aab3f7fb664f3a9620d952a74febad7f37dd577d916dc82f" exitCode=143
Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.681551 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b","Type":"ContainerDied","Data":"8ee34c3a93461eb63dfa313d9ba98638f648a695aa93909eb7f49dda5872c10d"}
Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.681592 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b","Type":"ContainerDied","Data":"07fe93a34ba1ce78aab3f7fb664f3a9620d952a74febad7f37dd577d916dc82f"}
Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.681604 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b","Type":"ContainerDied","Data":"a84e5a28453621c923b3f55ffe6aea29835377cfd584075487dcb0e0c29be37a"}
Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.681623 4830 scope.go:117] "RemoveContainer" containerID="8ee34c3a93461eb63dfa313d9ba98638f648a695aa93909eb7f49dda5872c10d"
Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.682323 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.689398 4830 generic.go:334] "Generic (PLEG): container finished" podID="d2d72ba4-adcb-41a9-b840-4996715f2cc1" containerID="34039caa4f38af2072d9be7ebba86aca6b716fcb5dbfc034cc96e173e19f1da5" exitCode=143
Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.689441 4830 generic.go:334] "Generic (PLEG): container finished" podID="d2d72ba4-adcb-41a9-b840-4996715f2cc1" containerID="5ae40eff30442b5c5780e452596934db197d847ee4785925d6a706ebb8ecf683" exitCode=143
Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.689546 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d2d72ba4-adcb-41a9-b840-4996715f2cc1","Type":"ContainerDied","Data":"34039caa4f38af2072d9be7ebba86aca6b716fcb5dbfc034cc96e173e19f1da5"}
Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.689600 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d2d72ba4-adcb-41a9-b840-4996715f2cc1","Type":"ContainerDied","Data":"5ae40eff30442b5c5780e452596934db197d847ee4785925d6a706ebb8ecf683"}
Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.693343 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.743698 4830 scope.go:117] "RemoveContainer" containerID="07fe93a34ba1ce78aab3f7fb664f3a9620d952a74febad7f37dd577d916dc82f"
Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.756370 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-httpd-run\") pod \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") "
Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.756489 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7txf7\" (UniqueName: \"kubernetes.io/projected/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-kube-api-access-7txf7\") pod \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") "
Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.756577 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-config-data\") pod \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") "
Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.756701 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-public-tls-certs\") pod \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") "
Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.756843 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-logs\") pod \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") "
Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.756873 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-combined-ca-bundle\") pod \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") "
Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.756947 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-scripts\") pod \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") "
Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.757279 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-534498b2-d616-470f-a82d-6fd5620e2438\") pod \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\" (UID: \"2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b\") "
Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.757702 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b" (UID: "2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b"). InnerVolumeSpecName "httpd-run".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.757939 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-logs" (OuterVolumeSpecName: "logs") pod "2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b" (UID: "2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.764294 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-scripts" (OuterVolumeSpecName: "scripts") pod "2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b" (UID: "2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.767913 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-kube-api-access-7txf7" (OuterVolumeSpecName: "kube-api-access-7txf7") pod "2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b" (UID: "2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b"). InnerVolumeSpecName "kube-api-access-7txf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.775709 4830 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.775791 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7txf7\" (UniqueName: \"kubernetes.io/projected/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-kube-api-access-7txf7\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.775813 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-logs\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.775843 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.825926 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b" (UID: "2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.830645 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-534498b2-d616-470f-a82d-6fd5620e2438" (OuterVolumeSpecName: "glance") pod "2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b" (UID: "2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b"). InnerVolumeSpecName "pvc-534498b2-d616-470f-a82d-6fd5620e2438". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.843985 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-config-data" (OuterVolumeSpecName: "config-data") pod "2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b" (UID: "2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.878236 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\") pod \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.878504 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d2d72ba4-adcb-41a9-b840-4996715f2cc1-httpd-run\") pod \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.878534 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2d72ba4-adcb-41a9-b840-4996715f2cc1-internal-tls-certs\") pod \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.878632 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2d72ba4-adcb-41a9-b840-4996715f2cc1-scripts\") pod \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.878697 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6p6zk\" (UniqueName: \"kubernetes.io/projected/d2d72ba4-adcb-41a9-b840-4996715f2cc1-kube-api-access-6p6zk\") pod \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.879435 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2d72ba4-adcb-41a9-b840-4996715f2cc1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d2d72ba4-adcb-41a9-b840-4996715f2cc1" (UID: "d2d72ba4-adcb-41a9-b840-4996715f2cc1"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.879559 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d72ba4-adcb-41a9-b840-4996715f2cc1-combined-ca-bundle\") pod \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.879658 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2d72ba4-adcb-41a9-b840-4996715f2cc1-config-data\") pod \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.879706 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2d72ba4-adcb-41a9-b840-4996715f2cc1-logs\") pod \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\" (UID: \"d2d72ba4-adcb-41a9-b840-4996715f2cc1\") " Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.881903 4830 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d2d72ba4-adcb-41a9-b840-4996715f2cc1-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.881928 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.881946 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.881984 4830 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-534498b2-d616-470f-a82d-6fd5620e2438\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-534498b2-d616-470f-a82d-6fd5620e2438\") on node \"crc\" " Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.887021 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2d72ba4-adcb-41a9-b840-4996715f2cc1-logs" (OuterVolumeSpecName: "logs") pod "d2d72ba4-adcb-41a9-b840-4996715f2cc1" (UID: "d2d72ba4-adcb-41a9-b840-4996715f2cc1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.889673 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d72ba4-adcb-41a9-b840-4996715f2cc1-scripts" (OuterVolumeSpecName: "scripts") pod "d2d72ba4-adcb-41a9-b840-4996715f2cc1" (UID: "d2d72ba4-adcb-41a9-b840-4996715f2cc1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.898054 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b" (UID: "2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.898078 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2d72ba4-adcb-41a9-b840-4996715f2cc1-kube-api-access-6p6zk" (OuterVolumeSpecName: "kube-api-access-6p6zk") pod "d2d72ba4-adcb-41a9-b840-4996715f2cc1" (UID: "d2d72ba4-adcb-41a9-b840-4996715f2cc1"). InnerVolumeSpecName "kube-api-access-6p6zk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.940470 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2" (OuterVolumeSpecName: "glance") pod "d2d72ba4-adcb-41a9-b840-4996715f2cc1" (UID: "d2d72ba4-adcb-41a9-b840-4996715f2cc1"). InnerVolumeSpecName "pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 09:24:05 crc kubenswrapper[4830]: I0131 09:24:05.977391 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d72ba4-adcb-41a9-b840-4996715f2cc1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d2d72ba4-adcb-41a9-b840-4996715f2cc1" (UID: "d2d72ba4-adcb-41a9-b840-4996715f2cc1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.006115 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2d72ba4-adcb-41a9-b840-4996715f2cc1-logs\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.006188 4830 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\") on node \"crc\" " Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.006217 4830 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2d72ba4-adcb-41a9-b840-4996715f2cc1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.006230 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2d72ba4-adcb-41a9-b840-4996715f2cc1-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.006241 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6p6zk\" (UniqueName: \"kubernetes.io/projected/d2d72ba4-adcb-41a9-b840-4996715f2cc1-kube-api-access-6p6zk\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.006257 4830 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.041288 4830 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.041647 4830 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-534498b2-d616-470f-a82d-6fd5620e2438" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-534498b2-d616-470f-a82d-6fd5620e2438") on node "crc" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.096381 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d72ba4-adcb-41a9-b840-4996715f2cc1-config-data" (OuterVolumeSpecName: "config-data") pod "d2d72ba4-adcb-41a9-b840-4996715f2cc1" (UID: "d2d72ba4-adcb-41a9-b840-4996715f2cc1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.131466 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2d72ba4-adcb-41a9-b840-4996715f2cc1-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.132361 4830 reconciler_common.go:293] "Volume detached for volume \"pvc-534498b2-d616-470f-a82d-6fd5620e2438\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-534498b2-d616-470f-a82d-6fd5620e2438\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.132059 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.145752 4830 scope.go:117] "RemoveContainer" containerID="8ee34c3a93461eb63dfa313d9ba98638f648a695aa93909eb7f49dda5872c10d" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.145753 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d72ba4-adcb-41a9-b840-4996715f2cc1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2d72ba4-adcb-41a9-b840-4996715f2cc1" (UID: "d2d72ba4-adcb-41a9-b840-4996715f2cc1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:24:06 crc kubenswrapper[4830]: E0131 09:24:06.150069 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ee34c3a93461eb63dfa313d9ba98638f648a695aa93909eb7f49dda5872c10d\": container with ID starting with 8ee34c3a93461eb63dfa313d9ba98638f648a695aa93909eb7f49dda5872c10d not found: ID does not exist" containerID="8ee34c3a93461eb63dfa313d9ba98638f648a695aa93909eb7f49dda5872c10d" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.150120 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ee34c3a93461eb63dfa313d9ba98638f648a695aa93909eb7f49dda5872c10d"} err="failed to get container status \"8ee34c3a93461eb63dfa313d9ba98638f648a695aa93909eb7f49dda5872c10d\": rpc error: code = NotFound desc = could not find container \"8ee34c3a93461eb63dfa313d9ba98638f648a695aa93909eb7f49dda5872c10d\": container with ID starting with 8ee34c3a93461eb63dfa313d9ba98638f648a695aa93909eb7f49dda5872c10d not found: ID does not exist" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.150152 4830 scope.go:117] "RemoveContainer" containerID="07fe93a34ba1ce78aab3f7fb664f3a9620d952a74febad7f37dd577d916dc82f" Jan 31 09:24:06 crc kubenswrapper[4830]: E0131 09:24:06.151484 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07fe93a34ba1ce78aab3f7fb664f3a9620d952a74febad7f37dd577d916dc82f\": container with ID starting with 07fe93a34ba1ce78aab3f7fb664f3a9620d952a74febad7f37dd577d916dc82f not found: ID does not exist" containerID="07fe93a34ba1ce78aab3f7fb664f3a9620d952a74febad7f37dd577d916dc82f" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.151514 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07fe93a34ba1ce78aab3f7fb664f3a9620d952a74febad7f37dd577d916dc82f"} err="failed to get container status \"07fe93a34ba1ce78aab3f7fb664f3a9620d952a74febad7f37dd577d916dc82f\": rpc error: code = NotFound desc = could not find container \"07fe93a34ba1ce78aab3f7fb664f3a9620d952a74febad7f37dd577d916dc82f\": container with ID starting with 07fe93a34ba1ce78aab3f7fb664f3a9620d952a74febad7f37dd577d916dc82f not found: ID does not exist" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.151532 4830 scope.go:117] "RemoveContainer" containerID="8ee34c3a93461eb63dfa313d9ba98638f648a695aa93909eb7f49dda5872c10d" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.152590 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ee34c3a93461eb63dfa313d9ba98638f648a695aa93909eb7f49dda5872c10d"} err="failed to get container status \"8ee34c3a93461eb63dfa313d9ba98638f648a695aa93909eb7f49dda5872c10d\": rpc error: code = NotFound desc = could not find container \"8ee34c3a93461eb63dfa313d9ba98638f648a695aa93909eb7f49dda5872c10d\": container with ID starting with 8ee34c3a93461eb63dfa313d9ba98638f648a695aa93909eb7f49dda5872c10d not found: ID does not exist" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.152619 4830 scope.go:117] "RemoveContainer" containerID="07fe93a34ba1ce78aab3f7fb664f3a9620d952a74febad7f37dd577d916dc82f" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.154662 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07fe93a34ba1ce78aab3f7fb664f3a9620d952a74febad7f37dd577d916dc82f"} err="failed to get 
container status \"07fe93a34ba1ce78aab3f7fb664f3a9620d952a74febad7f37dd577d916dc82f\": rpc error: code = NotFound desc = could not find container \"07fe93a34ba1ce78aab3f7fb664f3a9620d952a74febad7f37dd577d916dc82f\": container with ID starting with 07fe93a34ba1ce78aab3f7fb664f3a9620d952a74febad7f37dd577d916dc82f not found: ID does not exist" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.163321 4830 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.163643 4830 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2") on node "crc" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.167929 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.182981 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 09:24:06 crc kubenswrapper[4830]: E0131 09:24:06.183974 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d72ba4-adcb-41a9-b840-4996715f2cc1" containerName="glance-httpd" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.184004 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d72ba4-adcb-41a9-b840-4996715f2cc1" containerName="glance-httpd" Jan 31 09:24:06 crc kubenswrapper[4830]: E0131 09:24:06.184015 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b" containerName="glance-log" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.184021 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b" containerName="glance-log" Jan 31 09:24:06 crc kubenswrapper[4830]: E0131 09:24:06.184058 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b" containerName="glance-httpd" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.184066 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b" containerName="glance-httpd" Jan 31 09:24:06 crc kubenswrapper[4830]: E0131 09:24:06.184088 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d72ba4-adcb-41a9-b840-4996715f2cc1" containerName="glance-log" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.184097 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d72ba4-adcb-41a9-b840-4996715f2cc1" containerName="glance-log" Jan 31 09:24:06 crc kubenswrapper[4830]: E0131 09:24:06.184132 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92faa0b1-7bae-4446-b6a1-52ea0d77aa52" containerName="init" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.184139 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="92faa0b1-7bae-4446-b6a1-52ea0d77aa52" containerName="init" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.184393 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2d72ba4-adcb-41a9-b840-4996715f2cc1" containerName="glance-httpd" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.184409 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2d72ba4-adcb-41a9-b840-4996715f2cc1" containerName="glance-log" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 
09:24:06.184426 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b" containerName="glance-httpd" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.184442 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b" containerName="glance-log" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.184450 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="92faa0b1-7bae-4446-b6a1-52ea0d77aa52" containerName="init" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.186796 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.191378 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.191468 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.197887 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.235239 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.235316 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-scripts\") pod \"glance-default-external-api-0\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.235354 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-config-data\") pod \"glance-default-external-api-0\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.235413 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.235462 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.235566 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-logs\") pod 
\"glance-default-external-api-0\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.235676 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-534498b2-d616-470f-a82d-6fd5620e2438\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-534498b2-d616-470f-a82d-6fd5620e2438\") pod \"glance-default-external-api-0\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.235743 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4xpj\" (UniqueName: \"kubernetes.io/projected/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-kube-api-access-w4xpj\") pod \"glance-default-external-api-0\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.235860 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d72ba4-adcb-41a9-b840-4996715f2cc1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.235879 4830 reconciler_common.go:293] "Volume detached for volume \"pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.276302 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b" path="/var/lib/kubelet/pods/2ea230a4-e2bf-48fa-86fb-b2d6f049ca0b/volumes" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.337837 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.337906 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-scripts\") pod \"glance-default-external-api-0\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.337930 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-config-data\") pod \"glance-default-external-api-0\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.337974 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.338013 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.338167 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-logs\") pod \"glance-default-external-api-0\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.343332 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-scripts\") pod \"glance-default-external-api-0\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.352611 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.354146 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-logs\") pod \"glance-default-external-api-0\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.354289 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-534498b2-d616-470f-a82d-6fd5620e2438\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-534498b2-d616-470f-a82d-6fd5620e2438\") pod \"glance-default-external-api-0\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.354369 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4xpj\" (UniqueName: \"kubernetes.io/projected/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-kube-api-access-w4xpj\") pod \"glance-default-external-api-0\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.357329 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.363344 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-config-data\") pod \"glance-default-external-api-0\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.364917 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.364944 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-534498b2-d616-470f-a82d-6fd5620e2438\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-534498b2-d616-470f-a82d-6fd5620e2438\") pod \"glance-default-external-api-0\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b2b5f65c3ddebbe1a693d29972c24b4f4a39793430c5c2cc47acd10e0b700ef0/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.365418 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.375421 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4xpj\" (UniqueName: \"kubernetes.io/projected/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-kube-api-access-w4xpj\") pod \"glance-default-external-api-0\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.428021 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-534498b2-d616-470f-a82d-6fd5620e2438\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-534498b2-d616-470f-a82d-6fd5620e2438\") pod \"glance-default-external-api-0\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.516473 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.720764 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d2d72ba4-adcb-41a9-b840-4996715f2cc1","Type":"ContainerDied","Data":"eb90b2f232a09e7e29a45bdcb8d1270bf328ea944c8c33d058697a69cdb86ec9"} Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.720842 4830 scope.go:117] "RemoveContainer" containerID="34039caa4f38af2072d9be7ebba86aca6b716fcb5dbfc034cc96e173e19f1da5" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.720929 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.840432 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.869417 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.897953 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.900502 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.903560 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.905089 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.921168 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.971831 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ec35101-03e3-421d-8799-a7a0b1864b9b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.971987 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ec35101-03e3-421d-8799-a7a0b1864b9b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.972046 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srpvg\" (UniqueName: \"kubernetes.io/projected/0ec35101-03e3-421d-8799-a7a0b1864b9b-kube-api-access-srpvg\") pod \"glance-default-internal-api-0\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.972162 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ec35101-03e3-421d-8799-a7a0b1864b9b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.972370 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ec35101-03e3-421d-8799-a7a0b1864b9b-logs\") pod \"glance-default-internal-api-0\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.972537 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\") pod \"glance-default-internal-api-0\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.972649 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ec35101-03e3-421d-8799-a7a0b1864b9b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:24:06 crc kubenswrapper[4830]: I0131 09:24:06.973000 4830 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ec35101-03e3-421d-8799-a7a0b1864b9b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:24:07 crc kubenswrapper[4830]: I0131 09:24:07.075340 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ec35101-03e3-421d-8799-a7a0b1864b9b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:24:07 crc kubenswrapper[4830]: I0131 09:24:07.075415 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ec35101-03e3-421d-8799-a7a0b1864b9b-logs\") pod \"glance-default-internal-api-0\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:24:07 crc kubenswrapper[4830]: I0131 09:24:07.075487 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\") pod \"glance-default-internal-api-0\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:24:07 crc kubenswrapper[4830]: I0131 09:24:07.075521 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ec35101-03e3-421d-8799-a7a0b1864b9b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:24:07 crc kubenswrapper[4830]: I0131 09:24:07.075601 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ec35101-03e3-421d-8799-a7a0b1864b9b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:24:07 crc kubenswrapper[4830]: I0131 09:24:07.075646 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ec35101-03e3-421d-8799-a7a0b1864b9b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:24:07 crc kubenswrapper[4830]: I0131 09:24:07.075688 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ec35101-03e3-421d-8799-a7a0b1864b9b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:24:07 crc kubenswrapper[4830]: I0131 09:24:07.075743 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srpvg\" (UniqueName: \"kubernetes.io/projected/0ec35101-03e3-421d-8799-a7a0b1864b9b-kube-api-access-srpvg\") pod \"glance-default-internal-api-0\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:24:07 crc kubenswrapper[4830]: I0131 09:24:07.078528 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ec35101-03e3-421d-8799-a7a0b1864b9b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:24:07 crc kubenswrapper[4830]: I0131 09:24:07.078784 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ec35101-03e3-421d-8799-a7a0b1864b9b-logs\") pod \"glance-default-internal-api-0\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:24:07 crc kubenswrapper[4830]: I0131 09:24:07.081859 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 31 09:24:07 crc kubenswrapper[4830]: I0131 09:24:07.081892 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\") pod \"glance-default-internal-api-0\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c32ecbe26e9667ad28a7d3f49252f55c097486de0a04fe8536b2f1b0061aa335/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 31 09:24:07 crc kubenswrapper[4830]: I0131 09:24:07.084660 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ec35101-03e3-421d-8799-a7a0b1864b9b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:24:07 crc kubenswrapper[4830]: I0131 09:24:07.084998 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ec35101-03e3-421d-8799-a7a0b1864b9b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:24:07 crc kubenswrapper[4830]: I0131 09:24:07.085118 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ec35101-03e3-421d-8799-a7a0b1864b9b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:24:07 crc kubenswrapper[4830]: I0131 09:24:07.105918 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ec35101-03e3-421d-8799-a7a0b1864b9b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:24:07 crc kubenswrapper[4830]: I0131 09:24:07.106507 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srpvg\" (UniqueName: \"kubernetes.io/projected/0ec35101-03e3-421d-8799-a7a0b1864b9b-kube-api-access-srpvg\") pod \"glance-default-internal-api-0\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:24:07 crc kubenswrapper[4830]: I0131 09:24:07.202493 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\") pod \"glance-default-internal-api-0\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") " pod="openstack/glance-default-internal-api-0" Jan 31 09:24:07 crc kubenswrapper[4830]: I0131 09:24:07.234576 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 09:24:07 crc kubenswrapper[4830]: I0131 09:24:07.742293 4830 generic.go:334] "Generic (PLEG): container finished" podID="29262d41-4dc9-4d3e-9d2d-411076ab11c6" containerID="fdb1043ccf73c9d37bcc827f69f1b9499e832f9028fa7da41f5a19e3692877ea" exitCode=0 Jan 31 09:24:07 crc kubenswrapper[4830]: I0131 09:24:07.742685 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-blgpz" event={"ID":"29262d41-4dc9-4d3e-9d2d-411076ab11c6","Type":"ContainerDied","Data":"fdb1043ccf73c9d37bcc827f69f1b9499e832f9028fa7da41f5a19e3692877ea"} Jan 31 09:24:08 crc kubenswrapper[4830]: I0131 09:24:08.203995 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" Jan 31 09:24:08 crc kubenswrapper[4830]: I0131 09:24:08.286837 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2d72ba4-adcb-41a9-b840-4996715f2cc1" path="/var/lib/kubelet/pods/d2d72ba4-adcb-41a9-b840-4996715f2cc1/volumes" Jan 31 09:24:08 crc kubenswrapper[4830]: I0131 09:24:08.288406 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-v7bq9"] Jan 31 09:24:08 crc kubenswrapper[4830]: I0131 09:24:08.288671 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" podUID="78be811e-7bfb-400f-9e75-b2853dc051bd" containerName="dnsmasq-dns" containerID="cri-o://a431904c615c8eab3f850504c49e2d9ad100a3fa1f1f5f56c5c038f7f2641a8f" gracePeriod=10 Jan 31 09:24:08 crc kubenswrapper[4830]: I0131 09:24:08.759916 4830 generic.go:334] "Generic (PLEG): container finished" podID="78be811e-7bfb-400f-9e75-b2853dc051bd" containerID="a431904c615c8eab3f850504c49e2d9ad100a3fa1f1f5f56c5c038f7f2641a8f" exitCode=0 Jan 31 09:24:08 crc kubenswrapper[4830]: I0131 09:24:08.760009 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" event={"ID":"78be811e-7bfb-400f-9e75-b2853dc051bd","Type":"ContainerDied","Data":"a431904c615c8eab3f850504c49e2d9ad100a3fa1f1f5f56c5c038f7f2641a8f"} Jan 31 09:24:12 crc kubenswrapper[4830]: I0131 09:24:12.036134 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" podUID="78be811e-7bfb-400f-9e75-b2853dc051bd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.167:5353: connect: connection refused" Jan 31 09:24:13 crc kubenswrapper[4830]: I0131 09:24:13.436914 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-blgpz" Jan 31 09:24:13 crc kubenswrapper[4830]: I0131 09:24:13.551858 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2786z\" (UniqueName: \"kubernetes.io/projected/29262d41-4dc9-4d3e-9d2d-411076ab11c6-kube-api-access-2786z\") pod \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\" (UID: \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\") " Jan 31 09:24:13 crc kubenswrapper[4830]: I0131 09:24:13.551953 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-config-data\") pod \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\" (UID: \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\") " Jan 31 09:24:13 crc kubenswrapper[4830]: I0131 09:24:13.551974 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-scripts\") pod \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\" (UID: \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\") " Jan 31 09:24:13 crc kubenswrapper[4830]: I0131 09:24:13.552046 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-fernet-keys\") pod \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\" (UID: \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\") " Jan 31 09:24:13 crc kubenswrapper[4830]: I0131 09:24:13.553946 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-combined-ca-bundle\") pod \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\" (UID: \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\") " Jan 31 09:24:13 crc kubenswrapper[4830]: I0131 09:24:13.554245 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-credential-keys\") pod \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\" (UID: \"29262d41-4dc9-4d3e-9d2d-411076ab11c6\") " Jan 31 09:24:13 crc kubenswrapper[4830]: I0131 09:24:13.571903 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "29262d41-4dc9-4d3e-9d2d-411076ab11c6" (UID: "29262d41-4dc9-4d3e-9d2d-411076ab11c6"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:24:13 crc kubenswrapper[4830]: I0131 09:24:13.572008 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "29262d41-4dc9-4d3e-9d2d-411076ab11c6" (UID: "29262d41-4dc9-4d3e-9d2d-411076ab11c6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:24:13 crc kubenswrapper[4830]: I0131 09:24:13.572012 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29262d41-4dc9-4d3e-9d2d-411076ab11c6-kube-api-access-2786z" (OuterVolumeSpecName: "kube-api-access-2786z") pod "29262d41-4dc9-4d3e-9d2d-411076ab11c6" (UID: "29262d41-4dc9-4d3e-9d2d-411076ab11c6"). InnerVolumeSpecName "kube-api-access-2786z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:24:13 crc kubenswrapper[4830]: I0131 09:24:13.571952 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-scripts" (OuterVolumeSpecName: "scripts") pod "29262d41-4dc9-4d3e-9d2d-411076ab11c6" (UID: "29262d41-4dc9-4d3e-9d2d-411076ab11c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:24:13 crc kubenswrapper[4830]: I0131 09:24:13.595830 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-config-data" (OuterVolumeSpecName: "config-data") pod "29262d41-4dc9-4d3e-9d2d-411076ab11c6" (UID: "29262d41-4dc9-4d3e-9d2d-411076ab11c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:24:13 crc kubenswrapper[4830]: I0131 09:24:13.607250 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29262d41-4dc9-4d3e-9d2d-411076ab11c6" (UID: "29262d41-4dc9-4d3e-9d2d-411076ab11c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:24:13 crc kubenswrapper[4830]: I0131 09:24:13.659098 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:13 crc kubenswrapper[4830]: I0131 09:24:13.659157 4830 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:13 crc kubenswrapper[4830]: I0131 09:24:13.659170 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2786z\" (UniqueName: \"kubernetes.io/projected/29262d41-4dc9-4d3e-9d2d-411076ab11c6-kube-api-access-2786z\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:13 crc kubenswrapper[4830]: I0131 09:24:13.659185 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:13 crc kubenswrapper[4830]: I0131 09:24:13.659197 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:13 crc kubenswrapper[4830]: I0131 09:24:13.659210 4830 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29262d41-4dc9-4d3e-9d2d-411076ab11c6-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:13 crc kubenswrapper[4830]: I0131 09:24:13.926942 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-blgpz" Jan 31 09:24:13 crc kubenswrapper[4830]: I0131 09:24:13.926816 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-blgpz" event={"ID":"29262d41-4dc9-4d3e-9d2d-411076ab11c6","Type":"ContainerDied","Data":"15afe595d7a6c43fa8ff8fb7860a492ad92ac26c8a052d95f7dca31b089d702e"} Jan 31 09:24:13 crc kubenswrapper[4830]: I0131 09:24:13.931534 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15afe595d7a6c43fa8ff8fb7860a492ad92ac26c8a052d95f7dca31b089d702e" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.539648 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-blgpz"] Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.556468 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-blgpz"] Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.633794 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-t2pjz"] Jan 31 09:24:14 crc kubenswrapper[4830]: E0131 09:24:14.634640 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29262d41-4dc9-4d3e-9d2d-411076ab11c6" containerName="keystone-bootstrap" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.634667 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="29262d41-4dc9-4d3e-9d2d-411076ab11c6" containerName="keystone-bootstrap" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.635020 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="29262d41-4dc9-4d3e-9d2d-411076ab11c6" containerName="keystone-bootstrap" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.636167 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-t2pjz" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.639514 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.639970 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.640119 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.640351 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r84d8" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.640527 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.648098 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-t2pjz"] Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.789384 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-scripts\") pod \"keystone-bootstrap-t2pjz\" (UID: \"a9974259-ec4c-411a-ba74-95664c116f34\") " pod="openstack/keystone-bootstrap-t2pjz" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.789953 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l7s2\" (UniqueName: \"kubernetes.io/projected/a9974259-ec4c-411a-ba74-95664c116f34-kube-api-access-4l7s2\") pod \"keystone-bootstrap-t2pjz\" (UID: \"a9974259-ec4c-411a-ba74-95664c116f34\") " pod="openstack/keystone-bootstrap-t2pjz" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.789994 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-credential-keys\") pod \"keystone-bootstrap-t2pjz\" (UID: \"a9974259-ec4c-411a-ba74-95664c116f34\") " pod="openstack/keystone-bootstrap-t2pjz" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.790059 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-combined-ca-bundle\") pod \"keystone-bootstrap-t2pjz\" (UID: \"a9974259-ec4c-411a-ba74-95664c116f34\") " pod="openstack/keystone-bootstrap-t2pjz" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.790151 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-config-data\") pod \"keystone-bootstrap-t2pjz\" (UID: \"a9974259-ec4c-411a-ba74-95664c116f34\") " pod="openstack/keystone-bootstrap-t2pjz" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.790343 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-fernet-keys\") pod \"keystone-bootstrap-t2pjz\" (UID: \"a9974259-ec4c-411a-ba74-95664c116f34\") " pod="openstack/keystone-bootstrap-t2pjz" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.892612 4830 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-fernet-keys\") pod \"keystone-bootstrap-t2pjz\" (UID: \"a9974259-ec4c-411a-ba74-95664c116f34\") " pod="openstack/keystone-bootstrap-t2pjz" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.892773 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-scripts\") pod \"keystone-bootstrap-t2pjz\" (UID: \"a9974259-ec4c-411a-ba74-95664c116f34\") " pod="openstack/keystone-bootstrap-t2pjz" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.892817 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l7s2\" (UniqueName: \"kubernetes.io/projected/a9974259-ec4c-411a-ba74-95664c116f34-kube-api-access-4l7s2\") pod \"keystone-bootstrap-t2pjz\" (UID: \"a9974259-ec4c-411a-ba74-95664c116f34\") " pod="openstack/keystone-bootstrap-t2pjz" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.892856 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-credential-keys\") pod \"keystone-bootstrap-t2pjz\" (UID: \"a9974259-ec4c-411a-ba74-95664c116f34\") " pod="openstack/keystone-bootstrap-t2pjz" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.892929 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-combined-ca-bundle\") pod \"keystone-bootstrap-t2pjz\" (UID: \"a9974259-ec4c-411a-ba74-95664c116f34\") " pod="openstack/keystone-bootstrap-t2pjz" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.893015 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-config-data\") pod \"keystone-bootstrap-t2pjz\" (UID: \"a9974259-ec4c-411a-ba74-95664c116f34\") " pod="openstack/keystone-bootstrap-t2pjz" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.900527 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-fernet-keys\") pod \"keystone-bootstrap-t2pjz\" (UID: \"a9974259-ec4c-411a-ba74-95664c116f34\") " pod="openstack/keystone-bootstrap-t2pjz" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.901490 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-combined-ca-bundle\") pod \"keystone-bootstrap-t2pjz\" (UID: \"a9974259-ec4c-411a-ba74-95664c116f34\") " pod="openstack/keystone-bootstrap-t2pjz" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.901818 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-credential-keys\") pod \"keystone-bootstrap-t2pjz\" (UID: \"a9974259-ec4c-411a-ba74-95664c116f34\") " pod="openstack/keystone-bootstrap-t2pjz" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.901921 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-config-data\") pod \"keystone-bootstrap-t2pjz\" (UID: 
\"a9974259-ec4c-411a-ba74-95664c116f34\") " pod="openstack/keystone-bootstrap-t2pjz" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.902317 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-scripts\") pod \"keystone-bootstrap-t2pjz\" (UID: \"a9974259-ec4c-411a-ba74-95664c116f34\") " pod="openstack/keystone-bootstrap-t2pjz" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.916462 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l7s2\" (UniqueName: \"kubernetes.io/projected/a9974259-ec4c-411a-ba74-95664c116f34-kube-api-access-4l7s2\") pod \"keystone-bootstrap-t2pjz\" (UID: \"a9974259-ec4c-411a-ba74-95664c116f34\") " pod="openstack/keystone-bootstrap-t2pjz" Jan 31 09:24:14 crc kubenswrapper[4830]: I0131 09:24:14.980679 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-t2pjz" Jan 31 09:24:16 crc kubenswrapper[4830]: I0131 09:24:16.274503 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29262d41-4dc9-4d3e-9d2d-411076ab11c6" path="/var/lib/kubelet/pods/29262d41-4dc9-4d3e-9d2d-411076ab11c6/volumes" Jan 31 09:24:22 crc kubenswrapper[4830]: I0131 09:24:22.035327 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" podUID="78be811e-7bfb-400f-9e75-b2853dc051bd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.167:5353: i/o timeout" Jan 31 09:24:22 crc kubenswrapper[4830]: I0131 09:24:22.789331 4830 scope.go:117] "RemoveContainer" containerID="5ae40eff30442b5c5780e452596934db197d847ee4785925d6a706ebb8ecf683" Jan 31 09:24:22 crc kubenswrapper[4830]: I0131 09:24:22.928761 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" Jan 31 09:24:23 crc kubenswrapper[4830]: I0131 09:24:23.026688 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-dns-svc\") pod \"78be811e-7bfb-400f-9e75-b2853dc051bd\" (UID: \"78be811e-7bfb-400f-9e75-b2853dc051bd\") " Jan 31 09:24:23 crc kubenswrapper[4830]: I0131 09:24:23.026757 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-config\") pod \"78be811e-7bfb-400f-9e75-b2853dc051bd\" (UID: \"78be811e-7bfb-400f-9e75-b2853dc051bd\") " Jan 31 09:24:23 crc kubenswrapper[4830]: I0131 09:24:23.026910 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-ovsdbserver-sb\") pod \"78be811e-7bfb-400f-9e75-b2853dc051bd\" (UID: \"78be811e-7bfb-400f-9e75-b2853dc051bd\") " Jan 31 09:24:23 crc kubenswrapper[4830]: I0131 09:24:23.027032 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-dns-swift-storage-0\") pod \"78be811e-7bfb-400f-9e75-b2853dc051bd\" (UID: \"78be811e-7bfb-400f-9e75-b2853dc051bd\") " Jan 31 09:24:23 crc kubenswrapper[4830]: I0131 09:24:23.027098 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-ovsdbserver-nb\") pod \"78be811e-7bfb-400f-9e75-b2853dc051bd\" (UID: \"78be811e-7bfb-400f-9e75-b2853dc051bd\") " Jan 31 09:24:23 crc kubenswrapper[4830]: I0131 09:24:23.027154 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8xhh\" (UniqueName: \"kubernetes.io/projected/78be811e-7bfb-400f-9e75-b2853dc051bd-kube-api-access-h8xhh\") pod \"78be811e-7bfb-400f-9e75-b2853dc051bd\" (UID: \"78be811e-7bfb-400f-9e75-b2853dc051bd\") " Jan 31 09:24:23 crc kubenswrapper[4830]: I0131 09:24:23.038020 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78be811e-7bfb-400f-9e75-b2853dc051bd-kube-api-access-h8xhh" (OuterVolumeSpecName: "kube-api-access-h8xhh") pod "78be811e-7bfb-400f-9e75-b2853dc051bd" (UID: "78be811e-7bfb-400f-9e75-b2853dc051bd"). InnerVolumeSpecName "kube-api-access-h8xhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:24:23 crc kubenswrapper[4830]: I0131 09:24:23.048968 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" event={"ID":"78be811e-7bfb-400f-9e75-b2853dc051bd","Type":"ContainerDied","Data":"a0eaaa87ac3f539f94453eae7e1519c3d57257cf4cec2c117d948deae1dc7619"} Jan 31 09:24:23 crc kubenswrapper[4830]: I0131 09:24:23.049103 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" Jan 31 09:24:23 crc kubenswrapper[4830]: I0131 09:24:23.094118 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "78be811e-7bfb-400f-9e75-b2853dc051bd" (UID: "78be811e-7bfb-400f-9e75-b2853dc051bd"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:24:23 crc kubenswrapper[4830]: I0131 09:24:23.108921 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "78be811e-7bfb-400f-9e75-b2853dc051bd" (UID: "78be811e-7bfb-400f-9e75-b2853dc051bd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:24:23 crc kubenswrapper[4830]: I0131 09:24:23.112526 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "78be811e-7bfb-400f-9e75-b2853dc051bd" (UID: "78be811e-7bfb-400f-9e75-b2853dc051bd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:24:23 crc kubenswrapper[4830]: I0131 09:24:23.116177 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-config" (OuterVolumeSpecName: "config") pod "78be811e-7bfb-400f-9e75-b2853dc051bd" (UID: "78be811e-7bfb-400f-9e75-b2853dc051bd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:24:23 crc kubenswrapper[4830]: I0131 09:24:23.116391 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "78be811e-7bfb-400f-9e75-b2853dc051bd" (UID: "78be811e-7bfb-400f-9e75-b2853dc051bd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:24:23 crc kubenswrapper[4830]: I0131 09:24:23.130788 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:23 crc kubenswrapper[4830]: I0131 09:24:23.130829 4830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:23 crc kubenswrapper[4830]: I0131 09:24:23.130843 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:23 crc kubenswrapper[4830]: I0131 09:24:23.130859 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8xhh\" (UniqueName: \"kubernetes.io/projected/78be811e-7bfb-400f-9e75-b2853dc051bd-kube-api-access-h8xhh\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:23 crc kubenswrapper[4830]: I0131 09:24:23.130873 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:23 crc kubenswrapper[4830]: I0131 09:24:23.130884 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be811e-7bfb-400f-9e75-b2853dc051bd-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:23 crc kubenswrapper[4830]: I0131 09:24:23.393471 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-77585f5f8c-v7bq9"] Jan 31 09:24:23 crc kubenswrapper[4830]: I0131 09:24:23.408495 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-v7bq9"] Jan 31 09:24:24 crc kubenswrapper[4830]: I0131 09:24:24.268881 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78be811e-7bfb-400f-9e75-b2853dc051bd" path="/var/lib/kubelet/pods/78be811e-7bfb-400f-9e75-b2853dc051bd/volumes" Jan 31 09:24:27 crc kubenswrapper[4830]: I0131 09:24:27.037438 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-v7bq9" podUID="78be811e-7bfb-400f-9e75-b2853dc051bd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.167:5353: i/o timeout" Jan 31 09:24:29 crc kubenswrapper[4830]: I0131 09:24:29.473866 4830 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podcc653318-d8c5-4663-90ef-38b8f4b19275"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podcc653318-d8c5-4663-90ef-38b8f4b19275] : Timed out while waiting for systemd to remove kubepods-besteffort-podcc653318_d8c5_4663_90ef_38b8f4b19275.slice" Jan 31 09:24:29 crc kubenswrapper[4830]: E0131 09:24:29.474302 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podcc653318-d8c5-4663-90ef-38b8f4b19275] : unable to destroy cgroup paths for cgroup [kubepods besteffort podcc653318-d8c5-4663-90ef-38b8f4b19275] : Timed out while waiting for systemd to remove kubepods-besteffort-podcc653318_d8c5_4663_90ef_38b8f4b19275.slice" pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" podUID="cc653318-d8c5-4663-90ef-38b8f4b19275" Jan 31 09:24:30 crc kubenswrapper[4830]: I0131 09:24:30.143158 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-j5dc2" Jan 31 09:24:30 crc kubenswrapper[4830]: I0131 09:24:30.199645 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-j5dc2"] Jan 31 09:24:30 crc kubenswrapper[4830]: I0131 09:24:30.209788 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-j5dc2"] Jan 31 09:24:30 crc kubenswrapper[4830]: I0131 09:24:30.270091 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc653318-d8c5-4663-90ef-38b8f4b19275" path="/var/lib/kubelet/pods/cc653318-d8c5-4663-90ef-38b8f4b19275/volumes" Jan 31 09:24:32 crc kubenswrapper[4830]: E0131 09:24:32.150619 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 31 09:24:32 crc kubenswrapper[4830]: E0131 09:24:32.151493 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n94h5f4h699h8dh565h5d6h88h549h5dch685hb9h64ch58ch7chddh54fhd9h655h66fh5f7h96h564h5b6h675h5b9h79h577h56h55chbfh675h577q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwkxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(39688f84-c227-4658-aee1-ce5e5d450ca1): ErrImagePull: rpc error: code = Canceled desc = copying config: context 
canceled" logger="UnhandledError" Jan 31 09:24:32 crc kubenswrapper[4830]: E0131 09:24:32.476574 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Jan 31 09:24:32 crc kubenswrapper[4830]: E0131 09:24:32.476805 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bh7dj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-hh79w_openstack(6324b6ba-4288-44f4-bf87-1a4356c1a9f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:24:32 crc kubenswrapper[4830]: E0131 09:24:32.478776 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-hh79w" podUID="6324b6ba-4288-44f4-bf87-1a4356c1a9f0" Jan 31 09:24:33 crc kubenswrapper[4830]: I0131 09:24:33.119191 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 09:24:33 crc kubenswrapper[4830]: I0131 09:24:33.190516 4830 generic.go:334] "Generic (PLEG): container finished" podID="ce550202-087a-49b1-8796-10f03f0ab9be" containerID="c9fb7d799f1d6dd9a5876fd3363ab7922287e7e766c564c9787e2b2952eb9668" exitCode=0 Jan 31 09:24:33 crc kubenswrapper[4830]: I0131 09:24:33.190622 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ztgnf" 
event={"ID":"ce550202-087a-49b1-8796-10f03f0ab9be","Type":"ContainerDied","Data":"c9fb7d799f1d6dd9a5876fd3363ab7922287e7e766c564c9787e2b2952eb9668"} Jan 31 09:24:33 crc kubenswrapper[4830]: E0131 09:24:33.193551 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-hh79w" podUID="6324b6ba-4288-44f4-bf87-1a4356c1a9f0" Jan 31 09:24:33 crc kubenswrapper[4830]: W0131 09:24:33.951400 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff4e5fbc_7e45_42b7_8af6_ff34b36bb594.slice/crio-84929e956f010095fa7add1e8963285c7075dd230723bdf7490ef2a4f736eb55 WatchSource:0}: Error finding container 84929e956f010095fa7add1e8963285c7075dd230723bdf7490ef2a4f736eb55: Status 404 returned error can't find the container with id 84929e956f010095fa7add1e8963285c7075dd230723bdf7490ef2a4f736eb55 Jan 31 09:24:33 crc kubenswrapper[4830]: I0131 09:24:33.965775 4830 scope.go:117] "RemoveContainer" containerID="a431904c615c8eab3f850504c49e2d9ad100a3fa1f1f5f56c5c038f7f2641a8f" Jan 31 09:24:33 crc kubenswrapper[4830]: E0131 09:24:33.996467 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 31 09:24:33 crc kubenswrapper[4830]: E0131 09:24:33.996627 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q5w5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-w6kxz_openstack(0617092f-40a9-4d3d-b472-f284a2b24000): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:24:33 crc kubenswrapper[4830]: E0131 09:24:33.997833 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-w6kxz" podUID="0617092f-40a9-4d3d-b472-f284a2b24000" Jan 31 09:24:34 crc kubenswrapper[4830]: I0131 09:24:34.246967 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594","Type":"ContainerStarted","Data":"84929e956f010095fa7add1e8963285c7075dd230723bdf7490ef2a4f736eb55"} Jan 31 09:24:34 crc kubenswrapper[4830]: E0131 09:24:34.368040 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-w6kxz" podUID="0617092f-40a9-4d3d-b472-f284a2b24000" Jan 31 09:24:34 crc kubenswrapper[4830]: I0131 09:24:34.368445 4830 scope.go:117] "RemoveContainer" 
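For both heat-db-sync and cinder-db-sync the pattern is the same: a pull fails with ErrImagePull ("context canceled"), and the next sync is rejected with ImagePullBackOff while the kubelet waits out a per-image backoff. Assuming the commonly documented defaults (roughly a 10s initial delay that doubles per consecutive failure and caps at 5 minutes; this node's actual settings are not visible in the log), the successive waits would look like the sketch below.

```go
package main

import (
	"fmt"
	"time"
)

// backoffDelays lists successive waits for an exponential backoff using
// the commonly cited image-pull defaults (10s base, doubling, 5m cap).
// The constants are an assumption for illustration, not read from this
// cluster's configuration.
func backoffDelays(base, maxDelay time.Duration, n int) []time.Duration {
	delays := make([]time.Duration, 0, n)
	d := base
	for i := 0; i < n; i++ {
		delays = append(delays, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
	return delays
}

func main() {
	fmt.Println(backoffDelays(10*time.Second, 5*time.Minute, 7))
	// [10s 20s 40s 1m20s 2m40s 5m0s 5m0s]
}
```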
containerID="9ad64aa0976e8d861bc684c8ab460f86d03eb1ae0e1dc4fe39e49e703048b4b8" Jan 31 09:24:34 crc kubenswrapper[4830]: I0131 09:24:34.456588 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-t2pjz"] Jan 31 09:24:34 crc kubenswrapper[4830]: I0131 09:24:34.941052 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 09:24:34 crc kubenswrapper[4830]: I0131 09:24:34.969454 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ztgnf" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.033569 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce550202-087a-49b1-8796-10f03f0ab9be-combined-ca-bundle\") pod \"ce550202-087a-49b1-8796-10f03f0ab9be\" (UID: \"ce550202-087a-49b1-8796-10f03f0ab9be\") " Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.033888 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ce550202-087a-49b1-8796-10f03f0ab9be-config\") pod \"ce550202-087a-49b1-8796-10f03f0ab9be\" (UID: \"ce550202-087a-49b1-8796-10f03f0ab9be\") " Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.034216 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbzp5\" (UniqueName: \"kubernetes.io/projected/ce550202-087a-49b1-8796-10f03f0ab9be-kube-api-access-mbzp5\") pod \"ce550202-087a-49b1-8796-10f03f0ab9be\" (UID: \"ce550202-087a-49b1-8796-10f03f0ab9be\") " Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.049006 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce550202-087a-49b1-8796-10f03f0ab9be-kube-api-access-mbzp5" (OuterVolumeSpecName: "kube-api-access-mbzp5") pod "ce550202-087a-49b1-8796-10f03f0ab9be" (UID: "ce550202-087a-49b1-8796-10f03f0ab9be"). InnerVolumeSpecName "kube-api-access-mbzp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.073063 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce550202-087a-49b1-8796-10f03f0ab9be-config" (OuterVolumeSpecName: "config") pod "ce550202-087a-49b1-8796-10f03f0ab9be" (UID: "ce550202-087a-49b1-8796-10f03f0ab9be"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.091553 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce550202-087a-49b1-8796-10f03f0ab9be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce550202-087a-49b1-8796-10f03f0ab9be" (UID: "ce550202-087a-49b1-8796-10f03f0ab9be"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.138429 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbzp5\" (UniqueName: \"kubernetes.io/projected/ce550202-087a-49b1-8796-10f03f0ab9be-kube-api-access-mbzp5\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.138962 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce550202-087a-49b1-8796-10f03f0ab9be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.138974 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ce550202-087a-49b1-8796-10f03f0ab9be-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.293404 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0ec35101-03e3-421d-8799-a7a0b1864b9b","Type":"ContainerStarted","Data":"9cbfffa7f21453dce27e8b4d361906ec1abf2d13f356ffc985e453344268c914"} Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.302857 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t2klw" event={"ID":"b8de8318-1eda-43cc-b522-86d6492c6376","Type":"ContainerStarted","Data":"296c2ca9ab37f0c52f42055edffd7d00fb9c21ff74b16a10dc1b092b5ca95878"} Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.314262 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t2pjz" event={"ID":"a9974259-ec4c-411a-ba74-95664c116f34","Type":"ContainerStarted","Data":"322baa3e2f6855883d523ac77dd9bbb2e8423fc4f8a4b4da22034d570a99227b"} Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.314342 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t2pjz" event={"ID":"a9974259-ec4c-411a-ba74-95664c116f34","Type":"ContainerStarted","Data":"18ce2ca5e38335d576138cda0fb7a63556aeca53db1c0092f15ac3f34a5b2400"} Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.334075 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pr2kp" event={"ID":"bb9aed03-7e56-43de-92fc-3ac6352194af","Type":"ContainerStarted","Data":"e10402784224c0a8f234f5cbb87b74b8d71f6b4e4370aa3a10b8d8b1768b3e70"} Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.351105 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ztgnf" event={"ID":"ce550202-087a-49b1-8796-10f03f0ab9be","Type":"ContainerDied","Data":"4320145e44df1fe2533a87c4c9a6e7205107bda4b8ad5460e91ed39ba5678570"} Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.351159 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4320145e44df1fe2533a87c4c9a6e7205107bda4b8ad5460e91ed39ba5678570" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.351227 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-ztgnf" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.356112 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-t2klw" podStartSLOduration=6.934475751 podStartE2EDuration="39.356080013s" podCreationTimestamp="2026-01-31 09:23:56 +0000 UTC" firstStartedPulling="2026-01-31 09:24:00.029297846 +0000 UTC m=+1384.522660288" lastFinishedPulling="2026-01-31 09:24:32.450902108 +0000 UTC m=+1416.944264550" observedRunningTime="2026-01-31 09:24:35.333271994 +0000 UTC m=+1419.826634436" watchObservedRunningTime="2026-01-31 09:24:35.356080013 +0000 UTC m=+1419.849442455" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.407317 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-t2pjz" podStartSLOduration=21.407292649 podStartE2EDuration="21.407292649s" podCreationTimestamp="2026-01-31 09:24:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:24:35.362863946 +0000 UTC m=+1419.856226398" watchObservedRunningTime="2026-01-31 09:24:35.407292649 +0000 UTC m=+1419.900655081" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.422639 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-pr2kp" podStartSLOduration=6.180596988 podStartE2EDuration="39.422618884s" podCreationTimestamp="2026-01-31 09:23:56 +0000 UTC" firstStartedPulling="2026-01-31 09:23:59.225441353 +0000 UTC m=+1383.718803795" lastFinishedPulling="2026-01-31 09:24:32.467463249 +0000 UTC m=+1416.960825691" observedRunningTime="2026-01-31 09:24:35.388474424 +0000 UTC m=+1419.881836866" watchObservedRunningTime="2026-01-31 09:24:35.422618884 +0000 UTC m=+1419.915981326" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.608844 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-w98lj"] Jan 31 09:24:35 crc kubenswrapper[4830]: E0131 09:24:35.611114 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce550202-087a-49b1-8796-10f03f0ab9be" containerName="neutron-db-sync" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.611145 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce550202-087a-49b1-8796-10f03f0ab9be" containerName="neutron-db-sync" Jan 31 09:24:35 crc kubenswrapper[4830]: E0131 09:24:35.611182 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78be811e-7bfb-400f-9e75-b2853dc051bd" containerName="dnsmasq-dns" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.611189 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="78be811e-7bfb-400f-9e75-b2853dc051bd" containerName="dnsmasq-dns" Jan 31 09:24:35 crc kubenswrapper[4830]: E0131 09:24:35.611206 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78be811e-7bfb-400f-9e75-b2853dc051bd" containerName="init" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.611212 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="78be811e-7bfb-400f-9e75-b2853dc051bd" containerName="init" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.611424 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="78be811e-7bfb-400f-9e75-b2853dc051bd" containerName="dnsmasq-dns" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.611445 4830 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ce550202-087a-49b1-8796-10f03f0ab9be" containerName="neutron-db-sync" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.615344 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.645116 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-w98lj"] Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.670380 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-w98lj\" (UID: \"d473ad04-829e-427a-81a8-68d368eb9cfc\") " pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.670472 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjzmj\" (UniqueName: \"kubernetes.io/projected/d473ad04-829e-427a-81a8-68d368eb9cfc-kube-api-access-mjzmj\") pod \"dnsmasq-dns-84b966f6c9-w98lj\" (UID: \"d473ad04-829e-427a-81a8-68d368eb9cfc\") " pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.670588 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-w98lj\" (UID: \"d473ad04-829e-427a-81a8-68d368eb9cfc\") " pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.670617 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-w98lj\" (UID: \"d473ad04-829e-427a-81a8-68d368eb9cfc\") " pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.670647 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-config\") pod \"dnsmasq-dns-84b966f6c9-w98lj\" (UID: \"d473ad04-829e-427a-81a8-68d368eb9cfc\") " pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.670668 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-w98lj\" (UID: \"d473ad04-829e-427a-81a8-68d368eb9cfc\") " pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.769211 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6cd8b566d4-4q75x"] Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.774469 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-w98lj\" (UID: \"d473ad04-829e-427a-81a8-68d368eb9cfc\") " pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.774527 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-w98lj\" (UID: \"d473ad04-829e-427a-81a8-68d368eb9cfc\") " pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.774556 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-config\") pod \"dnsmasq-dns-84b966f6c9-w98lj\" (UID: \"d473ad04-829e-427a-81a8-68d368eb9cfc\") " pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.774583 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-w98lj\" (UID: \"d473ad04-829e-427a-81a8-68d368eb9cfc\") " pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.774658 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-w98lj\" (UID: \"d473ad04-829e-427a-81a8-68d368eb9cfc\") " pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.774711 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjzmj\" (UniqueName: \"kubernetes.io/projected/d473ad04-829e-427a-81a8-68d368eb9cfc-kube-api-access-mjzmj\") pod \"dnsmasq-dns-84b966f6c9-w98lj\" (UID: \"d473ad04-829e-427a-81a8-68d368eb9cfc\") " pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.775745 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-w98lj\" (UID: \"d473ad04-829e-427a-81a8-68d368eb9cfc\") " pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.776205 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-config\") pod \"dnsmasq-dns-84b966f6c9-w98lj\" (UID: \"d473ad04-829e-427a-81a8-68d368eb9cfc\") " pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.776322 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-w98lj\" (UID: \"d473ad04-829e-427a-81a8-68d368eb9cfc\") " pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.779159 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-w98lj\" (UID: \"d473ad04-829e-427a-81a8-68d368eb9cfc\") " pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.781052 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-ovsdbserver-sb\") 
pod \"dnsmasq-dns-84b966f6c9-w98lj\" (UID: \"d473ad04-829e-427a-81a8-68d368eb9cfc\") " pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.781398 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6cd8b566d4-4q75x" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.793646 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.793923 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.794071 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.794239 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-brn7t" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.797532 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6cd8b566d4-4q75x"] Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.806460 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjzmj\" (UniqueName: \"kubernetes.io/projected/d473ad04-829e-427a-81a8-68d368eb9cfc-kube-api-access-mjzmj\") pod \"dnsmasq-dns-84b966f6c9-w98lj\" (UID: \"d473ad04-829e-427a-81a8-68d368eb9cfc\") " pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.886709 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/74254e68-cbf8-446e-a2d8-768185ec778f-httpd-config\") pod \"neutron-6cd8b566d4-4q75x\" (UID: \"74254e68-cbf8-446e-a2d8-768185ec778f\") " pod="openstack/neutron-6cd8b566d4-4q75x" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.886966 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74254e68-cbf8-446e-a2d8-768185ec778f-combined-ca-bundle\") pod \"neutron-6cd8b566d4-4q75x\" (UID: \"74254e68-cbf8-446e-a2d8-768185ec778f\") " pod="openstack/neutron-6cd8b566d4-4q75x" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.887262 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/74254e68-cbf8-446e-a2d8-768185ec778f-ovndb-tls-certs\") pod \"neutron-6cd8b566d4-4q75x\" (UID: \"74254e68-cbf8-446e-a2d8-768185ec778f\") " pod="openstack/neutron-6cd8b566d4-4q75x" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.887800 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrmgs\" (UniqueName: \"kubernetes.io/projected/74254e68-cbf8-446e-a2d8-768185ec778f-kube-api-access-qrmgs\") pod \"neutron-6cd8b566d4-4q75x\" (UID: \"74254e68-cbf8-446e-a2d8-768185ec778f\") " pod="openstack/neutron-6cd8b566d4-4q75x" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.888822 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/74254e68-cbf8-446e-a2d8-768185ec778f-config\") pod \"neutron-6cd8b566d4-4q75x\" (UID: \"74254e68-cbf8-446e-a2d8-768185ec778f\") " pod="openstack/neutron-6cd8b566d4-4q75x" Jan 31 09:24:35 crc 
kubenswrapper[4830]: I0131 09:24:35.998452 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/74254e68-cbf8-446e-a2d8-768185ec778f-ovndb-tls-certs\") pod \"neutron-6cd8b566d4-4q75x\" (UID: \"74254e68-cbf8-446e-a2d8-768185ec778f\") " pod="openstack/neutron-6cd8b566d4-4q75x" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.998924 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrmgs\" (UniqueName: \"kubernetes.io/projected/74254e68-cbf8-446e-a2d8-768185ec778f-kube-api-access-qrmgs\") pod \"neutron-6cd8b566d4-4q75x\" (UID: \"74254e68-cbf8-446e-a2d8-768185ec778f\") " pod="openstack/neutron-6cd8b566d4-4q75x" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.999164 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/74254e68-cbf8-446e-a2d8-768185ec778f-config\") pod \"neutron-6cd8b566d4-4q75x\" (UID: \"74254e68-cbf8-446e-a2d8-768185ec778f\") " pod="openstack/neutron-6cd8b566d4-4q75x" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.999299 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/74254e68-cbf8-446e-a2d8-768185ec778f-httpd-config\") pod \"neutron-6cd8b566d4-4q75x\" (UID: \"74254e68-cbf8-446e-a2d8-768185ec778f\") " pod="openstack/neutron-6cd8b566d4-4q75x" Jan 31 09:24:35 crc kubenswrapper[4830]: I0131 09:24:35.999420 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74254e68-cbf8-446e-a2d8-768185ec778f-combined-ca-bundle\") pod \"neutron-6cd8b566d4-4q75x\" (UID: \"74254e68-cbf8-446e-a2d8-768185ec778f\") " pod="openstack/neutron-6cd8b566d4-4q75x" Jan 31 09:24:36 crc kubenswrapper[4830]: I0131 09:24:35.998792 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" Jan 31 09:24:36 crc kubenswrapper[4830]: I0131 09:24:36.005490 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74254e68-cbf8-446e-a2d8-768185ec778f-combined-ca-bundle\") pod \"neutron-6cd8b566d4-4q75x\" (UID: \"74254e68-cbf8-446e-a2d8-768185ec778f\") " pod="openstack/neutron-6cd8b566d4-4q75x" Jan 31 09:24:36 crc kubenswrapper[4830]: I0131 09:24:36.008230 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/74254e68-cbf8-446e-a2d8-768185ec778f-httpd-config\") pod \"neutron-6cd8b566d4-4q75x\" (UID: \"74254e68-cbf8-446e-a2d8-768185ec778f\") " pod="openstack/neutron-6cd8b566d4-4q75x" Jan 31 09:24:36 crc kubenswrapper[4830]: I0131 09:24:36.011100 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/74254e68-cbf8-446e-a2d8-768185ec778f-config\") pod \"neutron-6cd8b566d4-4q75x\" (UID: \"74254e68-cbf8-446e-a2d8-768185ec778f\") " pod="openstack/neutron-6cd8b566d4-4q75x" Jan 31 09:24:36 crc kubenswrapper[4830]: I0131 09:24:36.017502 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/74254e68-cbf8-446e-a2d8-768185ec778f-ovndb-tls-certs\") pod \"neutron-6cd8b566d4-4q75x\" (UID: \"74254e68-cbf8-446e-a2d8-768185ec778f\") " pod="openstack/neutron-6cd8b566d4-4q75x" Jan 31 09:24:36 crc kubenswrapper[4830]: I0131 09:24:36.053573 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrmgs\" (UniqueName: \"kubernetes.io/projected/74254e68-cbf8-446e-a2d8-768185ec778f-kube-api-access-qrmgs\") pod \"neutron-6cd8b566d4-4q75x\" (UID: \"74254e68-cbf8-446e-a2d8-768185ec778f\") " pod="openstack/neutron-6cd8b566d4-4q75x" Jan 31 09:24:36 crc kubenswrapper[4830]: I0131 09:24:36.148582 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6cd8b566d4-4q75x" Jan 31 09:24:36 crc kubenswrapper[4830]: I0131 09:24:36.435976 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594","Type":"ContainerStarted","Data":"7cac1cf1ee9f45c0bb4a831735025c733474ea6d0e388d5961a9e10c557f87a2"} Jan 31 09:24:36 crc kubenswrapper[4830]: I0131 09:24:36.444290 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0ec35101-03e3-421d-8799-a7a0b1864b9b","Type":"ContainerStarted","Data":"e8d65d3b6016cd3404d2d4f61f24bafc19156c82cbfa0497b23e98f7f4e9893e"} Jan 31 09:24:36 crc kubenswrapper[4830]: I0131 09:24:36.883107 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-w98lj"] Jan 31 09:24:37 crc kubenswrapper[4830]: I0131 09:24:37.495566 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594","Type":"ContainerStarted","Data":"c0f6af8c9ac8c455376ada0690cecdd3516ef4b1a4487609897ae6527c19432d"} Jan 31 09:24:37 crc kubenswrapper[4830]: I0131 09:24:37.551557 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=31.551533481 podStartE2EDuration="31.551533481s" podCreationTimestamp="2026-01-31 09:24:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:24:37.535191936 +0000 UTC m=+1422.028554398" watchObservedRunningTime="2026-01-31 09:24:37.551533481 +0000 UTC m=+1422.044895923" Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.536475 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39688f84-c227-4658-aee1-ce5e5d450ca1","Type":"ContainerStarted","Data":"6c78395d815c0f304dabbb72d124784561343be071e34588d43374ea0a8c7ab6"} Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.543975 4830 generic.go:334] "Generic (PLEG): container finished" podID="d473ad04-829e-427a-81a8-68d368eb9cfc" containerID="7b1ca9a60b825f69e176739217c4eff0340e3928f32495c4796519075ea2277f" exitCode=0 Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.545157 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" event={"ID":"d473ad04-829e-427a-81a8-68d368eb9cfc","Type":"ContainerDied","Data":"7b1ca9a60b825f69e176739217c4eff0340e3928f32495c4796519075ea2277f"} Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.545192 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" event={"ID":"d473ad04-829e-427a-81a8-68d368eb9cfc","Type":"ContainerStarted","Data":"07d3878cff7371eec5fc1af43e88f699f6df355a200e7c3966fdbb978e8c3520"} Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.656434 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-59d6cd4869-w2rrr"] Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.658911 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59d6cd4869-w2rrr" Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.661837 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.662081 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.666653 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59d6cd4869-w2rrr"] Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.755510 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-config\") pod \"neutron-59d6cd4869-w2rrr\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " pod="openstack/neutron-59d6cd4869-w2rrr" Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.756922 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-combined-ca-bundle\") pod \"neutron-59d6cd4869-w2rrr\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " pod="openstack/neutron-59d6cd4869-w2rrr" Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.757018 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-public-tls-certs\") pod \"neutron-59d6cd4869-w2rrr\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " pod="openstack/neutron-59d6cd4869-w2rrr" Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.757284 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-httpd-config\") pod \"neutron-59d6cd4869-w2rrr\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " pod="openstack/neutron-59d6cd4869-w2rrr" Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.757388 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-ovndb-tls-certs\") pod \"neutron-59d6cd4869-w2rrr\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " pod="openstack/neutron-59d6cd4869-w2rrr" Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.757498 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-internal-tls-certs\") pod \"neutron-59d6cd4869-w2rrr\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " pod="openstack/neutron-59d6cd4869-w2rrr" Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.757703 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nxhr\" (UniqueName: \"kubernetes.io/projected/9404af59-7e12-483b-90d0-9ebdc4140cc2-kube-api-access-8nxhr\") pod \"neutron-59d6cd4869-w2rrr\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " pod="openstack/neutron-59d6cd4869-w2rrr" Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.817145 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6cd8b566d4-4q75x"] Jan 31 09:24:38 crc 
kubenswrapper[4830]: I0131 09:24:38.860870 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nxhr\" (UniqueName: \"kubernetes.io/projected/9404af59-7e12-483b-90d0-9ebdc4140cc2-kube-api-access-8nxhr\") pod \"neutron-59d6cd4869-w2rrr\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " pod="openstack/neutron-59d6cd4869-w2rrr" Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.861282 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-config\") pod \"neutron-59d6cd4869-w2rrr\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " pod="openstack/neutron-59d6cd4869-w2rrr" Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.861535 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-combined-ca-bundle\") pod \"neutron-59d6cd4869-w2rrr\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " pod="openstack/neutron-59d6cd4869-w2rrr" Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.861713 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-public-tls-certs\") pod \"neutron-59d6cd4869-w2rrr\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " pod="openstack/neutron-59d6cd4869-w2rrr" Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.861939 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-httpd-config\") pod \"neutron-59d6cd4869-w2rrr\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " pod="openstack/neutron-59d6cd4869-w2rrr" Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.862058 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-ovndb-tls-certs\") pod \"neutron-59d6cd4869-w2rrr\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " pod="openstack/neutron-59d6cd4869-w2rrr" Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.862150 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-internal-tls-certs\") pod \"neutron-59d6cd4869-w2rrr\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " pod="openstack/neutron-59d6cd4869-w2rrr" Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.872134 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-httpd-config\") pod \"neutron-59d6cd4869-w2rrr\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " pod="openstack/neutron-59d6cd4869-w2rrr" Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.874560 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-combined-ca-bundle\") pod \"neutron-59d6cd4869-w2rrr\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " pod="openstack/neutron-59d6cd4869-w2rrr" Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.875247 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-ovndb-tls-certs\") pod \"neutron-59d6cd4869-w2rrr\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " pod="openstack/neutron-59d6cd4869-w2rrr" Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.875366 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-public-tls-certs\") pod \"neutron-59d6cd4869-w2rrr\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " pod="openstack/neutron-59d6cd4869-w2rrr" Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.875748 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-internal-tls-certs\") pod \"neutron-59d6cd4869-w2rrr\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " pod="openstack/neutron-59d6cd4869-w2rrr" Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.889867 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nxhr\" (UniqueName: \"kubernetes.io/projected/9404af59-7e12-483b-90d0-9ebdc4140cc2-kube-api-access-8nxhr\") pod \"neutron-59d6cd4869-w2rrr\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " pod="openstack/neutron-59d6cd4869-w2rrr" Jan 31 09:24:38 crc kubenswrapper[4830]: I0131 09:24:38.891165 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-config\") pod \"neutron-59d6cd4869-w2rrr\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " pod="openstack/neutron-59d6cd4869-w2rrr" Jan 31 09:24:39 crc kubenswrapper[4830]: I0131 09:24:39.008007 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59d6cd4869-w2rrr" Jan 31 09:24:39 crc kubenswrapper[4830]: I0131 09:24:39.576249 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" event={"ID":"d473ad04-829e-427a-81a8-68d368eb9cfc","Type":"ContainerStarted","Data":"a481c487e1517bdaec4fe7f910b511c110b08e6441926a14617274c5841089c2"} Jan 31 09:24:39 crc kubenswrapper[4830]: I0131 09:24:39.577040 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" Jan 31 09:24:39 crc kubenswrapper[4830]: I0131 09:24:39.582216 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0ec35101-03e3-421d-8799-a7a0b1864b9b","Type":"ContainerStarted","Data":"be1a139ca62a9ee5841b9e7d34ba9750b8cdc0d9b26aa9a3ed0ba027497b53ec"} Jan 31 09:24:39 crc kubenswrapper[4830]: I0131 09:24:39.604475 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cd8b566d4-4q75x" event={"ID":"74254e68-cbf8-446e-a2d8-768185ec778f","Type":"ContainerStarted","Data":"9c7ed3187c5fd1fce5ed05e9c48a484b4b9883935cce4cff33ab889828b9bc46"} Jan 31 09:24:39 crc kubenswrapper[4830]: I0131 09:24:39.604533 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cd8b566d4-4q75x" event={"ID":"74254e68-cbf8-446e-a2d8-768185ec778f","Type":"ContainerStarted","Data":"b6be3b1263caaf0265275aa0905943ece9fac8b1520e8c7e36464dae7cf5b417"} Jan 31 09:24:39 crc kubenswrapper[4830]: I0131 09:24:39.618359 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" podStartSLOduration=4.61833675 podStartE2EDuration="4.61833675s" podCreationTimestamp="2026-01-31 09:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:24:39.613467622 +0000 UTC m=+1424.106830064" watchObservedRunningTime="2026-01-31 09:24:39.61833675 +0000 UTC m=+1424.111699192" Jan 31 09:24:39 crc kubenswrapper[4830]: I0131 09:24:39.630880 4830 generic.go:334] "Generic (PLEG): container finished" podID="b8de8318-1eda-43cc-b522-86d6492c6376" containerID="296c2ca9ab37f0c52f42055edffd7d00fb9c21ff74b16a10dc1b092b5ca95878" exitCode=0 Jan 31 09:24:39 crc kubenswrapper[4830]: I0131 09:24:39.630947 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t2klw" event={"ID":"b8de8318-1eda-43cc-b522-86d6492c6376","Type":"ContainerDied","Data":"296c2ca9ab37f0c52f42055edffd7d00fb9c21ff74b16a10dc1b092b5ca95878"} Jan 31 09:24:39 crc kubenswrapper[4830]: I0131 09:24:39.666016 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=33.665993205 podStartE2EDuration="33.665993205s" podCreationTimestamp="2026-01-31 09:24:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:24:39.653063297 +0000 UTC m=+1424.146425739" watchObservedRunningTime="2026-01-31 09:24:39.665993205 +0000 UTC m=+1424.159355647" Jan 31 09:24:39 crc kubenswrapper[4830]: I0131 09:24:39.867225 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59d6cd4869-w2rrr"] Jan 31 09:24:40 crc kubenswrapper[4830]: I0131 09:24:40.646612 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d6cd4869-w2rrr" 
event={"ID":"9404af59-7e12-483b-90d0-9ebdc4140cc2","Type":"ContainerStarted","Data":"bd674d6826529c7c8a216cc6649c22beca723f2881ec0549ff5e3b4f031f896a"} Jan 31 09:24:40 crc kubenswrapper[4830]: I0131 09:24:40.648518 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cd8b566d4-4q75x" event={"ID":"74254e68-cbf8-446e-a2d8-768185ec778f","Type":"ContainerStarted","Data":"4c6decc22e93c41d7227bdefca13848b3fc35f5adc6c7d1553fa05be847967dc"} Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.168008 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-t2klw" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.356261 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8de8318-1eda-43cc-b522-86d6492c6376-logs\") pod \"b8de8318-1eda-43cc-b522-86d6492c6376\" (UID: \"b8de8318-1eda-43cc-b522-86d6492c6376\") " Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.356662 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b28lz\" (UniqueName: \"kubernetes.io/projected/b8de8318-1eda-43cc-b522-86d6492c6376-kube-api-access-b28lz\") pod \"b8de8318-1eda-43cc-b522-86d6492c6376\" (UID: \"b8de8318-1eda-43cc-b522-86d6492c6376\") " Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.356706 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8de8318-1eda-43cc-b522-86d6492c6376-config-data\") pod \"b8de8318-1eda-43cc-b522-86d6492c6376\" (UID: \"b8de8318-1eda-43cc-b522-86d6492c6376\") " Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.356754 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8de8318-1eda-43cc-b522-86d6492c6376-logs" (OuterVolumeSpecName: "logs") pod "b8de8318-1eda-43cc-b522-86d6492c6376" (UID: "b8de8318-1eda-43cc-b522-86d6492c6376"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.356953 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8de8318-1eda-43cc-b522-86d6492c6376-combined-ca-bundle\") pod \"b8de8318-1eda-43cc-b522-86d6492c6376\" (UID: \"b8de8318-1eda-43cc-b522-86d6492c6376\") " Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.357119 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8de8318-1eda-43cc-b522-86d6492c6376-scripts\") pod \"b8de8318-1eda-43cc-b522-86d6492c6376\" (UID: \"b8de8318-1eda-43cc-b522-86d6492c6376\") " Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.357941 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8de8318-1eda-43cc-b522-86d6492c6376-logs\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.363337 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8de8318-1eda-43cc-b522-86d6492c6376-kube-api-access-b28lz" (OuterVolumeSpecName: "kube-api-access-b28lz") pod "b8de8318-1eda-43cc-b522-86d6492c6376" (UID: "b8de8318-1eda-43cc-b522-86d6492c6376"). InnerVolumeSpecName "kube-api-access-b28lz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.363758 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8de8318-1eda-43cc-b522-86d6492c6376-scripts" (OuterVolumeSpecName: "scripts") pod "b8de8318-1eda-43cc-b522-86d6492c6376" (UID: "b8de8318-1eda-43cc-b522-86d6492c6376"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.437223 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8de8318-1eda-43cc-b522-86d6492c6376-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8de8318-1eda-43cc-b522-86d6492c6376" (UID: "b8de8318-1eda-43cc-b522-86d6492c6376"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.452413 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8de8318-1eda-43cc-b522-86d6492c6376-config-data" (OuterVolumeSpecName: "config-data") pod "b8de8318-1eda-43cc-b522-86d6492c6376" (UID: "b8de8318-1eda-43cc-b522-86d6492c6376"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.460206 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8de8318-1eda-43cc-b522-86d6492c6376-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.460250 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b28lz\" (UniqueName: \"kubernetes.io/projected/b8de8318-1eda-43cc-b522-86d6492c6376-kube-api-access-b28lz\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.460265 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8de8318-1eda-43cc-b522-86d6492c6376-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.460277 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8de8318-1eda-43cc-b522-86d6492c6376-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.671237 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t2klw" event={"ID":"b8de8318-1eda-43cc-b522-86d6492c6376","Type":"ContainerDied","Data":"52da16470151e23cc9ab10f0269d502dde6f3a04cb9e4692af08610763cec729"} Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.671293 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52da16470151e23cc9ab10f0269d502dde6f3a04cb9e4692af08610763cec729" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.671363 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-t2klw" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.797226 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7995f9f9fb-6r8k4"] Jan 31 09:24:41 crc kubenswrapper[4830]: E0131 09:24:41.798227 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8de8318-1eda-43cc-b522-86d6492c6376" containerName="placement-db-sync" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.798257 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8de8318-1eda-43cc-b522-86d6492c6376" containerName="placement-db-sync" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.798573 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8de8318-1eda-43cc-b522-86d6492c6376" containerName="placement-db-sync" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.800263 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.806086 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.806301 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.806580 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.806612 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-vg9sh" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.806740 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.818290 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7995f9f9fb-6r8k4"] Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.898140 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-config-data\") pod \"placement-7995f9f9fb-6r8k4\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") " pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.899029 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-public-tls-certs\") pod \"placement-7995f9f9fb-6r8k4\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") " pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.899313 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-scripts\") pod \"placement-7995f9f9fb-6r8k4\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") " pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.899512 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9wwc\" (UniqueName: \"kubernetes.io/projected/43cbd586-1683-440f-992a-113173028a37-kube-api-access-b9wwc\") pod \"placement-7995f9f9fb-6r8k4\" (UID: 
\"43cbd586-1683-440f-992a-113173028a37\") " pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.899577 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43cbd586-1683-440f-992a-113173028a37-logs\") pod \"placement-7995f9f9fb-6r8k4\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") " pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.899631 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-internal-tls-certs\") pod \"placement-7995f9f9fb-6r8k4\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") " pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:24:41 crc kubenswrapper[4830]: I0131 09:24:41.901592 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-combined-ca-bundle\") pod \"placement-7995f9f9fb-6r8k4\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") " pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:24:42 crc kubenswrapper[4830]: I0131 09:24:42.006351 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-public-tls-certs\") pod \"placement-7995f9f9fb-6r8k4\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") " pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:24:42 crc kubenswrapper[4830]: I0131 09:24:42.006462 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-scripts\") pod \"placement-7995f9f9fb-6r8k4\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") " pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:24:42 crc kubenswrapper[4830]: I0131 09:24:42.006513 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9wwc\" (UniqueName: \"kubernetes.io/projected/43cbd586-1683-440f-992a-113173028a37-kube-api-access-b9wwc\") pod \"placement-7995f9f9fb-6r8k4\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") " pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:24:42 crc kubenswrapper[4830]: I0131 09:24:42.006535 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43cbd586-1683-440f-992a-113173028a37-logs\") pod \"placement-7995f9f9fb-6r8k4\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") " pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:24:42 crc kubenswrapper[4830]: I0131 09:24:42.006560 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-internal-tls-certs\") pod \"placement-7995f9f9fb-6r8k4\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") " pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:24:42 crc kubenswrapper[4830]: I0131 09:24:42.006593 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-combined-ca-bundle\") pod \"placement-7995f9f9fb-6r8k4\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") " 
pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:24:42 crc kubenswrapper[4830]: I0131 09:24:42.006699 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-config-data\") pod \"placement-7995f9f9fb-6r8k4\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") " pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:24:42 crc kubenswrapper[4830]: I0131 09:24:42.007916 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43cbd586-1683-440f-992a-113173028a37-logs\") pod \"placement-7995f9f9fb-6r8k4\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") " pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:24:42 crc kubenswrapper[4830]: I0131 09:24:42.010882 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-public-tls-certs\") pod \"placement-7995f9f9fb-6r8k4\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") " pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:24:42 crc kubenswrapper[4830]: I0131 09:24:42.012979 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-config-data\") pod \"placement-7995f9f9fb-6r8k4\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") " pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:24:42 crc kubenswrapper[4830]: I0131 09:24:42.013150 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-internal-tls-certs\") pod \"placement-7995f9f9fb-6r8k4\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") " pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:24:42 crc kubenswrapper[4830]: I0131 09:24:42.013876 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-combined-ca-bundle\") pod \"placement-7995f9f9fb-6r8k4\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") " pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:24:42 crc kubenswrapper[4830]: I0131 09:24:42.014189 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-scripts\") pod \"placement-7995f9f9fb-6r8k4\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") " pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:24:42 crc kubenswrapper[4830]: I0131 09:24:42.027183 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9wwc\" (UniqueName: \"kubernetes.io/projected/43cbd586-1683-440f-992a-113173028a37-kube-api-access-b9wwc\") pod \"placement-7995f9f9fb-6r8k4\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") " pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:24:42 crc kubenswrapper[4830]: I0131 09:24:42.124426 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:24:42 crc kubenswrapper[4830]: I0131 09:24:42.683157 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d6cd4869-w2rrr" event={"ID":"9404af59-7e12-483b-90d0-9ebdc4140cc2","Type":"ContainerStarted","Data":"1cec5eaefe29b55b53814da42acd0c523600e78af1749ea2cf9bbaa730773373"} Jan 31 09:24:44 crc kubenswrapper[4830]: I0131 09:24:44.353323 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:24:44 crc kubenswrapper[4830]: I0131 09:24:44.353891 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:24:45 crc kubenswrapper[4830]: I0131 09:24:45.722378 4830 generic.go:334] "Generic (PLEG): container finished" podID="a9974259-ec4c-411a-ba74-95664c116f34" containerID="322baa3e2f6855883d523ac77dd9bbb2e8423fc4f8a4b4da22034d570a99227b" exitCode=0 Jan 31 09:24:45 crc kubenswrapper[4830]: I0131 09:24:45.723078 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t2pjz" event={"ID":"a9974259-ec4c-411a-ba74-95664c116f34","Type":"ContainerDied","Data":"322baa3e2f6855883d523ac77dd9bbb2e8423fc4f8a4b4da22034d570a99227b"} Jan 31 09:24:46 crc kubenswrapper[4830]: I0131 09:24:46.000981 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" Jan 31 09:24:46 crc kubenswrapper[4830]: I0131 09:24:46.398766 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-7w87z"] Jan 31 09:24:46 crc kubenswrapper[4830]: I0131 09:24:46.399611 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" podUID="6d941bfc-a5bd-4764-8e53-a77414f25a21" containerName="dnsmasq-dns" containerID="cri-o://d84a1c23794a60f4c621178ff37b1c7344b9bb8cb7c28fc154e40f0e512c6728" gracePeriod=10 Jan 31 09:24:46 crc kubenswrapper[4830]: I0131 09:24:46.517317 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 31 09:24:46 crc kubenswrapper[4830]: I0131 09:24:46.517405 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 31 09:24:46 crc kubenswrapper[4830]: I0131 09:24:46.583071 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 31 09:24:46 crc kubenswrapper[4830]: I0131 09:24:46.599502 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 31 09:24:46 crc kubenswrapper[4830]: I0131 09:24:46.747698 4830 generic.go:334] "Generic (PLEG): container finished" podID="6d941bfc-a5bd-4764-8e53-a77414f25a21" containerID="d84a1c23794a60f4c621178ff37b1c7344b9bb8cb7c28fc154e40f0e512c6728" exitCode=0 Jan 31 09:24:46 crc kubenswrapper[4830]: I0131 09:24:46.747791 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" 
event={"ID":"6d941bfc-a5bd-4764-8e53-a77414f25a21","Type":"ContainerDied","Data":"d84a1c23794a60f4c621178ff37b1c7344b9bb8cb7c28fc154e40f0e512c6728"} Jan 31 09:24:46 crc kubenswrapper[4830]: I0131 09:24:46.750060 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 09:24:46 crc kubenswrapper[4830]: I0131 09:24:46.750104 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 09:24:46 crc kubenswrapper[4830]: I0131 09:24:46.783877 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6cd8b566d4-4q75x" podStartSLOduration=11.783852647 podStartE2EDuration="11.783852647s" podCreationTimestamp="2026-01-31 09:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:24:46.77620968 +0000 UTC m=+1431.269572122" watchObservedRunningTime="2026-01-31 09:24:46.783852647 +0000 UTC m=+1431.277215099" Jan 31 09:24:47 crc kubenswrapper[4830]: I0131 09:24:47.235868 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 31 09:24:47 crc kubenswrapper[4830]: I0131 09:24:47.235930 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 31 09:24:47 crc kubenswrapper[4830]: I0131 09:24:47.297561 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 31 09:24:47 crc kubenswrapper[4830]: I0131 09:24:47.318516 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 31 09:24:47 crc kubenswrapper[4830]: I0131 09:24:47.767488 4830 generic.go:334] "Generic (PLEG): container finished" podID="bb9aed03-7e56-43de-92fc-3ac6352194af" containerID="e10402784224c0a8f234f5cbb87b74b8d71f6b4e4370aa3a10b8d8b1768b3e70" exitCode=0 Jan 31 09:24:47 crc kubenswrapper[4830]: I0131 09:24:47.771214 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pr2kp" event={"ID":"bb9aed03-7e56-43de-92fc-3ac6352194af","Type":"ContainerDied","Data":"e10402784224c0a8f234f5cbb87b74b8d71f6b4e4370aa3a10b8d8b1768b3e70"} Jan 31 09:24:47 crc kubenswrapper[4830]: I0131 09:24:47.771366 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 31 09:24:47 crc kubenswrapper[4830]: I0131 09:24:47.775431 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 31 09:24:48 crc kubenswrapper[4830]: I0131 09:24:48.205214 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" podUID="6d941bfc-a5bd-4764-8e53-a77414f25a21" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.185:5353: connect: connection refused" Jan 31 09:24:49 crc kubenswrapper[4830]: I0131 09:24:49.804380 4830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 09:24:49 crc kubenswrapper[4830]: I0131 09:24:49.808568 4830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.555355 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-t2pjz" Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.620538 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pr2kp" Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.780018 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-config-data\") pod \"a9974259-ec4c-411a-ba74-95664c116f34\" (UID: \"a9974259-ec4c-411a-ba74-95664c116f34\") " Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.780362 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-fernet-keys\") pod \"a9974259-ec4c-411a-ba74-95664c116f34\" (UID: \"a9974259-ec4c-411a-ba74-95664c116f34\") " Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.780419 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb5hp\" (UniqueName: \"kubernetes.io/projected/bb9aed03-7e56-43de-92fc-3ac6352194af-kube-api-access-zb5hp\") pod \"bb9aed03-7e56-43de-92fc-3ac6352194af\" (UID: \"bb9aed03-7e56-43de-92fc-3ac6352194af\") " Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.780534 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bb9aed03-7e56-43de-92fc-3ac6352194af-db-sync-config-data\") pod \"bb9aed03-7e56-43de-92fc-3ac6352194af\" (UID: \"bb9aed03-7e56-43de-92fc-3ac6352194af\") " Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.780597 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb9aed03-7e56-43de-92fc-3ac6352194af-combined-ca-bundle\") pod \"bb9aed03-7e56-43de-92fc-3ac6352194af\" (UID: \"bb9aed03-7e56-43de-92fc-3ac6352194af\") " Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.780655 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l7s2\" (UniqueName: \"kubernetes.io/projected/a9974259-ec4c-411a-ba74-95664c116f34-kube-api-access-4l7s2\") pod \"a9974259-ec4c-411a-ba74-95664c116f34\" (UID: \"a9974259-ec4c-411a-ba74-95664c116f34\") " Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.780740 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-combined-ca-bundle\") pod \"a9974259-ec4c-411a-ba74-95664c116f34\" (UID: \"a9974259-ec4c-411a-ba74-95664c116f34\") " Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.780846 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-scripts\") pod \"a9974259-ec4c-411a-ba74-95664c116f34\" (UID: \"a9974259-ec4c-411a-ba74-95664c116f34\") " Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.780900 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-credential-keys\") pod \"a9974259-ec4c-411a-ba74-95664c116f34\" (UID: \"a9974259-ec4c-411a-ba74-95664c116f34\") " Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.791042 4830 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-scripts" (OuterVolumeSpecName: "scripts") pod "a9974259-ec4c-411a-ba74-95664c116f34" (UID: "a9974259-ec4c-411a-ba74-95664c116f34"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.791763 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a9974259-ec4c-411a-ba74-95664c116f34" (UID: "a9974259-ec4c-411a-ba74-95664c116f34"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.792349 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb9aed03-7e56-43de-92fc-3ac6352194af-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bb9aed03-7e56-43de-92fc-3ac6352194af" (UID: "bb9aed03-7e56-43de-92fc-3ac6352194af"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.792397 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a9974259-ec4c-411a-ba74-95664c116f34" (UID: "a9974259-ec4c-411a-ba74-95664c116f34"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.792856 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9974259-ec4c-411a-ba74-95664c116f34-kube-api-access-4l7s2" (OuterVolumeSpecName: "kube-api-access-4l7s2") pod "a9974259-ec4c-411a-ba74-95664c116f34" (UID: "a9974259-ec4c-411a-ba74-95664c116f34"). InnerVolumeSpecName "kube-api-access-4l7s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.805705 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb9aed03-7e56-43de-92fc-3ac6352194af-kube-api-access-zb5hp" (OuterVolumeSpecName: "kube-api-access-zb5hp") pod "bb9aed03-7e56-43de-92fc-3ac6352194af" (UID: "bb9aed03-7e56-43de-92fc-3ac6352194af"). InnerVolumeSpecName "kube-api-access-zb5hp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.847222 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9974259-ec4c-411a-ba74-95664c116f34" (UID: "a9974259-ec4c-411a-ba74-95664c116f34"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.885124 4830 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.885167 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb5hp\" (UniqueName: \"kubernetes.io/projected/bb9aed03-7e56-43de-92fc-3ac6352194af-kube-api-access-zb5hp\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.885179 4830 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bb9aed03-7e56-43de-92fc-3ac6352194af-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.885187 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l7s2\" (UniqueName: \"kubernetes.io/projected/a9974259-ec4c-411a-ba74-95664c116f34-kube-api-access-4l7s2\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.885197 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.885206 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.885214 4830 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.885380 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb9aed03-7e56-43de-92fc-3ac6352194af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb9aed03-7e56-43de-92fc-3ac6352194af" (UID: "bb9aed03-7e56-43de-92fc-3ac6352194af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.897116 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t2pjz" event={"ID":"a9974259-ec4c-411a-ba74-95664c116f34","Type":"ContainerDied","Data":"18ce2ca5e38335d576138cda0fb7a63556aeca53db1c0092f15ac3f34a5b2400"} Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.897627 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18ce2ca5e38335d576138cda0fb7a63556aeca53db1c0092f15ac3f34a5b2400" Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.897578 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-t2pjz"
Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.906436 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pr2kp" event={"ID":"bb9aed03-7e56-43de-92fc-3ac6352194af","Type":"ContainerDied","Data":"779065af83fb8314a2f0526b7cfddc5ba70a742532c637053ee88f496065f6dd"}
Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.906524 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="779065af83fb8314a2f0526b7cfddc5ba70a742532c637053ee88f496065f6dd"
Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.906749 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pr2kp"
Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.931842 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-config-data" (OuterVolumeSpecName: "config-data") pod "a9974259-ec4c-411a-ba74-95664c116f34" (UID: "a9974259-ec4c-411a-ba74-95664c116f34"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.989361 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb9aed03-7e56-43de-92fc-3ac6352194af-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.989445 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9974259-ec4c-411a-ba74-95664c116f34-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.989480 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-25d9r"]
Jan 31 09:24:52 crc kubenswrapper[4830]: E0131 09:24:52.990041 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9974259-ec4c-411a-ba74-95664c116f34" containerName="keystone-bootstrap"
Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.990060 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9974259-ec4c-411a-ba74-95664c116f34" containerName="keystone-bootstrap"
Jan 31 09:24:52 crc kubenswrapper[4830]: E0131 09:24:52.990079 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9aed03-7e56-43de-92fc-3ac6352194af" containerName="barbican-db-sync"
Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.990086 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9aed03-7e56-43de-92fc-3ac6352194af" containerName="barbican-db-sync"
Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.990317 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb9aed03-7e56-43de-92fc-3ac6352194af" containerName="barbican-db-sync"
Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.990333 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9974259-ec4c-411a-ba74-95664c116f34" containerName="keystone-bootstrap"
Jan 31 09:24:52 crc kubenswrapper[4830]: I0131 09:24:52.992121 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-25d9r"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.028608 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-25d9r"]
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.092486 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d136e2d6-6468-43c5-942f-71b672962cae-utilities\") pod \"redhat-marketplace-25d9r\" (UID: \"d136e2d6-6468-43c5-942f-71b672962cae\") " pod="openshift-marketplace/redhat-marketplace-25d9r"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.092562 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d136e2d6-6468-43c5-942f-71b672962cae-catalog-content\") pod \"redhat-marketplace-25d9r\" (UID: \"d136e2d6-6468-43c5-942f-71b672962cae\") " pod="openshift-marketplace/redhat-marketplace-25d9r"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.092636 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwvtw\" (UniqueName: \"kubernetes.io/projected/d136e2d6-6468-43c5-942f-71b672962cae-kube-api-access-qwvtw\") pod \"redhat-marketplace-25d9r\" (UID: \"d136e2d6-6468-43c5-942f-71b672962cae\") " pod="openshift-marketplace/redhat-marketplace-25d9r"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.188301 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7995f9f9fb-6r8k4"]
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.194782 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwvtw\" (UniqueName: \"kubernetes.io/projected/d136e2d6-6468-43c5-942f-71b672962cae-kube-api-access-qwvtw\") pod \"redhat-marketplace-25d9r\" (UID: \"d136e2d6-6468-43c5-942f-71b672962cae\") " pod="openshift-marketplace/redhat-marketplace-25d9r"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.195095 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d136e2d6-6468-43c5-942f-71b672962cae-utilities\") pod \"redhat-marketplace-25d9r\" (UID: \"d136e2d6-6468-43c5-942f-71b672962cae\") " pod="openshift-marketplace/redhat-marketplace-25d9r"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.195150 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d136e2d6-6468-43c5-942f-71b672962cae-catalog-content\") pod \"redhat-marketplace-25d9r\" (UID: \"d136e2d6-6468-43c5-942f-71b672962cae\") " pod="openshift-marketplace/redhat-marketplace-25d9r"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.196284 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d136e2d6-6468-43c5-942f-71b672962cae-catalog-content\") pod \"redhat-marketplace-25d9r\" (UID: \"d136e2d6-6468-43c5-942f-71b672962cae\") " pod="openshift-marketplace/redhat-marketplace-25d9r"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.196297 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d136e2d6-6468-43c5-942f-71b672962cae-utilities\") pod \"redhat-marketplace-25d9r\" (UID: \"d136e2d6-6468-43c5-942f-71b672962cae\") " pod="openshift-marketplace/redhat-marketplace-25d9r"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.225816 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwvtw\" (UniqueName: \"kubernetes.io/projected/d136e2d6-6468-43c5-942f-71b672962cae-kube-api-access-qwvtw\") pod \"redhat-marketplace-25d9r\" (UID: \"d136e2d6-6468-43c5-942f-71b672962cae\") " pod="openshift-marketplace/redhat-marketplace-25d9r"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.344123 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-25d9r"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.767956 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-57d8f8c487-sqqph"]
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.770145 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-57d8f8c487-sqqph"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.777962 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.778254 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r84d8"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.778396 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.778557 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.778811 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.779092 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.793149 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-57d8f8c487-sqqph"]
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.914466 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/230488d2-6bec-4165-8ff4-4854cc6d53f6-scripts\") pod \"keystone-57d8f8c487-sqqph\" (UID: \"230488d2-6bec-4165-8ff4-4854cc6d53f6\") " pod="openstack/keystone-57d8f8c487-sqqph"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.914544 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/230488d2-6bec-4165-8ff4-4854cc6d53f6-config-data\") pod \"keystone-57d8f8c487-sqqph\" (UID: \"230488d2-6bec-4165-8ff4-4854cc6d53f6\") " pod="openstack/keystone-57d8f8c487-sqqph"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.914595 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/230488d2-6bec-4165-8ff4-4854cc6d53f6-credential-keys\") pod \"keystone-57d8f8c487-sqqph\" (UID: \"230488d2-6bec-4165-8ff4-4854cc6d53f6\") " pod="openstack/keystone-57d8f8c487-sqqph"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.914626 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/230488d2-6bec-4165-8ff4-4854cc6d53f6-fernet-keys\") pod \"keystone-57d8f8c487-sqqph\" (UID: \"230488d2-6bec-4165-8ff4-4854cc6d53f6\") " pod="openstack/keystone-57d8f8c487-sqqph"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.914689 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/230488d2-6bec-4165-8ff4-4854cc6d53f6-internal-tls-certs\") pod \"keystone-57d8f8c487-sqqph\" (UID: \"230488d2-6bec-4165-8ff4-4854cc6d53f6\") " pod="openstack/keystone-57d8f8c487-sqqph"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.914823 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/230488d2-6bec-4165-8ff4-4854cc6d53f6-combined-ca-bundle\") pod \"keystone-57d8f8c487-sqqph\" (UID: \"230488d2-6bec-4165-8ff4-4854cc6d53f6\") " pod="openstack/keystone-57d8f8c487-sqqph"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.914977 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/230488d2-6bec-4165-8ff4-4854cc6d53f6-public-tls-certs\") pod \"keystone-57d8f8c487-sqqph\" (UID: \"230488d2-6bec-4165-8ff4-4854cc6d53f6\") " pod="openstack/keystone-57d8f8c487-sqqph"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.915002 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcjp5\" (UniqueName: \"kubernetes.io/projected/230488d2-6bec-4165-8ff4-4854cc6d53f6-kube-api-access-xcjp5\") pod \"keystone-57d8f8c487-sqqph\" (UID: \"230488d2-6bec-4165-8ff4-4854cc6d53f6\") " pod="openstack/keystone-57d8f8c487-sqqph"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.954062 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d6cd4869-w2rrr" event={"ID":"9404af59-7e12-483b-90d0-9ebdc4140cc2","Type":"ContainerStarted","Data":"ebbfc0576c942e0e24080af4a45767ccb924675876b9993065a3eeec34f93cb2"}
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.954596 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-59d6cd4869-w2rrr"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.958808 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-9d5f95fb7-h7vp9"]
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.961102 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-9d5f95fb7-h7vp9"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.963944 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5m8j5"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.973547 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.973778 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Jan 31 09:24:53 crc kubenswrapper[4830]: I0131 09:24:53.996147 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-9d5f95fb7-h7vp9"]
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.019921 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/230488d2-6bec-4165-8ff4-4854cc6d53f6-public-tls-certs\") pod \"keystone-57d8f8c487-sqqph\" (UID: \"230488d2-6bec-4165-8ff4-4854cc6d53f6\") " pod="openstack/keystone-57d8f8c487-sqqph"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.019986 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcjp5\" (UniqueName: \"kubernetes.io/projected/230488d2-6bec-4165-8ff4-4854cc6d53f6-kube-api-access-xcjp5\") pod \"keystone-57d8f8c487-sqqph\" (UID: \"230488d2-6bec-4165-8ff4-4854cc6d53f6\") " pod="openstack/keystone-57d8f8c487-sqqph"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.020064 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/230488d2-6bec-4165-8ff4-4854cc6d53f6-scripts\") pod \"keystone-57d8f8c487-sqqph\" (UID: \"230488d2-6bec-4165-8ff4-4854cc6d53f6\") " pod="openstack/keystone-57d8f8c487-sqqph"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.020097 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/230488d2-6bec-4165-8ff4-4854cc6d53f6-config-data\") pod \"keystone-57d8f8c487-sqqph\" (UID: \"230488d2-6bec-4165-8ff4-4854cc6d53f6\") " pod="openstack/keystone-57d8f8c487-sqqph"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.020125 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/230488d2-6bec-4165-8ff4-4854cc6d53f6-credential-keys\") pod \"keystone-57d8f8c487-sqqph\" (UID: \"230488d2-6bec-4165-8ff4-4854cc6d53f6\") " pod="openstack/keystone-57d8f8c487-sqqph"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.020161 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/230488d2-6bec-4165-8ff4-4854cc6d53f6-fernet-keys\") pod \"keystone-57d8f8c487-sqqph\" (UID: \"230488d2-6bec-4165-8ff4-4854cc6d53f6\") " pod="openstack/keystone-57d8f8c487-sqqph"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.020211 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/230488d2-6bec-4165-8ff4-4854cc6d53f6-internal-tls-certs\") pod \"keystone-57d8f8c487-sqqph\" (UID: \"230488d2-6bec-4165-8ff4-4854cc6d53f6\") " pod="openstack/keystone-57d8f8c487-sqqph"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.020317 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/230488d2-6bec-4165-8ff4-4854cc6d53f6-combined-ca-bundle\") pod \"keystone-57d8f8c487-sqqph\" (UID: \"230488d2-6bec-4165-8ff4-4854cc6d53f6\") " pod="openstack/keystone-57d8f8c487-sqqph"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.031909 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/230488d2-6bec-4165-8ff4-4854cc6d53f6-combined-ca-bundle\") pod \"keystone-57d8f8c487-sqqph\" (UID: \"230488d2-6bec-4165-8ff4-4854cc6d53f6\") " pod="openstack/keystone-57d8f8c487-sqqph"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.044464 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/230488d2-6bec-4165-8ff4-4854cc6d53f6-config-data\") pod \"keystone-57d8f8c487-sqqph\" (UID: \"230488d2-6bec-4165-8ff4-4854cc6d53f6\") " pod="openstack/keystone-57d8f8c487-sqqph"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.047504 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/230488d2-6bec-4165-8ff4-4854cc6d53f6-public-tls-certs\") pod \"keystone-57d8f8c487-sqqph\" (UID: \"230488d2-6bec-4165-8ff4-4854cc6d53f6\") " pod="openstack/keystone-57d8f8c487-sqqph"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.053584 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/230488d2-6bec-4165-8ff4-4854cc6d53f6-fernet-keys\") pod \"keystone-57d8f8c487-sqqph\" (UID: \"230488d2-6bec-4165-8ff4-4854cc6d53f6\") " pod="openstack/keystone-57d8f8c487-sqqph"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.061950 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/230488d2-6bec-4165-8ff4-4854cc6d53f6-internal-tls-certs\") pod \"keystone-57d8f8c487-sqqph\" (UID: \"230488d2-6bec-4165-8ff4-4854cc6d53f6\") " pod="openstack/keystone-57d8f8c487-sqqph"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.063205 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/230488d2-6bec-4165-8ff4-4854cc6d53f6-credential-keys\") pod \"keystone-57d8f8c487-sqqph\" (UID: \"230488d2-6bec-4165-8ff4-4854cc6d53f6\") " pod="openstack/keystone-57d8f8c487-sqqph"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.070286 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/230488d2-6bec-4165-8ff4-4854cc6d53f6-scripts\") pod \"keystone-57d8f8c487-sqqph\" (UID: \"230488d2-6bec-4165-8ff4-4854cc6d53f6\") " pod="openstack/keystone-57d8f8c487-sqqph"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.091949 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcjp5\" (UniqueName: \"kubernetes.io/projected/230488d2-6bec-4165-8ff4-4854cc6d53f6-kube-api-access-xcjp5\") pod \"keystone-57d8f8c487-sqqph\" (UID: \"230488d2-6bec-4165-8ff4-4854cc6d53f6\") " pod="openstack/keystone-57d8f8c487-sqqph"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.131470 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-57d8f8c487-sqqph"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.138078 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1fe9f02-72ff-45af-8728-91cecff0d1ac-config-data-custom\") pod \"barbican-worker-9d5f95fb7-h7vp9\" (UID: \"e1fe9f02-72ff-45af-8728-91cecff0d1ac\") " pod="openstack/barbican-worker-9d5f95fb7-h7vp9"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.151610 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg8gm\" (UniqueName: \"kubernetes.io/projected/e1fe9f02-72ff-45af-8728-91cecff0d1ac-kube-api-access-hg8gm\") pod \"barbican-worker-9d5f95fb7-h7vp9\" (UID: \"e1fe9f02-72ff-45af-8728-91cecff0d1ac\") " pod="openstack/barbican-worker-9d5f95fb7-h7vp9"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.151871 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1fe9f02-72ff-45af-8728-91cecff0d1ac-combined-ca-bundle\") pod \"barbican-worker-9d5f95fb7-h7vp9\" (UID: \"e1fe9f02-72ff-45af-8728-91cecff0d1ac\") " pod="openstack/barbican-worker-9d5f95fb7-h7vp9"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.151914 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1fe9f02-72ff-45af-8728-91cecff0d1ac-config-data\") pod \"barbican-worker-9d5f95fb7-h7vp9\" (UID: \"e1fe9f02-72ff-45af-8728-91cecff0d1ac\") " pod="openstack/barbican-worker-9d5f95fb7-h7vp9"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.152115 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1fe9f02-72ff-45af-8728-91cecff0d1ac-logs\") pod \"barbican-worker-9d5f95fb7-h7vp9\" (UID: \"e1fe9f02-72ff-45af-8728-91cecff0d1ac\") " pod="openstack/barbican-worker-9d5f95fb7-h7vp9"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.229273 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-59d6cd4869-w2rrr" podStartSLOduration=16.229245701 podStartE2EDuration="16.229245701s" podCreationTimestamp="2026-01-31 09:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:24:54.130287998 +0000 UTC m=+1438.623650450" watchObservedRunningTime="2026-01-31 09:24:54.229245701 +0000 UTC m=+1438.722608143"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.258465 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-795c4b4b5d-76dwx"]
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.260079 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1fe9f02-72ff-45af-8728-91cecff0d1ac-combined-ca-bundle\") pod \"barbican-worker-9d5f95fb7-h7vp9\" (UID: \"e1fe9f02-72ff-45af-8728-91cecff0d1ac\") " pod="openstack/barbican-worker-9d5f95fb7-h7vp9"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.260157 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1fe9f02-72ff-45af-8728-91cecff0d1ac-config-data\") pod \"barbican-worker-9d5f95fb7-h7vp9\" (UID: \"e1fe9f02-72ff-45af-8728-91cecff0d1ac\") " pod="openstack/barbican-worker-9d5f95fb7-h7vp9"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.260266 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1fe9f02-72ff-45af-8728-91cecff0d1ac-logs\") pod \"barbican-worker-9d5f95fb7-h7vp9\" (UID: \"e1fe9f02-72ff-45af-8728-91cecff0d1ac\") " pod="openstack/barbican-worker-9d5f95fb7-h7vp9"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.260318 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1fe9f02-72ff-45af-8728-91cecff0d1ac-config-data-custom\") pod \"barbican-worker-9d5f95fb7-h7vp9\" (UID: \"e1fe9f02-72ff-45af-8728-91cecff0d1ac\") " pod="openstack/barbican-worker-9d5f95fb7-h7vp9"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.260411 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg8gm\" (UniqueName: \"kubernetes.io/projected/e1fe9f02-72ff-45af-8728-91cecff0d1ac-kube-api-access-hg8gm\") pod \"barbican-worker-9d5f95fb7-h7vp9\" (UID: \"e1fe9f02-72ff-45af-8728-91cecff0d1ac\") " pod="openstack/barbican-worker-9d5f95fb7-h7vp9"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.261611 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1fe9f02-72ff-45af-8728-91cecff0d1ac-logs\") pod \"barbican-worker-9d5f95fb7-h7vp9\" (UID: \"e1fe9f02-72ff-45af-8728-91cecff0d1ac\") " pod="openstack/barbican-worker-9d5f95fb7-h7vp9"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.267580 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1fe9f02-72ff-45af-8728-91cecff0d1ac-config-data\") pod \"barbican-worker-9d5f95fb7-h7vp9\" (UID: \"e1fe9f02-72ff-45af-8728-91cecff0d1ac\") " pod="openstack/barbican-worker-9d5f95fb7-h7vp9"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.270344 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.285820 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1fe9f02-72ff-45af-8728-91cecff0d1ac-config-data-custom\") pod \"barbican-worker-9d5f95fb7-h7vp9\" (UID: \"e1fe9f02-72ff-45af-8728-91cecff0d1ac\") " pod="openstack/barbican-worker-9d5f95fb7-h7vp9"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.292556 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1fe9f02-72ff-45af-8728-91cecff0d1ac-combined-ca-bundle\") pod \"barbican-worker-9d5f95fb7-h7vp9\" (UID: \"e1fe9f02-72ff-45af-8728-91cecff0d1ac\") " pod="openstack/barbican-worker-9d5f95fb7-h7vp9"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.296531 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.351783 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg8gm\" (UniqueName: \"kubernetes.io/projected/e1fe9f02-72ff-45af-8728-91cecff0d1ac-kube-api-access-hg8gm\") pod \"barbican-worker-9d5f95fb7-h7vp9\" (UID: \"e1fe9f02-72ff-45af-8728-91cecff0d1ac\") " pod="openstack/barbican-worker-9d5f95fb7-h7vp9"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.471737 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95712f82-07ef-4b0f-b1c8-af74932c2c4c-logs\") pod \"barbican-keystone-listener-795c4b4b5d-76dwx\" (UID: \"95712f82-07ef-4b0f-b1c8-af74932c2c4c\") " pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.472105 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95712f82-07ef-4b0f-b1c8-af74932c2c4c-config-data\") pod \"barbican-keystone-listener-795c4b4b5d-76dwx\" (UID: \"95712f82-07ef-4b0f-b1c8-af74932c2c4c\") " pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.472261 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llck6\" (UniqueName: \"kubernetes.io/projected/95712f82-07ef-4b0f-b1c8-af74932c2c4c-kube-api-access-llck6\") pod \"barbican-keystone-listener-795c4b4b5d-76dwx\" (UID: \"95712f82-07ef-4b0f-b1c8-af74932c2c4c\") " pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.472375 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95712f82-07ef-4b0f-b1c8-af74932c2c4c-config-data-custom\") pod \"barbican-keystone-listener-795c4b4b5d-76dwx\" (UID: \"95712f82-07ef-4b0f-b1c8-af74932c2c4c\") " pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.472609 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95712f82-07ef-4b0f-b1c8-af74932c2c4c-combined-ca-bundle\") pod \"barbican-keystone-listener-795c4b4b5d-76dwx\" (UID: \"95712f82-07ef-4b0f-b1c8-af74932c2c4c\") " pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.487694 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-795c4b4b5d-76dwx"]
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.496164 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-7w87z"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.576287 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95712f82-07ef-4b0f-b1c8-af74932c2c4c-config-data\") pod \"barbican-keystone-listener-795c4b4b5d-76dwx\" (UID: \"95712f82-07ef-4b0f-b1c8-af74932c2c4c\") " pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.576823 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llck6\" (UniqueName: \"kubernetes.io/projected/95712f82-07ef-4b0f-b1c8-af74932c2c4c-kube-api-access-llck6\") pod \"barbican-keystone-listener-795c4b4b5d-76dwx\" (UID: \"95712f82-07ef-4b0f-b1c8-af74932c2c4c\") " pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.576938 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95712f82-07ef-4b0f-b1c8-af74932c2c4c-config-data-custom\") pod \"barbican-keystone-listener-795c4b4b5d-76dwx\" (UID: \"95712f82-07ef-4b0f-b1c8-af74932c2c4c\") " pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.577063 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95712f82-07ef-4b0f-b1c8-af74932c2c4c-combined-ca-bundle\") pod \"barbican-keystone-listener-795c4b4b5d-76dwx\" (UID: \"95712f82-07ef-4b0f-b1c8-af74932c2c4c\") " pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.577186 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95712f82-07ef-4b0f-b1c8-af74932c2c4c-logs\") pod \"barbican-keystone-listener-795c4b4b5d-76dwx\" (UID: \"95712f82-07ef-4b0f-b1c8-af74932c2c4c\") " pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.577698 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95712f82-07ef-4b0f-b1c8-af74932c2c4c-logs\") pod \"barbican-keystone-listener-795c4b4b5d-76dwx\" (UID: \"95712f82-07ef-4b0f-b1c8-af74932c2c4c\") " pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.580090 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-pfcld"]
Jan 31 09:24:54 crc kubenswrapper[4830]: E0131 09:24:54.580756 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d941bfc-a5bd-4764-8e53-a77414f25a21" containerName="init"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.580780 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d941bfc-a5bd-4764-8e53-a77414f25a21" containerName="init"
Jan 31 09:24:54 crc kubenswrapper[4830]: E0131 09:24:54.580800 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d941bfc-a5bd-4764-8e53-a77414f25a21" containerName="dnsmasq-dns"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.580807 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d941bfc-a5bd-4764-8e53-a77414f25a21" containerName="dnsmasq-dns"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.581025 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d941bfc-a5bd-4764-8e53-a77414f25a21" containerName="dnsmasq-dns"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.582605 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.644518 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95712f82-07ef-4b0f-b1c8-af74932c2c4c-config-data\") pod \"barbican-keystone-listener-795c4b4b5d-76dwx\" (UID: \"95712f82-07ef-4b0f-b1c8-af74932c2c4c\") " pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.658405 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-9d5f95fb7-h7vp9"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.669470 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95712f82-07ef-4b0f-b1c8-af74932c2c4c-combined-ca-bundle\") pod \"barbican-keystone-listener-795c4b4b5d-76dwx\" (UID: \"95712f82-07ef-4b0f-b1c8-af74932c2c4c\") " pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.720156 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95712f82-07ef-4b0f-b1c8-af74932c2c4c-config-data-custom\") pod \"barbican-keystone-listener-795c4b4b5d-76dwx\" (UID: \"95712f82-07ef-4b0f-b1c8-af74932c2c4c\") " pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.724367 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-dns-swift-storage-0\") pod \"6d941bfc-a5bd-4764-8e53-a77414f25a21\" (UID: \"6d941bfc-a5bd-4764-8e53-a77414f25a21\") "
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.724505 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-ovsdbserver-sb\") pod \"6d941bfc-a5bd-4764-8e53-a77414f25a21\" (UID: \"6d941bfc-a5bd-4764-8e53-a77414f25a21\") "
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.724573 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-config\") pod \"6d941bfc-a5bd-4764-8e53-a77414f25a21\" (UID: \"6d941bfc-a5bd-4764-8e53-a77414f25a21\") "
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.724837 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-ovsdbserver-nb\") pod \"6d941bfc-a5bd-4764-8e53-a77414f25a21\" (UID: \"6d941bfc-a5bd-4764-8e53-a77414f25a21\") "
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.724927 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dpkx\" (UniqueName: \"kubernetes.io/projected/6d941bfc-a5bd-4764-8e53-a77414f25a21-kube-api-access-7dpkx\") pod \"6d941bfc-a5bd-4764-8e53-a77414f25a21\" (UID: \"6d941bfc-a5bd-4764-8e53-a77414f25a21\") "
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.724995 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-dns-svc\") pod \"6d941bfc-a5bd-4764-8e53-a77414f25a21\" (UID: \"6d941bfc-a5bd-4764-8e53-a77414f25a21\") "
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.741044 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-pfcld\" (UID: \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.741134 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-config\") pod \"dnsmasq-dns-75c8ddd69c-pfcld\" (UID: \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.741186 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-pfcld\" (UID: \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.741385 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjjvk\" (UniqueName: \"kubernetes.io/projected/b9f0dccc-8d65-4aa7-81c9-548907df8af4-kube-api-access-fjjvk\") pod \"dnsmasq-dns-75c8ddd69c-pfcld\" (UID: \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.741689 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-pfcld\" (UID: \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.741757 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-pfcld\" (UID: \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.754747 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-pfcld"]
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.853779 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llck6\" (UniqueName: \"kubernetes.io/projected/95712f82-07ef-4b0f-b1c8-af74932c2c4c-kube-api-access-llck6\") pod \"barbican-keystone-listener-795c4b4b5d-76dwx\" (UID: \"95712f82-07ef-4b0f-b1c8-af74932c2c4c\") " pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.911419 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-pfcld\" (UID: \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.912236 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d941bfc-a5bd-4764-8e53-a77414f25a21-kube-api-access-7dpkx" (OuterVolumeSpecName: "kube-api-access-7dpkx") pod "6d941bfc-a5bd-4764-8e53-a77414f25a21" (UID: "6d941bfc-a5bd-4764-8e53-a77414f25a21"). InnerVolumeSpecName "kube-api-access-7dpkx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.914269 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.927813 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-566d86fcf5-mxs88"]
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.934410 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-566d86fcf5-mxs88"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.935527 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-pfcld\" (UID: \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.935624 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-config\") pod \"dnsmasq-dns-75c8ddd69c-pfcld\" (UID: \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.935668 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-pfcld\" (UID: \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.935902 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjjvk\" (UniqueName: \"kubernetes.io/projected/b9f0dccc-8d65-4aa7-81c9-548907df8af4-kube-api-access-fjjvk\") pod \"dnsmasq-dns-75c8ddd69c-pfcld\" (UID: \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.936219 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-pfcld\" (UID: \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.937554 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-pfcld\" (UID: \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.938225 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-pfcld\" (UID: \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.938401 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dpkx\" (UniqueName: \"kubernetes.io/projected/6d941bfc-a5bd-4764-8e53-a77414f25a21-kube-api-access-7dpkx\") on node \"crc\" DevicePath \"\""
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.939135 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-config\") pod \"dnsmasq-dns-75c8ddd69c-pfcld\" (UID: \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld"
Jan 31 09:24:54 crc kubenswrapper[4830]: I0131 09:24:54.940749 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-pfcld\" (UID: \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.005636 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-845499c66-m62t7"]
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:54.995709 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-pfcld\" (UID: \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.018545 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-845499c66-m62t7"]
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.018749 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-845499c66-m62t7"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.040785 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b-logs\") pod \"barbican-worker-566d86fcf5-mxs88\" (UID: \"6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b\") " pod="openstack/barbican-worker-566d86fcf5-mxs88"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.042683 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b-config-data-custom\") pod \"barbican-worker-566d86fcf5-mxs88\" (UID: \"6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b\") " pod="openstack/barbican-worker-566d86fcf5-mxs88"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.042793 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b-config-data\") pod \"barbican-worker-566d86fcf5-mxs88\" (UID: \"6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b\") " pod="openstack/barbican-worker-566d86fcf5-mxs88"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.042997 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzrcv\" (UniqueName: \"kubernetes.io/projected/6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b-kube-api-access-xzrcv\") pod \"barbican-worker-566d86fcf5-mxs88\" (UID: \"6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b\") " pod="openstack/barbican-worker-566d86fcf5-mxs88"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.043064 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b-combined-ca-bundle\") pod \"barbican-worker-566d86fcf5-mxs88\" (UID: \"6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b\") " pod="openstack/barbican-worker-566d86fcf5-mxs88"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.050324 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-566d86fcf5-mxs88"]
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.093193 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjjvk\" (UniqueName: \"kubernetes.io/projected/b9f0dccc-8d65-4aa7-81c9-548907df8af4-kube-api-access-fjjvk\") pod \"dnsmasq-dns-75c8ddd69c-pfcld\" (UID: \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.113571 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7bdc9b7794-hvbg6"]
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.116273 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7bdc9b7794-hvbg6"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.123282 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.139794 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" event={"ID":"6d941bfc-a5bd-4764-8e53-a77414f25a21","Type":"ContainerDied","Data":"bdaea899f23c9c78e7578be191ab2a37d833affa561fa837992f99403c99e05f"}
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.139867 4830 scope.go:117] "RemoveContainer" containerID="d84a1c23794a60f4c621178ff37b1c7344b9bb8cb7c28fc154e40f0e512c6728"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.140103 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-7w87z"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.145585 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b-config-data-custom\") pod \"barbican-worker-566d86fcf5-mxs88\" (UID: \"6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b\") " pod="openstack/barbican-worker-566d86fcf5-mxs88"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.145642 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050c36c9-2a82-4d10-a00f-c252a73374ba-combined-ca-bundle\") pod \"barbican-api-7bdc9b7794-hvbg6\" (UID: \"050c36c9-2a82-4d10-a00f-c252a73374ba\") " pod="openstack/barbican-api-7bdc9b7794-hvbg6"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.145776 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b-config-data\") pod \"barbican-worker-566d86fcf5-mxs88\" (UID: \"6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b\") " pod="openstack/barbican-worker-566d86fcf5-mxs88"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.146341 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/050c36c9-2a82-4d10-a00f-c252a73374ba-logs\") pod \"barbican-api-7bdc9b7794-hvbg6\" (UID: \"050c36c9-2a82-4d10-a00f-c252a73374ba\") " pod="openstack/barbican-api-7bdc9b7794-hvbg6"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.146370 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d9d32f8-4cbd-41db-b7a8-041cdbb90b29-config-data-custom\") pod \"barbican-keystone-listener-845499c66-m62t7\" (UID: \"4d9d32f8-4cbd-41db-b7a8-041cdbb90b29\") " pod="openstack/barbican-keystone-listener-845499c66-m62t7"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.146399 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzrcv\" (UniqueName: \"kubernetes.io/projected/6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b-kube-api-access-xzrcv\") pod \"barbican-worker-566d86fcf5-mxs88\" (UID: \"6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b\") " pod="openstack/barbican-worker-566d86fcf5-mxs88"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.146429 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b-combined-ca-bundle\") pod \"barbican-worker-566d86fcf5-mxs88\" (UID: \"6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b\") " pod="openstack/barbican-worker-566d86fcf5-mxs88"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.146458 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsjn9\" (UniqueName: \"kubernetes.io/projected/4d9d32f8-4cbd-41db-b7a8-041cdbb90b29-kube-api-access-qsjn9\") pod \"barbican-keystone-listener-845499c66-m62t7\" (UID: \"4d9d32f8-4cbd-41db-b7a8-041cdbb90b29\") " pod="openstack/barbican-keystone-listener-845499c66-m62t7"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.146497 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d9d32f8-4cbd-41db-b7a8-041cdbb90b29-logs\") pod \"barbican-keystone-listener-845499c66-m62t7\" (UID: \"4d9d32f8-4cbd-41db-b7a8-041cdbb90b29\") " pod="openstack/barbican-keystone-listener-845499c66-m62t7"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.146529 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b-logs\") pod \"barbican-worker-566d86fcf5-mxs88\" (UID: \"6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b\") " pod="openstack/barbican-worker-566d86fcf5-mxs88"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.146561 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050c36c9-2a82-4d10-a00f-c252a73374ba-config-data\") pod \"barbican-api-7bdc9b7794-hvbg6\" (UID: \"050c36c9-2a82-4d10-a00f-c252a73374ba\") " pod="openstack/barbican-api-7bdc9b7794-hvbg6"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.146606 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d9d32f8-4cbd-41db-b7a8-041cdbb90b29-config-data\") pod \"barbican-keystone-listener-845499c66-m62t7\" (UID: \"4d9d32f8-4cbd-41db-b7a8-041cdbb90b29\") " pod="openstack/barbican-keystone-listener-845499c66-m62t7"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.146712 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4tz6\" (UniqueName: \"kubernetes.io/projected/050c36c9-2a82-4d10-a00f-c252a73374ba-kube-api-access-g4tz6\") pod \"barbican-api-7bdc9b7794-hvbg6\" (UID: \"050c36c9-2a82-4d10-a00f-c252a73374ba\") " pod="openstack/barbican-api-7bdc9b7794-hvbg6"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.146833 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/050c36c9-2a82-4d10-a00f-c252a73374ba-config-data-custom\") pod \"barbican-api-7bdc9b7794-hvbg6\" (UID: \"050c36c9-2a82-4d10-a00f-c252a73374ba\") " pod="openstack/barbican-api-7bdc9b7794-hvbg6"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.146906 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d9d32f8-4cbd-41db-b7a8-041cdbb90b29-combined-ca-bundle\") pod \"barbican-keystone-listener-845499c66-m62t7\" (UID: \"4d9d32f8-4cbd-41db-b7a8-041cdbb90b29\") " pod="openstack/barbican-keystone-listener-845499c66-m62t7"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.148031 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b-logs\") pod \"barbican-worker-566d86fcf5-mxs88\" (UID: \"6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b\") " pod="openstack/barbican-worker-566d86fcf5-mxs88"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.164359 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b-config-data-custom\") pod \"barbican-worker-566d86fcf5-mxs88\" (UID: \"6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b\") " pod="openstack/barbican-worker-566d86fcf5-mxs88"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.165485 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7995f9f9fb-6r8k4" event={"ID":"43cbd586-1683-440f-992a-113173028a37","Type":"ContainerStarted","Data":"8687d9d0250fd0b21fcf11768f66ecf3bc54a50a638ec7b0dfdb35d0934c31c2"}
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.168444 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b-config-data\") pod \"barbican-worker-566d86fcf5-mxs88\" (UID: \"6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b\") " pod="openstack/barbican-worker-566d86fcf5-mxs88"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.182851 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7bdc9b7794-hvbg6"]
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.195330 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzrcv\" (UniqueName: \"kubernetes.io/projected/6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b-kube-api-access-xzrcv\") pod \"barbican-worker-566d86fcf5-mxs88\" (UID: \"6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b\") " pod="openstack/barbican-worker-566d86fcf5-mxs88"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.215991 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b-combined-ca-bundle\") pod \"barbican-worker-566d86fcf5-mxs88\" (UID: \"6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b\") " pod="openstack/barbican-worker-566d86fcf5-mxs88"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.232393 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.251714 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4tz6\" (UniqueName: \"kubernetes.io/projected/050c36c9-2a82-4d10-a00f-c252a73374ba-kube-api-access-g4tz6\") pod \"barbican-api-7bdc9b7794-hvbg6\" (UID: \"050c36c9-2a82-4d10-a00f-c252a73374ba\") " pod="openstack/barbican-api-7bdc9b7794-hvbg6"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.251861 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/050c36c9-2a82-4d10-a00f-c252a73374ba-config-data-custom\") pod \"barbican-api-7bdc9b7794-hvbg6\" (UID: \"050c36c9-2a82-4d10-a00f-c252a73374ba\") " pod="openstack/barbican-api-7bdc9b7794-hvbg6"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.251942 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d9d32f8-4cbd-41db-b7a8-041cdbb90b29-combined-ca-bundle\") pod \"barbican-keystone-listener-845499c66-m62t7\" (UID: \"4d9d32f8-4cbd-41db-b7a8-041cdbb90b29\") " pod="openstack/barbican-keystone-listener-845499c66-m62t7"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.252044 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050c36c9-2a82-4d10-a00f-c252a73374ba-combined-ca-bundle\") pod \"barbican-api-7bdc9b7794-hvbg6\" (UID: \"050c36c9-2a82-4d10-a00f-c252a73374ba\") " pod="openstack/barbican-api-7bdc9b7794-hvbg6"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.252119 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/050c36c9-2a82-4d10-a00f-c252a73374ba-logs\") pod \"barbican-api-7bdc9b7794-hvbg6\" (UID: \"050c36c9-2a82-4d10-a00f-c252a73374ba\") " pod="openstack/barbican-api-7bdc9b7794-hvbg6"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.252151 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d9d32f8-4cbd-41db-b7a8-041cdbb90b29-config-data-custom\") pod \"barbican-keystone-listener-845499c66-m62t7\" (UID: \"4d9d32f8-4cbd-41db-b7a8-041cdbb90b29\") " pod="openstack/barbican-keystone-listener-845499c66-m62t7"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.252218 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsjn9\" (UniqueName: \"kubernetes.io/projected/4d9d32f8-4cbd-41db-b7a8-041cdbb90b29-kube-api-access-qsjn9\") pod \"barbican-keystone-listener-845499c66-m62t7\" (UID: \"4d9d32f8-4cbd-41db-b7a8-041cdbb90b29\") " pod="openstack/barbican-keystone-listener-845499c66-m62t7"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.252267 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d9d32f8-4cbd-41db-b7a8-041cdbb90b29-logs\") pod \"barbican-keystone-listener-845499c66-m62t7\" (UID: \"4d9d32f8-4cbd-41db-b7a8-041cdbb90b29\") " pod="openstack/barbican-keystone-listener-845499c66-m62t7"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.252317 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050c36c9-2a82-4d10-a00f-c252a73374ba-config-data\") pod \"barbican-api-7bdc9b7794-hvbg6\" (UID: \"050c36c9-2a82-4d10-a00f-c252a73374ba\") " pod="openstack/barbican-api-7bdc9b7794-hvbg6"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.252339 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d9d32f8-4cbd-41db-b7a8-041cdbb90b29-config-data\") pod \"barbican-keystone-listener-845499c66-m62t7\" (UID: \"4d9d32f8-4cbd-41db-b7a8-041cdbb90b29\") " pod="openstack/barbican-keystone-listener-845499c66-m62t7"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.259148 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d9d32f8-4cbd-41db-b7a8-041cdbb90b29-config-data\") pod \"barbican-keystone-listener-845499c66-m62t7\" (UID: \"4d9d32f8-4cbd-41db-b7a8-041cdbb90b29\") " pod="openstack/barbican-keystone-listener-845499c66-m62t7"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.259635 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d9d32f8-4cbd-41db-b7a8-041cdbb90b29-logs\") pod \"barbican-keystone-listener-845499c66-m62t7\" (UID: \"4d9d32f8-4cbd-41db-b7a8-041cdbb90b29\") " pod="openstack/barbican-keystone-listener-845499c66-m62t7"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.265421 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d9d32f8-4cbd-41db-b7a8-041cdbb90b29-combined-ca-bundle\") pod \"barbican-keystone-listener-845499c66-m62t7\" (UID: \"4d9d32f8-4cbd-41db-b7a8-041cdbb90b29\") " pod="openstack/barbican-keystone-listener-845499c66-m62t7"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.296434 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsjn9\" (UniqueName: \"kubernetes.io/projected/4d9d32f8-4cbd-41db-b7a8-041cdbb90b29-kube-api-access-qsjn9\") pod \"barbican-keystone-listener-845499c66-m62t7\" (UID: \"4d9d32f8-4cbd-41db-b7a8-041cdbb90b29\") " pod="openstack/barbican-keystone-listener-845499c66-m62t7"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.296543 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d9d32f8-4cbd-41db-b7a8-041cdbb90b29-config-data-custom\") pod \"barbican-keystone-listener-845499c66-m62t7\" (UID: \"4d9d32f8-4cbd-41db-b7a8-041cdbb90b29\") " pod="openstack/barbican-keystone-listener-845499c66-m62t7"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.309758 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/050c36c9-2a82-4d10-a00f-c252a73374ba-logs\") pod \"barbican-api-7bdc9b7794-hvbg6\" (UID: \"050c36c9-2a82-4d10-a00f-c252a73374ba\") " pod="openstack/barbican-api-7bdc9b7794-hvbg6"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.310764 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4tz6\" (UniqueName: \"kubernetes.io/projected/050c36c9-2a82-4d10-a00f-c252a73374ba-kube-api-access-g4tz6\") pod \"barbican-api-7bdc9b7794-hvbg6\" (UID: \"050c36c9-2a82-4d10-a00f-c252a73374ba\") " pod="openstack/barbican-api-7bdc9b7794-hvbg6"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.310814 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/050c36c9-2a82-4d10-a00f-c252a73374ba-config-data-custom\") pod \"barbican-api-7bdc9b7794-hvbg6\" (UID: \"050c36c9-2a82-4d10-a00f-c252a73374ba\") " pod="openstack/barbican-api-7bdc9b7794-hvbg6"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.313901 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050c36c9-2a82-4d10-a00f-c252a73374ba-config-data\") pod \"barbican-api-7bdc9b7794-hvbg6\" (UID: \"050c36c9-2a82-4d10-a00f-c252a73374ba\") " pod="openstack/barbican-api-7bdc9b7794-hvbg6"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.315836 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6d941bfc-a5bd-4764-8e53-a77414f25a21" (UID: "6d941bfc-a5bd-4764-8e53-a77414f25a21"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.317075 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6d941bfc-a5bd-4764-8e53-a77414f25a21" (UID: "6d941bfc-a5bd-4764-8e53-a77414f25a21"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.317533 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6d941bfc-a5bd-4764-8e53-a77414f25a21" (UID: "6d941bfc-a5bd-4764-8e53-a77414f25a21"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.319085 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-config" (OuterVolumeSpecName: "config") pod "6d941bfc-a5bd-4764-8e53-a77414f25a21" (UID: "6d941bfc-a5bd-4764-8e53-a77414f25a21"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.318945 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050c36c9-2a82-4d10-a00f-c252a73374ba-combined-ca-bundle\") pod \"barbican-api-7bdc9b7794-hvbg6\" (UID: \"050c36c9-2a82-4d10-a00f-c252a73374ba\") " pod="openstack/barbican-api-7bdc9b7794-hvbg6"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.336773 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-566d86fcf5-mxs88"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.337259 4830 scope.go:117] "RemoveContainer" containerID="138c3c0e08a8105a6e1cae80a2cf9fc21dcf54e1d9169135a0e3b2b82e6fd73e"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.357425 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6d941bfc-a5bd-4764-8e53-a77414f25a21" (UID: "6d941bfc-a5bd-4764-8e53-a77414f25a21"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.357790 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.357834 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.357847 4830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.357858 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.357868 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d941bfc-a5bd-4764-8e53-a77414f25a21-config\") on node \"crc\" DevicePath \"\""
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.367171 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-845499c66-m62t7"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.450521 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7bdc9b7794-hvbg6"
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.842174 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-7w87z"]
Jan 31 09:24:55 crc kubenswrapper[4830]: I0131 09:24:55.846069 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-7w87z"]
Jan 31 09:24:56 crc kubenswrapper[4830]: I0131 09:24:56.452943 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d941bfc-a5bd-4764-8e53-a77414f25a21" path="/var/lib/kubelet/pods/6d941bfc-a5bd-4764-8e53-a77414f25a21/volumes"
Jan 31 09:24:56 crc kubenswrapper[4830]: I0131 09:24:56.455390 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39688f84-c227-4658-aee1-ce5e5d450ca1","Type":"ContainerStarted","Data":"ada6cefce159c5c6e84f6c0ce9d82ac301872a4c0b6ad072a5e89202581763bc"}
Jan 31 09:24:56 crc kubenswrapper[4830]: I0131 09:24:56.455448 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-57d8f8c487-sqqph"]
Jan 31 09:24:56 crc kubenswrapper[4830]: I0131 09:24:56.455483 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7995f9f9fb-6r8k4" event={"ID":"43cbd586-1683-440f-992a-113173028a37","Type":"ContainerStarted","Data":"eabcd1235c056e6d23ed658dd38e5f2e72bac2c473103f6f3e4acd1aa0dacec8"}
Jan 31 09:24:56 crc kubenswrapper[4830]: I0131 09:24:56.455506 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-25d9r"]
Jan 31 09:24:56 crc kubenswrapper[4830]: I0131 09:24:56.763159 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-9d5f95fb7-h7vp9"]
Jan 31 09:24:56 crc kubenswrapper[4830]: W0131 09:24:56.782006 4830 manager.go:1169] Failed to process
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95712f82_07ef_4b0f_b1c8_af74932c2c4c.slice/crio-5feb49ec710e5078e396b397234e0693c301f749ac98c78d4f9c94404cfc3cf2 WatchSource:0}: Error finding container 5feb49ec710e5078e396b397234e0693c301f749ac98c78d4f9c94404cfc3cf2: Status 404 returned error can't find the container with id 5feb49ec710e5078e396b397234e0693c301f749ac98c78d4f9c94404cfc3cf2 Jan 31 09:24:56 crc kubenswrapper[4830]: I0131 09:24:56.787423 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-795c4b4b5d-76dwx"] Jan 31 09:24:57 crc kubenswrapper[4830]: I0131 09:24:57.156922 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-pfcld"] Jan 31 09:24:57 crc kubenswrapper[4830]: I0131 09:24:57.479907 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld" event={"ID":"b9f0dccc-8d65-4aa7-81c9-548907df8af4","Type":"ContainerStarted","Data":"c94c243fc25f354758328246b89afd6381ff241fdfdd3f787538de0920265d86"} Jan 31 09:24:57 crc kubenswrapper[4830]: I0131 09:24:57.482337 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-566d86fcf5-mxs88"] Jan 31 09:24:57 crc kubenswrapper[4830]: I0131 09:24:57.503707 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-845499c66-m62t7"] Jan 31 09:24:57 crc kubenswrapper[4830]: I0131 09:24:57.504872 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-57d8f8c487-sqqph" event={"ID":"230488d2-6bec-4165-8ff4-4854cc6d53f6","Type":"ContainerStarted","Data":"56d68538cf4c8263a2b4bd54b7de4cbec1d5609b8ae59ab30287e702e4435e96"} Jan 31 09:24:57 crc kubenswrapper[4830]: I0131 09:24:57.504913 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-57d8f8c487-sqqph" event={"ID":"230488d2-6bec-4165-8ff4-4854cc6d53f6","Type":"ContainerStarted","Data":"73c26828b6897e36d67685bfd7c67e01fc1c47786c614c30645414ee433d5613"} Jan 31 09:24:57 crc kubenswrapper[4830]: I0131 09:24:57.504977 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-57d8f8c487-sqqph" Jan 31 09:24:57 crc kubenswrapper[4830]: I0131 09:24:57.511094 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-hh79w" event={"ID":"6324b6ba-4288-44f4-bf87-1a4356c1a9f0","Type":"ContainerStarted","Data":"68201a955abf6cbbb906e8ea8b01b1ed9f44aea832f898360508559e3d2781fe"} Jan 31 09:24:57 crc kubenswrapper[4830]: I0131 09:24:57.530488 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7bdc9b7794-hvbg6"] Jan 31 09:24:57 crc kubenswrapper[4830]: I0131 09:24:57.538701 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7995f9f9fb-6r8k4" event={"ID":"43cbd586-1683-440f-992a-113173028a37","Type":"ContainerStarted","Data":"7c6922b39c4dd9c7624db328248b385cabff90417731eff072afdbd30b6ab102"} Jan 31 09:24:57 crc kubenswrapper[4830]: I0131 09:24:57.539004 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:24:57 crc kubenswrapper[4830]: I0131 09:24:57.546480 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx" event={"ID":"95712f82-07ef-4b0f-b1c8-af74932c2c4c","Type":"ContainerStarted","Data":"5feb49ec710e5078e396b397234e0693c301f749ac98c78d4f9c94404cfc3cf2"} Jan 31 09:24:57 crc 
kubenswrapper[4830]: I0131 09:24:57.552669 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9d5f95fb7-h7vp9" event={"ID":"e1fe9f02-72ff-45af-8728-91cecff0d1ac","Type":"ContainerStarted","Data":"e566cefeb57470cec6a2eff1bb891beb339e6e053f250d03ccca28789637710a"} Jan 31 09:24:57 crc kubenswrapper[4830]: I0131 09:24:57.558856 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-57d8f8c487-sqqph" podStartSLOduration=4.558822082 podStartE2EDuration="4.558822082s" podCreationTimestamp="2026-01-31 09:24:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:24:57.532349979 +0000 UTC m=+1442.025712431" watchObservedRunningTime="2026-01-31 09:24:57.558822082 +0000 UTC m=+1442.052184514" Jan 31 09:24:57 crc kubenswrapper[4830]: W0131 09:24:57.564615 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d9d32f8_4cbd_41db_b7a8_041cdbb90b29.slice/crio-be9af9eddf27eaa28512c9293b109a78ba60e981d11196ee70102a51f65631f5 WatchSource:0}: Error finding container be9af9eddf27eaa28512c9293b109a78ba60e981d11196ee70102a51f65631f5: Status 404 returned error can't find the container with id be9af9eddf27eaa28512c9293b109a78ba60e981d11196ee70102a51f65631f5 Jan 31 09:24:57 crc kubenswrapper[4830]: I0131 09:24:57.570193 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-hh79w" podStartSLOduration=5.856193844 podStartE2EDuration="1m1.570166624s" podCreationTimestamp="2026-01-31 09:23:56 +0000 UTC" firstStartedPulling="2026-01-31 09:23:59.336947452 +0000 UTC m=+1383.830309894" lastFinishedPulling="2026-01-31 09:24:55.050920232 +0000 UTC m=+1439.544282674" observedRunningTime="2026-01-31 09:24:57.556379142 +0000 UTC m=+1442.049741584" watchObservedRunningTime="2026-01-31 09:24:57.570166624 +0000 UTC m=+1442.063529066" Jan 31 09:24:57 crc kubenswrapper[4830]: I0131 09:24:57.592225 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7995f9f9fb-6r8k4" podStartSLOduration=16.592202751 podStartE2EDuration="16.592202751s" podCreationTimestamp="2026-01-31 09:24:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:24:57.588856266 +0000 UTC m=+1442.082218708" watchObservedRunningTime="2026-01-31 09:24:57.592202751 +0000 UTC m=+1442.085565193" Jan 31 09:24:57 crc kubenswrapper[4830]: I0131 09:24:57.606616 4830 generic.go:334] "Generic (PLEG): container finished" podID="d136e2d6-6468-43c5-942f-71b672962cae" containerID="508754901a7bb2391c6303ffae85608aeb0943832edadd27887721c3e28c2281" exitCode=0 Jan 31 09:24:57 crc kubenswrapper[4830]: I0131 09:24:57.606689 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-25d9r" event={"ID":"d136e2d6-6468-43c5-942f-71b672962cae","Type":"ContainerDied","Data":"508754901a7bb2391c6303ffae85608aeb0943832edadd27887721c3e28c2281"} Jan 31 09:24:57 crc kubenswrapper[4830]: I0131 09:24:57.606760 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-25d9r" event={"ID":"d136e2d6-6468-43c5-942f-71b672962cae","Type":"ContainerStarted","Data":"67b840635369ed683f4532b40018dab542c18dcdc088b16d45f05a29d36f7d1d"} Jan 31 09:24:58 crc kubenswrapper[4830]: I0131 
09:24:58.014470 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 31 09:24:58 crc kubenswrapper[4830]: I0131 09:24:58.014987 4830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 09:24:58 crc kubenswrapper[4830]: I0131 09:24:58.046910 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 31 09:24:58 crc kubenswrapper[4830]: I0131 09:24:58.204927 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8b5c85b87-7w87z" podUID="6d941bfc-a5bd-4764-8e53-a77414f25a21" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.185:5353: i/o timeout" Jan 31 09:24:58 crc kubenswrapper[4830]: I0131 09:24:58.505653 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 31 09:24:58 crc kubenswrapper[4830]: I0131 09:24:58.506129 4830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 09:24:58 crc kubenswrapper[4830]: I0131 09:24:58.698640 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-845499c66-m62t7" event={"ID":"4d9d32f8-4cbd-41db-b7a8-041cdbb90b29","Type":"ContainerStarted","Data":"be9af9eddf27eaa28512c9293b109a78ba60e981d11196ee70102a51f65631f5"} Jan 31 09:24:58 crc kubenswrapper[4830]: I0131 09:24:58.712982 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bdc9b7794-hvbg6" event={"ID":"050c36c9-2a82-4d10-a00f-c252a73374ba","Type":"ContainerStarted","Data":"c85b47e5ae3e82c083265949830b049a0d31fa8eab66ece86afe051157a570da"} Jan 31 09:24:58 crc kubenswrapper[4830]: I0131 09:24:58.713052 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bdc9b7794-hvbg6" event={"ID":"050c36c9-2a82-4d10-a00f-c252a73374ba","Type":"ContainerStarted","Data":"6108ecb19ef4c0f5ca909b0b5d87dde289d34732e32ac4e51e3c00c063d98067"} Jan 31 09:24:58 crc kubenswrapper[4830]: I0131 09:24:58.748351 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-w6kxz" event={"ID":"0617092f-40a9-4d3d-b472-f284a2b24000","Type":"ContainerStarted","Data":"9bec4c50fcc4d62de1378f26906fe163c84640b76bf85c39066f6aa600cbcc69"} Jan 31 09:24:58 crc kubenswrapper[4830]: I0131 09:24:58.756655 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-566d86fcf5-mxs88" event={"ID":"6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b","Type":"ContainerStarted","Data":"dd7a38bea187c81ed72e61b1191c848c23d63cca8c62d30282313c7b2836a8a6"} Jan 31 09:24:58 crc kubenswrapper[4830]: I0131 09:24:58.781236 4830 generic.go:334] "Generic (PLEG): container finished" podID="b9f0dccc-8d65-4aa7-81c9-548907df8af4" containerID="fc66f93cf2dd5e1ef6bdfeed1b2f9c16ad775a8d00fc09bdce347f7a625175bc" exitCode=0 Jan 31 09:24:58 crc kubenswrapper[4830]: I0131 09:24:58.781460 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld" event={"ID":"b9f0dccc-8d65-4aa7-81c9-548907df8af4","Type":"ContainerDied","Data":"fc66f93cf2dd5e1ef6bdfeed1b2f9c16ad775a8d00fc09bdce347f7a625175bc"} Jan 31 09:24:58 crc kubenswrapper[4830]: I0131 09:24:58.782337 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:24:58 crc kubenswrapper[4830]: I0131 09:24:58.800042 4830 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/cinder-db-sync-w6kxz" podStartSLOduration=7.492420853 podStartE2EDuration="1m2.800011829s" podCreationTimestamp="2026-01-31 09:23:56 +0000 UTC" firstStartedPulling="2026-01-31 09:23:59.55847625 +0000 UTC m=+1384.051838682" lastFinishedPulling="2026-01-31 09:24:54.866067206 +0000 UTC m=+1439.359429658" observedRunningTime="2026-01-31 09:24:58.788469651 +0000 UTC m=+1443.281832093" watchObservedRunningTime="2026-01-31 09:24:58.800011829 +0000 UTC m=+1443.293374271" Jan 31 09:24:59 crc kubenswrapper[4830]: I0131 09:24:59.316508 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-bd45896b-5lsfl"] Jan 31 09:24:59 crc kubenswrapper[4830]: I0131 09:24:59.330673 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:24:59 crc kubenswrapper[4830]: I0131 09:24:59.337774 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 31 09:24:59 crc kubenswrapper[4830]: I0131 09:24:59.339942 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 31 09:24:59 crc kubenswrapper[4830]: I0131 09:24:59.354052 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-bd45896b-5lsfl"] Jan 31 09:24:59 crc kubenswrapper[4830]: I0131 09:24:59.367442 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 31 09:24:59 crc kubenswrapper[4830]: I0131 09:24:59.442700 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68bf6013-de5a-401f-868a-79325ed5ab24-config-data-custom\") pod \"barbican-api-bd45896b-5lsfl\" (UID: \"68bf6013-de5a-401f-868a-79325ed5ab24\") " pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:24:59 crc kubenswrapper[4830]: I0131 09:24:59.442805 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68bf6013-de5a-401f-868a-79325ed5ab24-internal-tls-certs\") pod \"barbican-api-bd45896b-5lsfl\" (UID: \"68bf6013-de5a-401f-868a-79325ed5ab24\") " pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:24:59 crc kubenswrapper[4830]: I0131 09:24:59.442864 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68bf6013-de5a-401f-868a-79325ed5ab24-config-data\") pod \"barbican-api-bd45896b-5lsfl\" (UID: \"68bf6013-de5a-401f-868a-79325ed5ab24\") " pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:24:59 crc kubenswrapper[4830]: I0131 09:24:59.442962 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68bf6013-de5a-401f-868a-79325ed5ab24-logs\") pod \"barbican-api-bd45896b-5lsfl\" (UID: \"68bf6013-de5a-401f-868a-79325ed5ab24\") " pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:24:59 crc kubenswrapper[4830]: I0131 09:24:59.443011 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68bf6013-de5a-401f-868a-79325ed5ab24-combined-ca-bundle\") pod \"barbican-api-bd45896b-5lsfl\" (UID: \"68bf6013-de5a-401f-868a-79325ed5ab24\") " pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:24:59 
crc kubenswrapper[4830]: I0131 09:24:59.443030 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68bf6013-de5a-401f-868a-79325ed5ab24-public-tls-certs\") pod \"barbican-api-bd45896b-5lsfl\" (UID: \"68bf6013-de5a-401f-868a-79325ed5ab24\") " pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:24:59 crc kubenswrapper[4830]: I0131 09:24:59.443115 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bhxv\" (UniqueName: \"kubernetes.io/projected/68bf6013-de5a-401f-868a-79325ed5ab24-kube-api-access-5bhxv\") pod \"barbican-api-bd45896b-5lsfl\" (UID: \"68bf6013-de5a-401f-868a-79325ed5ab24\") " pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:24:59 crc kubenswrapper[4830]: I0131 09:24:59.546016 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bhxv\" (UniqueName: \"kubernetes.io/projected/68bf6013-de5a-401f-868a-79325ed5ab24-kube-api-access-5bhxv\") pod \"barbican-api-bd45896b-5lsfl\" (UID: \"68bf6013-de5a-401f-868a-79325ed5ab24\") " pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:24:59 crc kubenswrapper[4830]: I0131 09:24:59.546167 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68bf6013-de5a-401f-868a-79325ed5ab24-config-data-custom\") pod \"barbican-api-bd45896b-5lsfl\" (UID: \"68bf6013-de5a-401f-868a-79325ed5ab24\") " pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:24:59 crc kubenswrapper[4830]: I0131 09:24:59.546256 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68bf6013-de5a-401f-868a-79325ed5ab24-internal-tls-certs\") pod \"barbican-api-bd45896b-5lsfl\" (UID: \"68bf6013-de5a-401f-868a-79325ed5ab24\") " pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:24:59 crc kubenswrapper[4830]: I0131 09:24:59.546316 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68bf6013-de5a-401f-868a-79325ed5ab24-config-data\") pod \"barbican-api-bd45896b-5lsfl\" (UID: \"68bf6013-de5a-401f-868a-79325ed5ab24\") " pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:24:59 crc kubenswrapper[4830]: I0131 09:24:59.546430 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68bf6013-de5a-401f-868a-79325ed5ab24-logs\") pod \"barbican-api-bd45896b-5lsfl\" (UID: \"68bf6013-de5a-401f-868a-79325ed5ab24\") " pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:24:59 crc kubenswrapper[4830]: I0131 09:24:59.546470 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68bf6013-de5a-401f-868a-79325ed5ab24-combined-ca-bundle\") pod \"barbican-api-bd45896b-5lsfl\" (UID: \"68bf6013-de5a-401f-868a-79325ed5ab24\") " pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:24:59 crc kubenswrapper[4830]: I0131 09:24:59.546494 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68bf6013-de5a-401f-868a-79325ed5ab24-public-tls-certs\") pod \"barbican-api-bd45896b-5lsfl\" (UID: \"68bf6013-de5a-401f-868a-79325ed5ab24\") " pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:24:59 crc 
kubenswrapper[4830]: I0131 09:24:59.548688 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68bf6013-de5a-401f-868a-79325ed5ab24-logs\") pod \"barbican-api-bd45896b-5lsfl\" (UID: \"68bf6013-de5a-401f-868a-79325ed5ab24\") " pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:24:59 crc kubenswrapper[4830]: I0131 09:24:59.556616 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68bf6013-de5a-401f-868a-79325ed5ab24-public-tls-certs\") pod \"barbican-api-bd45896b-5lsfl\" (UID: \"68bf6013-de5a-401f-868a-79325ed5ab24\") " pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:24:59 crc kubenswrapper[4830]: I0131 09:24:59.561627 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68bf6013-de5a-401f-868a-79325ed5ab24-combined-ca-bundle\") pod \"barbican-api-bd45896b-5lsfl\" (UID: \"68bf6013-de5a-401f-868a-79325ed5ab24\") " pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:24:59 crc kubenswrapper[4830]: I0131 09:24:59.567958 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68bf6013-de5a-401f-868a-79325ed5ab24-config-data-custom\") pod \"barbican-api-bd45896b-5lsfl\" (UID: \"68bf6013-de5a-401f-868a-79325ed5ab24\") " pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:24:59 crc kubenswrapper[4830]: I0131 09:24:59.576420 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68bf6013-de5a-401f-868a-79325ed5ab24-internal-tls-certs\") pod \"barbican-api-bd45896b-5lsfl\" (UID: \"68bf6013-de5a-401f-868a-79325ed5ab24\") " pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:24:59 crc kubenswrapper[4830]: I0131 09:24:59.577465 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68bf6013-de5a-401f-868a-79325ed5ab24-config-data\") pod \"barbican-api-bd45896b-5lsfl\" (UID: \"68bf6013-de5a-401f-868a-79325ed5ab24\") " pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:24:59 crc kubenswrapper[4830]: I0131 09:24:59.581796 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bhxv\" (UniqueName: \"kubernetes.io/projected/68bf6013-de5a-401f-868a-79325ed5ab24-kube-api-access-5bhxv\") pod \"barbican-api-bd45896b-5lsfl\" (UID: \"68bf6013-de5a-401f-868a-79325ed5ab24\") " pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:24:59 crc kubenswrapper[4830]: I0131 09:24:59.686905 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:25:00 crc kubenswrapper[4830]: I0131 09:25:00.627573 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-bd45896b-5lsfl"] Jan 31 09:25:00 crc kubenswrapper[4830]: I0131 09:25:00.827027 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bdc9b7794-hvbg6" event={"ID":"050c36c9-2a82-4d10-a00f-c252a73374ba","Type":"ContainerStarted","Data":"4e2cdd360766961e110664f97744c088c85575b7622712bc721ce9aa80105b32"} Jan 31 09:25:00 crc kubenswrapper[4830]: I0131 09:25:00.827128 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7bdc9b7794-hvbg6" Jan 31 09:25:00 crc kubenswrapper[4830]: I0131 09:25:00.830479 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld" event={"ID":"b9f0dccc-8d65-4aa7-81c9-548907df8af4","Type":"ContainerStarted","Data":"8de8560fc90445522a14e354f527ee56a73e3d5d539f428fcc3ddd4040d2e3b9"} Jan 31 09:25:00 crc kubenswrapper[4830]: I0131 09:25:00.830672 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld" Jan 31 09:25:00 crc kubenswrapper[4830]: I0131 09:25:00.871164 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7bdc9b7794-hvbg6" podStartSLOduration=6.871125081 podStartE2EDuration="6.871125081s" podCreationTimestamp="2026-01-31 09:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:25:00.860637603 +0000 UTC m=+1445.354000035" watchObservedRunningTime="2026-01-31 09:25:00.871125081 +0000 UTC m=+1445.364487523" Jan 31 09:25:00 crc kubenswrapper[4830]: I0131 09:25:00.896309 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld" podStartSLOduration=6.896286557 podStartE2EDuration="6.896286557s" podCreationTimestamp="2026-01-31 09:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:25:00.890621416 +0000 UTC m=+1445.383983858" watchObservedRunningTime="2026-01-31 09:25:00.896286557 +0000 UTC m=+1445.389649009" Jan 31 09:25:01 crc kubenswrapper[4830]: I0131 09:25:01.907025 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7bdc9b7794-hvbg6" Jan 31 09:25:02 crc kubenswrapper[4830]: I0131 09:25:02.945277 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bd45896b-5lsfl" event={"ID":"68bf6013-de5a-401f-868a-79325ed5ab24","Type":"ContainerStarted","Data":"6016d28791b9ce4cbfe71b8221b7bdfa1239354dbf19d9f3d360c36ffc7cc255"} Jan 31 09:25:03 crc kubenswrapper[4830]: I0131 09:25:03.968528 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bd45896b-5lsfl" event={"ID":"68bf6013-de5a-401f-868a-79325ed5ab24","Type":"ContainerStarted","Data":"243a393264710141b9bf6a436efe73defa1e6466080cdd29f1af67bed9bcd661"} Jan 31 09:25:03 crc kubenswrapper[4830]: I0131 09:25:03.975181 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx" event={"ID":"95712f82-07ef-4b0f-b1c8-af74932c2c4c","Type":"ContainerStarted","Data":"e7d6f105a83f9fe19fde582e262b440375a23956562ae712d887136a0e0fdf65"} Jan 31 09:25:03 crc kubenswrapper[4830]: I0131 
09:25:03.977345 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-566d86fcf5-mxs88" event={"ID":"6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b","Type":"ContainerStarted","Data":"73be57070033305a0c6baae45ab93164494650cc73661b4da8964cc0a130a169"} Jan 31 09:25:03 crc kubenswrapper[4830]: I0131 09:25:03.979514 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9d5f95fb7-h7vp9" event={"ID":"e1fe9f02-72ff-45af-8728-91cecff0d1ac","Type":"ContainerStarted","Data":"389d4879fd2258b14fc830972976cb268d3d5cf196b20e4b6201d5082852e672"} Jan 31 09:25:03 crc kubenswrapper[4830]: I0131 09:25:03.993548 4830 generic.go:334] "Generic (PLEG): container finished" podID="d136e2d6-6468-43c5-942f-71b672962cae" containerID="773bab6729dbd54770124250a80e4a7d28587e68070d03405706327f0630e7ee" exitCode=0 Jan 31 09:25:03 crc kubenswrapper[4830]: I0131 09:25:03.993784 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-25d9r" event={"ID":"d136e2d6-6468-43c5-942f-71b672962cae","Type":"ContainerDied","Data":"773bab6729dbd54770124250a80e4a7d28587e68070d03405706327f0630e7ee"} Jan 31 09:25:04 crc kubenswrapper[4830]: I0131 09:25:04.003234 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-845499c66-m62t7" event={"ID":"4d9d32f8-4cbd-41db-b7a8-041cdbb90b29","Type":"ContainerStarted","Data":"1b57bc90094b593b7456f5c01702b34a0383c54107045d13ff4f1c509f738a4f"} Jan 31 09:25:05 crc kubenswrapper[4830]: I0131 09:25:05.025233 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bd45896b-5lsfl" event={"ID":"68bf6013-de5a-401f-868a-79325ed5ab24","Type":"ContainerStarted","Data":"adf8fc8daeb4c36fa55fc79bc56abc26eea1b4afc2f3a8eec4aa77caa6ad2a0d"} Jan 31 09:25:05 crc kubenswrapper[4830]: I0131 09:25:05.031697 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:25:05 crc kubenswrapper[4830]: I0131 09:25:05.032974 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:25:05 crc kubenswrapper[4830]: I0131 09:25:05.036689 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx" event={"ID":"95712f82-07ef-4b0f-b1c8-af74932c2c4c","Type":"ContainerStarted","Data":"4d944a21c183acfd35cdc755af8107e0e14d955b90c01cde24d1da3b845122be"} Jan 31 09:25:05 crc kubenswrapper[4830]: I0131 09:25:05.047186 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-566d86fcf5-mxs88" event={"ID":"6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b","Type":"ContainerStarted","Data":"5567983daccade09473cbda1db29f33f4e2df662d5d366fc6e81c775b6356e04"} Jan 31 09:25:05 crc kubenswrapper[4830]: I0131 09:25:05.052068 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9d5f95fb7-h7vp9" event={"ID":"e1fe9f02-72ff-45af-8728-91cecff0d1ac","Type":"ContainerStarted","Data":"812c2aae3528acf16bf86636dffabbac6b879f0d6db1779b988c28680f5f9e21"} Jan 31 09:25:05 crc kubenswrapper[4830]: I0131 09:25:05.056897 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-845499c66-m62t7" event={"ID":"4d9d32f8-4cbd-41db-b7a8-041cdbb90b29","Type":"ContainerStarted","Data":"ed44e3e401429f2837f3ff17ac8a2f3b72ceeb94a04bddfb9319bf888b9c51b8"} Jan 31 09:25:05 crc kubenswrapper[4830]: I0131 09:25:05.072554 4830 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-bd45896b-5lsfl" podStartSLOduration=6.072529268 podStartE2EDuration="6.072529268s" podCreationTimestamp="2026-01-31 09:24:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:25:05.061614557 +0000 UTC m=+1449.554976999" watchObservedRunningTime="2026-01-31 09:25:05.072529268 +0000 UTC m=+1449.565891710" Jan 31 09:25:05 crc kubenswrapper[4830]: I0131 09:25:05.095159 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-845499c66-m62t7" podStartSLOduration=5.9618256689999996 podStartE2EDuration="11.09512491s" podCreationTimestamp="2026-01-31 09:24:54 +0000 UTC" firstStartedPulling="2026-01-31 09:24:57.607778004 +0000 UTC m=+1442.101140446" lastFinishedPulling="2026-01-31 09:25:02.741077245 +0000 UTC m=+1447.234439687" observedRunningTime="2026-01-31 09:25:05.088961955 +0000 UTC m=+1449.582324397" watchObservedRunningTime="2026-01-31 09:25:05.09512491 +0000 UTC m=+1449.588487352" Jan 31 09:25:05 crc kubenswrapper[4830]: I0131 09:25:05.132355 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-566d86fcf5-mxs88" podStartSLOduration=5.973845511 podStartE2EDuration="11.132325378s" podCreationTimestamp="2026-01-31 09:24:54 +0000 UTC" firstStartedPulling="2026-01-31 09:24:57.592464368 +0000 UTC m=+1442.085826810" lastFinishedPulling="2026-01-31 09:25:02.750944235 +0000 UTC m=+1447.244306677" observedRunningTime="2026-01-31 09:25:05.111410693 +0000 UTC m=+1449.604773145" watchObservedRunningTime="2026-01-31 09:25:05.132325378 +0000 UTC m=+1449.625687810" Jan 31 09:25:05 crc kubenswrapper[4830]: I0131 09:25:05.172146 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-795c4b4b5d-76dwx"] Jan 31 09:25:05 crc kubenswrapper[4830]: I0131 09:25:05.177892 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx" podStartSLOduration=5.278140802 podStartE2EDuration="11.177842892s" podCreationTimestamp="2026-01-31 09:24:54 +0000 UTC" firstStartedPulling="2026-01-31 09:24:56.817673141 +0000 UTC m=+1441.311035583" lastFinishedPulling="2026-01-31 09:25:02.717375231 +0000 UTC m=+1447.210737673" observedRunningTime="2026-01-31 09:25:05.146589043 +0000 UTC m=+1449.639951485" watchObservedRunningTime="2026-01-31 09:25:05.177842892 +0000 UTC m=+1449.671205324" Jan 31 09:25:05 crc kubenswrapper[4830]: I0131 09:25:05.200884 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-9d5f95fb7-h7vp9"] Jan 31 09:25:05 crc kubenswrapper[4830]: I0131 09:25:05.203396 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-9d5f95fb7-h7vp9" podStartSLOduration=6.186576378 podStartE2EDuration="12.203358927s" podCreationTimestamp="2026-01-31 09:24:53 +0000 UTC" firstStartedPulling="2026-01-31 09:24:56.70229 +0000 UTC m=+1441.195652442" lastFinishedPulling="2026-01-31 09:25:02.719072549 +0000 UTC m=+1447.212434991" observedRunningTime="2026-01-31 09:25:05.18377153 +0000 UTC m=+1449.677133972" watchObservedRunningTime="2026-01-31 09:25:05.203358927 +0000 UTC m=+1449.696721369" Jan 31 09:25:05 crc kubenswrapper[4830]: I0131 09:25:05.234933 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld" Jan 31 09:25:05 crc kubenswrapper[4830]: I0131 09:25:05.340178 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-w98lj"] Jan 31 09:25:05 crc kubenswrapper[4830]: I0131 09:25:05.340523 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" podUID="d473ad04-829e-427a-81a8-68d368eb9cfc" containerName="dnsmasq-dns" containerID="cri-o://a481c487e1517bdaec4fe7f910b511c110b08e6441926a14617274c5841089c2" gracePeriod=10 Jan 31 09:25:06 crc kubenswrapper[4830]: I0131 09:25:05.999691 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" podUID="d473ad04-829e-427a-81a8-68d368eb9cfc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.192:5353: connect: connection refused" Jan 31 09:25:06 crc kubenswrapper[4830]: I0131 09:25:06.114874 4830 generic.go:334] "Generic (PLEG): container finished" podID="d473ad04-829e-427a-81a8-68d368eb9cfc" containerID="a481c487e1517bdaec4fe7f910b511c110b08e6441926a14617274c5841089c2" exitCode=0 Jan 31 09:25:06 crc kubenswrapper[4830]: I0131 09:25:06.115391 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" event={"ID":"d473ad04-829e-427a-81a8-68d368eb9cfc","Type":"ContainerDied","Data":"a481c487e1517bdaec4fe7f910b511c110b08e6441926a14617274c5841089c2"} Jan 31 09:25:06 crc kubenswrapper[4830]: I0131 09:25:06.150183 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6cd8b566d4-4q75x" Jan 31 09:25:06 crc kubenswrapper[4830]: I0131 09:25:06.157854 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6cd8b566d4-4q75x" podUID="74254e68-cbf8-446e-a2d8-768185ec778f" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 09:25:06 crc kubenswrapper[4830]: I0131 09:25:06.158759 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-6cd8b566d4-4q75x" podUID="74254e68-cbf8-446e-a2d8-768185ec778f" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 09:25:06 crc kubenswrapper[4830]: I0131 09:25:06.159064 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-6cd8b566d4-4q75x" podUID="74254e68-cbf8-446e-a2d8-768185ec778f" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 09:25:06 crc kubenswrapper[4830]: I0131 09:25:06.172047 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6cd8b566d4-4q75x" podUID="74254e68-cbf8-446e-a2d8-768185ec778f" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 09:25:06 crc kubenswrapper[4830]: I0131 09:25:06.622244 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" Jan 31 09:25:06 crc kubenswrapper[4830]: I0131 09:25:06.716787 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-ovsdbserver-nb\") pod \"d473ad04-829e-427a-81a8-68d368eb9cfc\" (UID: \"d473ad04-829e-427a-81a8-68d368eb9cfc\") " Jan 31 09:25:06 crc kubenswrapper[4830]: I0131 09:25:06.716970 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-ovsdbserver-sb\") pod \"d473ad04-829e-427a-81a8-68d368eb9cfc\" (UID: \"d473ad04-829e-427a-81a8-68d368eb9cfc\") " Jan 31 09:25:06 crc kubenswrapper[4830]: I0131 09:25:06.717071 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-dns-svc\") pod \"d473ad04-829e-427a-81a8-68d368eb9cfc\" (UID: \"d473ad04-829e-427a-81a8-68d368eb9cfc\") " Jan 31 09:25:06 crc kubenswrapper[4830]: I0131 09:25:06.717206 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-dns-swift-storage-0\") pod \"d473ad04-829e-427a-81a8-68d368eb9cfc\" (UID: \"d473ad04-829e-427a-81a8-68d368eb9cfc\") " Jan 31 09:25:06 crc kubenswrapper[4830]: I0131 09:25:06.717226 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-config\") pod \"d473ad04-829e-427a-81a8-68d368eb9cfc\" (UID: \"d473ad04-829e-427a-81a8-68d368eb9cfc\") " Jan 31 09:25:06 crc kubenswrapper[4830]: I0131 09:25:06.717383 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjzmj\" (UniqueName: \"kubernetes.io/projected/d473ad04-829e-427a-81a8-68d368eb9cfc-kube-api-access-mjzmj\") pod \"d473ad04-829e-427a-81a8-68d368eb9cfc\" (UID: \"d473ad04-829e-427a-81a8-68d368eb9cfc\") " Jan 31 09:25:06 crc kubenswrapper[4830]: I0131 09:25:06.750038 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d473ad04-829e-427a-81a8-68d368eb9cfc-kube-api-access-mjzmj" (OuterVolumeSpecName: "kube-api-access-mjzmj") pod "d473ad04-829e-427a-81a8-68d368eb9cfc" (UID: "d473ad04-829e-427a-81a8-68d368eb9cfc"). InnerVolumeSpecName "kube-api-access-mjzmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:25:06 crc kubenswrapper[4830]: I0131 09:25:06.820966 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjzmj\" (UniqueName: \"kubernetes.io/projected/d473ad04-829e-427a-81a8-68d368eb9cfc-kube-api-access-mjzmj\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:06 crc kubenswrapper[4830]: I0131 09:25:06.892186 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d473ad04-829e-427a-81a8-68d368eb9cfc" (UID: "d473ad04-829e-427a-81a8-68d368eb9cfc"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:25:06 crc kubenswrapper[4830]: I0131 09:25:06.927827 4830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:06 crc kubenswrapper[4830]: I0131 09:25:06.944712 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d473ad04-829e-427a-81a8-68d368eb9cfc" (UID: "d473ad04-829e-427a-81a8-68d368eb9cfc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:25:07 crc kubenswrapper[4830]: I0131 09:25:07.025374 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-config" (OuterVolumeSpecName: "config") pod "d473ad04-829e-427a-81a8-68d368eb9cfc" (UID: "d473ad04-829e-427a-81a8-68d368eb9cfc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:25:07 crc kubenswrapper[4830]: I0131 09:25:07.030994 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:07 crc kubenswrapper[4830]: I0131 09:25:07.031036 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:07 crc kubenswrapper[4830]: I0131 09:25:07.034270 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d473ad04-829e-427a-81a8-68d368eb9cfc" (UID: "d473ad04-829e-427a-81a8-68d368eb9cfc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:25:07 crc kubenswrapper[4830]: I0131 09:25:07.048714 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d473ad04-829e-427a-81a8-68d368eb9cfc" (UID: "d473ad04-829e-427a-81a8-68d368eb9cfc"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:25:07 crc kubenswrapper[4830]: I0131 09:25:07.139279 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:07 crc kubenswrapper[4830]: I0131 09:25:07.139318 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d473ad04-829e-427a-81a8-68d368eb9cfc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:07 crc kubenswrapper[4830]: I0131 09:25:07.191975 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-25d9r" event={"ID":"d136e2d6-6468-43c5-942f-71b672962cae","Type":"ContainerStarted","Data":"8db670658f44cd881cacd40a3d0a06b2519c2ec6152e7a77e36f1a0489d400d3"} Jan 31 09:25:07 crc kubenswrapper[4830]: I0131 09:25:07.211104 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" event={"ID":"d473ad04-829e-427a-81a8-68d368eb9cfc","Type":"ContainerDied","Data":"07d3878cff7371eec5fc1af43e88f699f6df355a200e7c3966fdbb978e8c3520"} Jan 31 09:25:07 crc kubenswrapper[4830]: I0131 09:25:07.217094 4830 scope.go:117] "RemoveContainer" containerID="a481c487e1517bdaec4fe7f910b511c110b08e6441926a14617274c5841089c2" Jan 31 09:25:07 crc kubenswrapper[4830]: I0131 09:25:07.212003 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-w98lj" Jan 31 09:25:07 crc kubenswrapper[4830]: I0131 09:25:07.212319 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx" podUID="95712f82-07ef-4b0f-b1c8-af74932c2c4c" containerName="barbican-keystone-listener-log" containerID="cri-o://e7d6f105a83f9fe19fde582e262b440375a23956562ae712d887136a0e0fdf65" gracePeriod=30 Jan 31 09:25:07 crc kubenswrapper[4830]: I0131 09:25:07.216136 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-9d5f95fb7-h7vp9" podUID="e1fe9f02-72ff-45af-8728-91cecff0d1ac" containerName="barbican-worker-log" containerID="cri-o://389d4879fd2258b14fc830972976cb268d3d5cf196b20e4b6201d5082852e672" gracePeriod=30 Jan 31 09:25:07 crc kubenswrapper[4830]: I0131 09:25:07.216161 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-9d5f95fb7-h7vp9" podUID="e1fe9f02-72ff-45af-8728-91cecff0d1ac" containerName="barbican-worker" containerID="cri-o://812c2aae3528acf16bf86636dffabbac6b879f0d6db1779b988c28680f5f9e21" gracePeriod=30 Jan 31 09:25:07 crc kubenswrapper[4830]: I0131 09:25:07.215689 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx" podUID="95712f82-07ef-4b0f-b1c8-af74932c2c4c" containerName="barbican-keystone-listener" containerID="cri-o://4d944a21c183acfd35cdc755af8107e0e14d955b90c01cde24d1da3b845122be" gracePeriod=30 Jan 31 09:25:07 crc kubenswrapper[4830]: I0131 09:25:07.282481 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-25d9r" podStartSLOduration=7.248876071 podStartE2EDuration="15.282454756s" podCreationTimestamp="2026-01-31 09:24:52 +0000 UTC" firstStartedPulling="2026-01-31 09:24:57.632538388 +0000 UTC m=+1442.125900830" lastFinishedPulling="2026-01-31 09:25:05.666117083 +0000 UTC 
m=+1450.159479515" observedRunningTime="2026-01-31 09:25:07.262602611 +0000 UTC m=+1451.755965063" watchObservedRunningTime="2026-01-31 09:25:07.282454756 +0000 UTC m=+1451.775817198" Jan 31 09:25:07 crc kubenswrapper[4830]: I0131 09:25:07.287272 4830 scope.go:117] "RemoveContainer" containerID="7b1ca9a60b825f69e176739217c4eff0340e3928f32495c4796519075ea2277f" Jan 31 09:25:07 crc kubenswrapper[4830]: I0131 09:25:07.367931 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-w98lj"] Jan 31 09:25:07 crc kubenswrapper[4830]: I0131 09:25:07.416883 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-w98lj"] Jan 31 09:25:08 crc kubenswrapper[4830]: I0131 09:25:08.227335 4830 generic.go:334] "Generic (PLEG): container finished" podID="95712f82-07ef-4b0f-b1c8-af74932c2c4c" containerID="e7d6f105a83f9fe19fde582e262b440375a23956562ae712d887136a0e0fdf65" exitCode=143 Jan 31 09:25:08 crc kubenswrapper[4830]: I0131 09:25:08.227419 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx" event={"ID":"95712f82-07ef-4b0f-b1c8-af74932c2c4c","Type":"ContainerDied","Data":"e7d6f105a83f9fe19fde582e262b440375a23956562ae712d887136a0e0fdf65"} Jan 31 09:25:08 crc kubenswrapper[4830]: I0131 09:25:08.234691 4830 generic.go:334] "Generic (PLEG): container finished" podID="e1fe9f02-72ff-45af-8728-91cecff0d1ac" containerID="389d4879fd2258b14fc830972976cb268d3d5cf196b20e4b6201d5082852e672" exitCode=143 Jan 31 09:25:08 crc kubenswrapper[4830]: I0131 09:25:08.234794 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9d5f95fb7-h7vp9" event={"ID":"e1fe9f02-72ff-45af-8728-91cecff0d1ac","Type":"ContainerDied","Data":"389d4879fd2258b14fc830972976cb268d3d5cf196b20e4b6201d5082852e672"} Jan 31 09:25:08 crc kubenswrapper[4830]: I0131 09:25:08.287434 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d473ad04-829e-427a-81a8-68d368eb9cfc" path="/var/lib/kubelet/pods/d473ad04-829e-427a-81a8-68d368eb9cfc/volumes" Jan 31 09:25:09 crc kubenswrapper[4830]: I0131 09:25:09.032192 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-59d6cd4869-w2rrr" podUID="9404af59-7e12-483b-90d0-9ebdc4140cc2" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 09:25:09 crc kubenswrapper[4830]: I0131 09:25:09.084388 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-59d6cd4869-w2rrr" podUID="9404af59-7e12-483b-90d0-9ebdc4140cc2" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 09:25:09 crc kubenswrapper[4830]: I0131 09:25:09.158363 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-59d6cd4869-w2rrr" podUID="9404af59-7e12-483b-90d0-9ebdc4140cc2" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 09:25:09 crc kubenswrapper[4830]: I0131 09:25:09.493144 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7bdc9b7794-hvbg6" podUID="050c36c9-2a82-4d10-a00f-c252a73374ba" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.203:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 09:25:09 crc kubenswrapper[4830]: I0131 09:25:09.534085 4830 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack/barbican-api-7bdc9b7794-hvbg6" podUID="050c36c9-2a82-4d10-a00f-c252a73374ba" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.203:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 09:25:09 crc kubenswrapper[4830]: I0131 09:25:09.884291 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7bdc9b7794-hvbg6" Jan 31 09:25:10 crc kubenswrapper[4830]: I0131 09:25:10.167718 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7bdc9b7794-hvbg6" Jan 31 09:25:12 crc kubenswrapper[4830]: I0131 09:25:12.388757 4830 generic.go:334] "Generic (PLEG): container finished" podID="6324b6ba-4288-44f4-bf87-1a4356c1a9f0" containerID="68201a955abf6cbbb906e8ea8b01b1ed9f44aea832f898360508559e3d2781fe" exitCode=0 Jan 31 09:25:12 crc kubenswrapper[4830]: I0131 09:25:12.388836 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-hh79w" event={"ID":"6324b6ba-4288-44f4-bf87-1a4356c1a9f0","Type":"ContainerDied","Data":"68201a955abf6cbbb906e8ea8b01b1ed9f44aea832f898360508559e3d2781fe"} Jan 31 09:25:13 crc kubenswrapper[4830]: I0131 09:25:13.158304 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:25:13 crc kubenswrapper[4830]: I0131 09:25:13.344516 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-25d9r" Jan 31 09:25:13 crc kubenswrapper[4830]: I0131 09:25:13.344680 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-25d9r" Jan 31 09:25:14 crc kubenswrapper[4830]: I0131 09:25:14.353064 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:25:14 crc kubenswrapper[4830]: I0131 09:25:14.353614 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:25:14 crc kubenswrapper[4830]: I0131 09:25:14.426913 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-25d9r" podUID="d136e2d6-6468-43c5-942f-71b672962cae" containerName="registry-server" probeResult="failure" output=< Jan 31 09:25:14 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 09:25:14 crc kubenswrapper[4830]: > Jan 31 09:25:14 crc kubenswrapper[4830]: I0131 09:25:14.452756 4830 generic.go:334] "Generic (PLEG): container finished" podID="0617092f-40a9-4d3d-b472-f284a2b24000" containerID="9bec4c50fcc4d62de1378f26906fe163c84640b76bf85c39066f6aa600cbcc69" exitCode=0 Jan 31 09:25:14 crc kubenswrapper[4830]: I0131 09:25:14.452833 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-w6kxz" event={"ID":"0617092f-40a9-4d3d-b472-f284a2b24000","Type":"ContainerDied","Data":"9bec4c50fcc4d62de1378f26906fe163c84640b76bf85c39066f6aa600cbcc69"} Jan 31 09:25:14 crc kubenswrapper[4830]: I0131 09:25:14.694014 4830 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-bd45896b-5lsfl" podUID="68bf6013-de5a-401f-868a-79325ed5ab24" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.204:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 09:25:14 crc kubenswrapper[4830]: I0131 09:25:14.964756 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.001745 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.534523 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-99b5d6b8d-v6s9l"] Jan 31 09:25:16 crc kubenswrapper[4830]: E0131 09:25:16.536322 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d473ad04-829e-427a-81a8-68d368eb9cfc" containerName="init" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.536356 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d473ad04-829e-427a-81a8-68d368eb9cfc" containerName="init" Jan 31 09:25:16 crc kubenswrapper[4830]: E0131 09:25:16.536426 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d473ad04-829e-427a-81a8-68d368eb9cfc" containerName="dnsmasq-dns" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.536436 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d473ad04-829e-427a-81a8-68d368eb9cfc" containerName="dnsmasq-dns" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.541339 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="d473ad04-829e-427a-81a8-68d368eb9cfc" containerName="dnsmasq-dns" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.544657 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-99b5d6b8d-v6s9l" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.605465 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-99b5d6b8d-v6s9l"] Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.620034 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75d4710e-57ca-46dd-921f-3c215c3ee94c-internal-tls-certs\") pod \"placement-99b5d6b8d-v6s9l\" (UID: \"75d4710e-57ca-46dd-921f-3c215c3ee94c\") " pod="openstack/placement-99b5d6b8d-v6s9l" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.620473 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5nft\" (UniqueName: \"kubernetes.io/projected/75d4710e-57ca-46dd-921f-3c215c3ee94c-kube-api-access-j5nft\") pod \"placement-99b5d6b8d-v6s9l\" (UID: \"75d4710e-57ca-46dd-921f-3c215c3ee94c\") " pod="openstack/placement-99b5d6b8d-v6s9l" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.620666 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75d4710e-57ca-46dd-921f-3c215c3ee94c-combined-ca-bundle\") pod \"placement-99b5d6b8d-v6s9l\" (UID: \"75d4710e-57ca-46dd-921f-3c215c3ee94c\") " pod="openstack/placement-99b5d6b8d-v6s9l" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.620920 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75d4710e-57ca-46dd-921f-3c215c3ee94c-logs\") pod \"placement-99b5d6b8d-v6s9l\" (UID: \"75d4710e-57ca-46dd-921f-3c215c3ee94c\") " pod="openstack/placement-99b5d6b8d-v6s9l" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.621036 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75d4710e-57ca-46dd-921f-3c215c3ee94c-scripts\") pod \"placement-99b5d6b8d-v6s9l\" (UID: \"75d4710e-57ca-46dd-921f-3c215c3ee94c\") " pod="openstack/placement-99b5d6b8d-v6s9l" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.624575 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75d4710e-57ca-46dd-921f-3c215c3ee94c-public-tls-certs\") pod \"placement-99b5d6b8d-v6s9l\" (UID: \"75d4710e-57ca-46dd-921f-3c215c3ee94c\") " pod="openstack/placement-99b5d6b8d-v6s9l" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.624765 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75d4710e-57ca-46dd-921f-3c215c3ee94c-config-data\") pod \"placement-99b5d6b8d-v6s9l\" (UID: \"75d4710e-57ca-46dd-921f-3c215c3ee94c\") " pod="openstack/placement-99b5d6b8d-v6s9l" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.733366 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75d4710e-57ca-46dd-921f-3c215c3ee94c-config-data\") pod \"placement-99b5d6b8d-v6s9l\" (UID: \"75d4710e-57ca-46dd-921f-3c215c3ee94c\") " pod="openstack/placement-99b5d6b8d-v6s9l" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.733603 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75d4710e-57ca-46dd-921f-3c215c3ee94c-internal-tls-certs\") pod \"placement-99b5d6b8d-v6s9l\" (UID: \"75d4710e-57ca-46dd-921f-3c215c3ee94c\") " pod="openstack/placement-99b5d6b8d-v6s9l" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.733624 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5nft\" (UniqueName: \"kubernetes.io/projected/75d4710e-57ca-46dd-921f-3c215c3ee94c-kube-api-access-j5nft\") pod \"placement-99b5d6b8d-v6s9l\" (UID: \"75d4710e-57ca-46dd-921f-3c215c3ee94c\") " pod="openstack/placement-99b5d6b8d-v6s9l" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.733645 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75d4710e-57ca-46dd-921f-3c215c3ee94c-combined-ca-bundle\") pod \"placement-99b5d6b8d-v6s9l\" (UID: \"75d4710e-57ca-46dd-921f-3c215c3ee94c\") " pod="openstack/placement-99b5d6b8d-v6s9l" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.733746 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75d4710e-57ca-46dd-921f-3c215c3ee94c-logs\") pod \"placement-99b5d6b8d-v6s9l\" (UID: \"75d4710e-57ca-46dd-921f-3c215c3ee94c\") " pod="openstack/placement-99b5d6b8d-v6s9l" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.733780 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75d4710e-57ca-46dd-921f-3c215c3ee94c-scripts\") pod \"placement-99b5d6b8d-v6s9l\" (UID: \"75d4710e-57ca-46dd-921f-3c215c3ee94c\") " pod="openstack/placement-99b5d6b8d-v6s9l" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.733819 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75d4710e-57ca-46dd-921f-3c215c3ee94c-public-tls-certs\") pod \"placement-99b5d6b8d-v6s9l\" (UID: \"75d4710e-57ca-46dd-921f-3c215c3ee94c\") " pod="openstack/placement-99b5d6b8d-v6s9l" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.742285 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75d4710e-57ca-46dd-921f-3c215c3ee94c-config-data\") pod \"placement-99b5d6b8d-v6s9l\" (UID: \"75d4710e-57ca-46dd-921f-3c215c3ee94c\") " pod="openstack/placement-99b5d6b8d-v6s9l" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.742529 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75d4710e-57ca-46dd-921f-3c215c3ee94c-scripts\") pod \"placement-99b5d6b8d-v6s9l\" (UID: \"75d4710e-57ca-46dd-921f-3c215c3ee94c\") " pod="openstack/placement-99b5d6b8d-v6s9l" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.743428 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75d4710e-57ca-46dd-921f-3c215c3ee94c-logs\") pod \"placement-99b5d6b8d-v6s9l\" (UID: \"75d4710e-57ca-46dd-921f-3c215c3ee94c\") " pod="openstack/placement-99b5d6b8d-v6s9l" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.748596 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75d4710e-57ca-46dd-921f-3c215c3ee94c-internal-tls-certs\") pod \"placement-99b5d6b8d-v6s9l\" (UID: 
\"75d4710e-57ca-46dd-921f-3c215c3ee94c\") " pod="openstack/placement-99b5d6b8d-v6s9l" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.749126 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75d4710e-57ca-46dd-921f-3c215c3ee94c-public-tls-certs\") pod \"placement-99b5d6b8d-v6s9l\" (UID: \"75d4710e-57ca-46dd-921f-3c215c3ee94c\") " pod="openstack/placement-99b5d6b8d-v6s9l" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.749594 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75d4710e-57ca-46dd-921f-3c215c3ee94c-combined-ca-bundle\") pod \"placement-99b5d6b8d-v6s9l\" (UID: \"75d4710e-57ca-46dd-921f-3c215c3ee94c\") " pod="openstack/placement-99b5d6b8d-v6s9l" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.766264 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5nft\" (UniqueName: \"kubernetes.io/projected/75d4710e-57ca-46dd-921f-3c215c3ee94c-kube-api-access-j5nft\") pod \"placement-99b5d6b8d-v6s9l\" (UID: \"75d4710e-57ca-46dd-921f-3c215c3ee94c\") " pod="openstack/placement-99b5d6b8d-v6s9l" Jan 31 09:25:16 crc kubenswrapper[4830]: I0131 09:25:16.917566 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-99b5d6b8d-v6s9l" Jan 31 09:25:17 crc kubenswrapper[4830]: I0131 09:25:17.517803 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-bd45896b-5lsfl" Jan 31 09:25:17 crc kubenswrapper[4830]: I0131 09:25:17.633898 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7bdc9b7794-hvbg6"] Jan 31 09:25:17 crc kubenswrapper[4830]: I0131 09:25:17.634695 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7bdc9b7794-hvbg6" podUID="050c36c9-2a82-4d10-a00f-c252a73374ba" containerName="barbican-api-log" containerID="cri-o://c85b47e5ae3e82c083265949830b049a0d31fa8eab66ece86afe051157a570da" gracePeriod=30 Jan 31 09:25:17 crc kubenswrapper[4830]: I0131 09:25:17.634944 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7bdc9b7794-hvbg6" podUID="050c36c9-2a82-4d10-a00f-c252a73374ba" containerName="barbican-api" containerID="cri-o://4e2cdd360766961e110664f97744c088c85575b7622712bc721ce9aa80105b32" gracePeriod=30 Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.424358 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-w6kxz" Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.434086 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-hh79w" Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.490821 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0617092f-40a9-4d3d-b472-f284a2b24000-db-sync-config-data\") pod \"0617092f-40a9-4d3d-b472-f284a2b24000\" (UID: \"0617092f-40a9-4d3d-b472-f284a2b24000\") " Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.490964 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5w5b\" (UniqueName: \"kubernetes.io/projected/0617092f-40a9-4d3d-b472-f284a2b24000-kube-api-access-q5w5b\") pod \"0617092f-40a9-4d3d-b472-f284a2b24000\" (UID: \"0617092f-40a9-4d3d-b472-f284a2b24000\") " Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.491022 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0617092f-40a9-4d3d-b472-f284a2b24000-etc-machine-id\") pod \"0617092f-40a9-4d3d-b472-f284a2b24000\" (UID: \"0617092f-40a9-4d3d-b472-f284a2b24000\") " Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.491048 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0617092f-40a9-4d3d-b472-f284a2b24000-combined-ca-bundle\") pod \"0617092f-40a9-4d3d-b472-f284a2b24000\" (UID: \"0617092f-40a9-4d3d-b472-f284a2b24000\") " Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.491114 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6324b6ba-4288-44f4-bf87-1a4356c1a9f0-config-data\") pod \"6324b6ba-4288-44f4-bf87-1a4356c1a9f0\" (UID: \"6324b6ba-4288-44f4-bf87-1a4356c1a9f0\") " Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.491153 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6324b6ba-4288-44f4-bf87-1a4356c1a9f0-combined-ca-bundle\") pod \"6324b6ba-4288-44f4-bf87-1a4356c1a9f0\" (UID: \"6324b6ba-4288-44f4-bf87-1a4356c1a9f0\") " Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.491278 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh7dj\" (UniqueName: \"kubernetes.io/projected/6324b6ba-4288-44f4-bf87-1a4356c1a9f0-kube-api-access-bh7dj\") pod \"6324b6ba-4288-44f4-bf87-1a4356c1a9f0\" (UID: \"6324b6ba-4288-44f4-bf87-1a4356c1a9f0\") " Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.491307 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0617092f-40a9-4d3d-b472-f284a2b24000-scripts\") pod \"0617092f-40a9-4d3d-b472-f284a2b24000\" (UID: \"0617092f-40a9-4d3d-b472-f284a2b24000\") " Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.491338 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0617092f-40a9-4d3d-b472-f284a2b24000-config-data\") pod \"0617092f-40a9-4d3d-b472-f284a2b24000\" (UID: \"0617092f-40a9-4d3d-b472-f284a2b24000\") " Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.491690 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0617092f-40a9-4d3d-b472-f284a2b24000-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod 
"0617092f-40a9-4d3d-b472-f284a2b24000" (UID: "0617092f-40a9-4d3d-b472-f284a2b24000"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.492210 4830 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0617092f-40a9-4d3d-b472-f284a2b24000-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.499571 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0617092f-40a9-4d3d-b472-f284a2b24000-scripts" (OuterVolumeSpecName: "scripts") pod "0617092f-40a9-4d3d-b472-f284a2b24000" (UID: "0617092f-40a9-4d3d-b472-f284a2b24000"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.507325 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0617092f-40a9-4d3d-b472-f284a2b24000-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0617092f-40a9-4d3d-b472-f284a2b24000" (UID: "0617092f-40a9-4d3d-b472-f284a2b24000"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.507597 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6324b6ba-4288-44f4-bf87-1a4356c1a9f0-kube-api-access-bh7dj" (OuterVolumeSpecName: "kube-api-access-bh7dj") pod "6324b6ba-4288-44f4-bf87-1a4356c1a9f0" (UID: "6324b6ba-4288-44f4-bf87-1a4356c1a9f0"). InnerVolumeSpecName "kube-api-access-bh7dj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.510515 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0617092f-40a9-4d3d-b472-f284a2b24000-kube-api-access-q5w5b" (OuterVolumeSpecName: "kube-api-access-q5w5b") pod "0617092f-40a9-4d3d-b472-f284a2b24000" (UID: "0617092f-40a9-4d3d-b472-f284a2b24000"). InnerVolumeSpecName "kube-api-access-q5w5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.569253 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6324b6ba-4288-44f4-bf87-1a4356c1a9f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6324b6ba-4288-44f4-bf87-1a4356c1a9f0" (UID: "6324b6ba-4288-44f4-bf87-1a4356c1a9f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.597320 4830 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0617092f-40a9-4d3d-b472-f284a2b24000-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.598176 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5w5b\" (UniqueName: \"kubernetes.io/projected/0617092f-40a9-4d3d-b472-f284a2b24000-kube-api-access-q5w5b\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.598460 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6324b6ba-4288-44f4-bf87-1a4356c1a9f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.598590 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh7dj\" (UniqueName: \"kubernetes.io/projected/6324b6ba-4288-44f4-bf87-1a4356c1a9f0-kube-api-access-bh7dj\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.598681 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0617092f-40a9-4d3d-b472-f284a2b24000-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.613813 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6324b6ba-4288-44f4-bf87-1a4356c1a9f0-config-data" (OuterVolumeSpecName: "config-data") pod "6324b6ba-4288-44f4-bf87-1a4356c1a9f0" (UID: "6324b6ba-4288-44f4-bf87-1a4356c1a9f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.614266 4830 generic.go:334] "Generic (PLEG): container finished" podID="050c36c9-2a82-4d10-a00f-c252a73374ba" containerID="c85b47e5ae3e82c083265949830b049a0d31fa8eab66ece86afe051157a570da" exitCode=143 Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.614439 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bdc9b7794-hvbg6" event={"ID":"050c36c9-2a82-4d10-a00f-c252a73374ba","Type":"ContainerDied","Data":"c85b47e5ae3e82c083265949830b049a0d31fa8eab66ece86afe051157a570da"} Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.620958 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-hh79w" event={"ID":"6324b6ba-4288-44f4-bf87-1a4356c1a9f0","Type":"ContainerDied","Data":"ab181e61759699af586fd212a367c4ff5c4ea24b4ce13c2c9846f71f1f0cae7b"} Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.621156 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab181e61759699af586fd212a367c4ff5c4ea24b4ce13c2c9846f71f1f0cae7b" Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.621208 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-hh79w" Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.627075 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-w6kxz" event={"ID":"0617092f-40a9-4d3d-b472-f284a2b24000","Type":"ContainerDied","Data":"5c6e2dd7d3e5fcbec9754bdd3b47a9158084fa1e19f804470ed0b49c0684bd9e"} Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.627126 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c6e2dd7d3e5fcbec9754bdd3b47a9158084fa1e19f804470ed0b49c0684bd9e" Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.627217 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-w6kxz" Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.634624 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0617092f-40a9-4d3d-b472-f284a2b24000-config-data" (OuterVolumeSpecName: "config-data") pod "0617092f-40a9-4d3d-b472-f284a2b24000" (UID: "0617092f-40a9-4d3d-b472-f284a2b24000"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.642872 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0617092f-40a9-4d3d-b472-f284a2b24000-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0617092f-40a9-4d3d-b472-f284a2b24000" (UID: "0617092f-40a9-4d3d-b472-f284a2b24000"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.701188 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0617092f-40a9-4d3d-b472-f284a2b24000-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.701226 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6324b6ba-4288-44f4-bf87-1a4356c1a9f0-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:18 crc kubenswrapper[4830]: I0131 09:25:18.701235 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0617092f-40a9-4d3d-b472-f284a2b24000-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.003809 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-tqtdt"] Jan 31 09:25:20 crc kubenswrapper[4830]: E0131 09:25:20.005955 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6324b6ba-4288-44f4-bf87-1a4356c1a9f0" containerName="heat-db-sync" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.006065 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6324b6ba-4288-44f4-bf87-1a4356c1a9f0" containerName="heat-db-sync" Jan 31 09:25:20 crc kubenswrapper[4830]: E0131 09:25:20.006157 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0617092f-40a9-4d3d-b472-f284a2b24000" containerName="cinder-db-sync" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.006216 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0617092f-40a9-4d3d-b472-f284a2b24000" containerName="cinder-db-sync" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.006532 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="0617092f-40a9-4d3d-b472-f284a2b24000" 
containerName="cinder-db-sync" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.006633 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="6324b6ba-4288-44f4-bf87-1a4356c1a9f0" containerName="heat-db-sync" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.008260 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.023717 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-tqtdt"] Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.112676 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-tqtdt\" (UID: \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\") " pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.112769 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-dns-svc\") pod \"dnsmasq-dns-5784cf869f-tqtdt\" (UID: \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\") " pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.112840 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbxzp\" (UniqueName: \"kubernetes.io/projected/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-kube-api-access-kbxzp\") pod \"dnsmasq-dns-5784cf869f-tqtdt\" (UID: \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\") " pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.112881 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-tqtdt\" (UID: \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\") " pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.112908 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-tqtdt\" (UID: \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\") " pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.112994 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-config\") pod \"dnsmasq-dns-5784cf869f-tqtdt\" (UID: \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\") " pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.194882 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.202885 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.220308 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.220862 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.221080 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.222070 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-lrffb" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.223779 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-config\") pod \"dnsmasq-dns-5784cf869f-tqtdt\" (UID: \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\") " pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.224023 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-tqtdt\" (UID: \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\") " pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.224085 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-dns-svc\") pod \"dnsmasq-dns-5784cf869f-tqtdt\" (UID: \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\") " pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.224211 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbxzp\" (UniqueName: \"kubernetes.io/projected/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-kube-api-access-kbxzp\") pod \"dnsmasq-dns-5784cf869f-tqtdt\" (UID: \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\") " pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.224270 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-tqtdt\" (UID: \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\") " pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.224296 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-tqtdt\" (UID: \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\") " pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.225559 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-tqtdt\" (UID: \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\") " pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.227853 4830 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-dns-svc\") pod \"dnsmasq-dns-5784cf869f-tqtdt\" (UID: \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\") " pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.228468 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-tqtdt\" (UID: \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\") " pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.229265 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-tqtdt\" (UID: \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\") " pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.242331 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-config\") pod \"dnsmasq-dns-5784cf869f-tqtdt\" (UID: \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\") " pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.248586 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.282397 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbxzp\" (UniqueName: \"kubernetes.io/projected/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-kube-api-access-kbxzp\") pod \"dnsmasq-dns-5784cf869f-tqtdt\" (UID: \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\") " pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.330679 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86af5a4c-fe49-4f01-a832-71260d0ad1e4-scripts\") pod \"cinder-scheduler-0\" (UID: \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.330769 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wvbw\" (UniqueName: \"kubernetes.io/projected/86af5a4c-fe49-4f01-a832-71260d0ad1e4-kube-api-access-5wvbw\") pod \"cinder-scheduler-0\" (UID: \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.330821 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86af5a4c-fe49-4f01-a832-71260d0ad1e4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.330930 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86af5a4c-fe49-4f01-a832-71260d0ad1e4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.331014 
4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86af5a4c-fe49-4f01-a832-71260d0ad1e4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.331071 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86af5a4c-fe49-4f01-a832-71260d0ad1e4-config-data\") pod \"cinder-scheduler-0\" (UID: \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.370520 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.433506 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86af5a4c-fe49-4f01-a832-71260d0ad1e4-config-data\") pod \"cinder-scheduler-0\" (UID: \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.433849 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86af5a4c-fe49-4f01-a832-71260d0ad1e4-scripts\") pod \"cinder-scheduler-0\" (UID: \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.433890 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wvbw\" (UniqueName: \"kubernetes.io/projected/86af5a4c-fe49-4f01-a832-71260d0ad1e4-kube-api-access-5wvbw\") pod \"cinder-scheduler-0\" (UID: \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.433933 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86af5a4c-fe49-4f01-a832-71260d0ad1e4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.434069 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86af5a4c-fe49-4f01-a832-71260d0ad1e4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.434193 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86af5a4c-fe49-4f01-a832-71260d0ad1e4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.435659 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86af5a4c-fe49-4f01-a832-71260d0ad1e4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.440261 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86af5a4c-fe49-4f01-a832-71260d0ad1e4-config-data\") pod \"cinder-scheduler-0\" (UID: \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.440680 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86af5a4c-fe49-4f01-a832-71260d0ad1e4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.440963 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86af5a4c-fe49-4f01-a832-71260d0ad1e4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.451404 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86af5a4c-fe49-4f01-a832-71260d0ad1e4-scripts\") pod \"cinder-scheduler-0\" (UID: \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.467455 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wvbw\" (UniqueName: \"kubernetes.io/projected/86af5a4c-fe49-4f01-a832-71260d0ad1e4-kube-api-access-5wvbw\") pod \"cinder-scheduler-0\" (UID: \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.595961 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.610590 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.613296 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.619956 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.631301 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.742751 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") " pod="openstack/cinder-api-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.743211 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-config-data\") pod \"cinder-api-0\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") " pod="openstack/cinder-api-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.743444 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgbw4\" (UniqueName: \"kubernetes.io/projected/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-kube-api-access-cgbw4\") pod \"cinder-api-0\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") " pod="openstack/cinder-api-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.743670 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-logs\") pod \"cinder-api-0\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") " pod="openstack/cinder-api-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.743857 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-config-data-custom\") pod \"cinder-api-0\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") " pod="openstack/cinder-api-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.744171 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") " pod="openstack/cinder-api-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.744336 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-scripts\") pod \"cinder-api-0\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") " pod="openstack/cinder-api-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.846980 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgbw4\" (UniqueName: \"kubernetes.io/projected/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-kube-api-access-cgbw4\") pod \"cinder-api-0\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") " pod="openstack/cinder-api-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.847108 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-logs\") pod \"cinder-api-0\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") " pod="openstack/cinder-api-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.847176 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-config-data-custom\") pod \"cinder-api-0\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") " pod="openstack/cinder-api-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.847229 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") " pod="openstack/cinder-api-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.847278 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-scripts\") pod \"cinder-api-0\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") " pod="openstack/cinder-api-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.847317 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") " pod="openstack/cinder-api-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.847338 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-config-data\") pod \"cinder-api-0\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") " pod="openstack/cinder-api-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.847445 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") " pod="openstack/cinder-api-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.847686 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-logs\") pod \"cinder-api-0\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") " pod="openstack/cinder-api-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.852449 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-scripts\") pod \"cinder-api-0\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") " pod="openstack/cinder-api-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.856786 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-config-data-custom\") pod \"cinder-api-0\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") " pod="openstack/cinder-api-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.873193 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") " pod="openstack/cinder-api-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.875847 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-config-data\") pod \"cinder-api-0\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") " pod="openstack/cinder-api-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.883292 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgbw4\" (UniqueName: \"kubernetes.io/projected/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-kube-api-access-cgbw4\") pod \"cinder-api-0\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") " pod="openstack/cinder-api-0" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.896332 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bdc9b7794-hvbg6" podUID="050c36c9-2a82-4d10-a00f-c252a73374ba" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.203:9311/healthcheck\": read tcp 10.217.0.2:38860->10.217.0.203:9311: read: connection reset by peer" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.896508 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bdc9b7794-hvbg6" podUID="050c36c9-2a82-4d10-a00f-c252a73374ba" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.203:9311/healthcheck\": read tcp 10.217.0.2:38854->10.217.0.203:9311: read: connection reset by peer" Jan 31 09:25:20 crc kubenswrapper[4830]: I0131 09:25:20.957892 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 31 09:25:21 crc kubenswrapper[4830]: I0131 09:25:21.751482 4830 generic.go:334] "Generic (PLEG): container finished" podID="050c36c9-2a82-4d10-a00f-c252a73374ba" containerID="4e2cdd360766961e110664f97744c088c85575b7622712bc721ce9aa80105b32" exitCode=0 Jan 31 09:25:21 crc kubenswrapper[4830]: I0131 09:25:21.751577 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bdc9b7794-hvbg6" event={"ID":"050c36c9-2a82-4d10-a00f-c252a73374ba","Type":"ContainerDied","Data":"4e2cdd360766961e110664f97744c088c85575b7622712bc721ce9aa80105b32"} Jan 31 09:25:22 crc kubenswrapper[4830]: E0131 09:25:22.441488 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 31 09:25:22 crc kubenswrapper[4830]: E0131 09:25:22.442128 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwkxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(39688f84-c227-4658-aee1-ce5e5d450ca1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 09:25:22 crc kubenswrapper[4830]: E0131 09:25:22.443615 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="39688f84-c227-4658-aee1-ce5e5d450ca1" Jan 31 09:25:22 crc kubenswrapper[4830]: I0131 09:25:22.767988 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="39688f84-c227-4658-aee1-ce5e5d450ca1" containerName="ceilometer-notification-agent" containerID="cri-o://6c78395d815c0f304dabbb72d124784561343be071e34588d43374ea0a8c7ab6" gracePeriod=30 Jan 31 09:25:22 crc kubenswrapper[4830]: I0131 09:25:22.768187 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="39688f84-c227-4658-aee1-ce5e5d450ca1" containerName="sg-core" containerID="cri-o://ada6cefce159c5c6e84f6c0ce9d82ac301872a4c0b6ad072a5e89202581763bc" gracePeriod=30 Jan 31 09:25:22 crc kubenswrapper[4830]: I0131 09:25:22.940970 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.539124 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-25d9r" Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.541014 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7bdc9b7794-hvbg6" Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.639091 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-25d9r" Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.648085 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4tz6\" (UniqueName: \"kubernetes.io/projected/050c36c9-2a82-4d10-a00f-c252a73374ba-kube-api-access-g4tz6\") pod \"050c36c9-2a82-4d10-a00f-c252a73374ba\" (UID: \"050c36c9-2a82-4d10-a00f-c252a73374ba\") " Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.648467 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/050c36c9-2a82-4d10-a00f-c252a73374ba-logs\") pod \"050c36c9-2a82-4d10-a00f-c252a73374ba\" (UID: \"050c36c9-2a82-4d10-a00f-c252a73374ba\") " Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.648562 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050c36c9-2a82-4d10-a00f-c252a73374ba-config-data\") pod \"050c36c9-2a82-4d10-a00f-c252a73374ba\" (UID: \"050c36c9-2a82-4d10-a00f-c252a73374ba\") " Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.648669 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/050c36c9-2a82-4d10-a00f-c252a73374ba-config-data-custom\") pod \"050c36c9-2a82-4d10-a00f-c252a73374ba\" (UID: \"050c36c9-2a82-4d10-a00f-c252a73374ba\") " Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.648715 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050c36c9-2a82-4d10-a00f-c252a73374ba-combined-ca-bundle\") pod \"050c36c9-2a82-4d10-a00f-c252a73374ba\" (UID: \"050c36c9-2a82-4d10-a00f-c252a73374ba\") " Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.650467 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/050c36c9-2a82-4d10-a00f-c252a73374ba-logs" (OuterVolumeSpecName: "logs") pod "050c36c9-2a82-4d10-a00f-c252a73374ba" (UID: "050c36c9-2a82-4d10-a00f-c252a73374ba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.651761 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/050c36c9-2a82-4d10-a00f-c252a73374ba-logs\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.704054 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/050c36c9-2a82-4d10-a00f-c252a73374ba-kube-api-access-g4tz6" (OuterVolumeSpecName: "kube-api-access-g4tz6") pod "050c36c9-2a82-4d10-a00f-c252a73374ba" (UID: "050c36c9-2a82-4d10-a00f-c252a73374ba"). InnerVolumeSpecName "kube-api-access-g4tz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.772995 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4tz6\" (UniqueName: \"kubernetes.io/projected/050c36c9-2a82-4d10-a00f-c252a73374ba-kube-api-access-g4tz6\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.787014 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050c36c9-2a82-4d10-a00f-c252a73374ba-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "050c36c9-2a82-4d10-a00f-c252a73374ba" (UID: "050c36c9-2a82-4d10-a00f-c252a73374ba"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.835141 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050c36c9-2a82-4d10-a00f-c252a73374ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "050c36c9-2a82-4d10-a00f-c252a73374ba" (UID: "050c36c9-2a82-4d10-a00f-c252a73374ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.869415 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-99b5d6b8d-v6s9l"] Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.875701 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/050c36c9-2a82-4d10-a00f-c252a73374ba-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.875765 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050c36c9-2a82-4d10-a00f-c252a73374ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.908520 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.925466 4830 generic.go:334] "Generic (PLEG): container finished" podID="39688f84-c227-4658-aee1-ce5e5d450ca1" containerID="ada6cefce159c5c6e84f6c0ce9d82ac301872a4c0b6ad072a5e89202581763bc" exitCode=2 Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.925561 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39688f84-c227-4658-aee1-ce5e5d450ca1","Type":"ContainerDied","Data":"ada6cefce159c5c6e84f6c0ce9d82ac301872a4c0b6ad072a5e89202581763bc"} Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.939550 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050c36c9-2a82-4d10-a00f-c252a73374ba-config-data" (OuterVolumeSpecName: "config-data") pod "050c36c9-2a82-4d10-a00f-c252a73374ba" (UID: "050c36c9-2a82-4d10-a00f-c252a73374ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.961613 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-tqtdt"] Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.979074 4830 util.go:48] "No ready sandbox for pod can be found. 
Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.979074 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7bdc9b7794-hvbg6"
Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.979303 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bdc9b7794-hvbg6" event={"ID":"050c36c9-2a82-4d10-a00f-c252a73374ba","Type":"ContainerDied","Data":"6108ecb19ef4c0f5ca909b0b5d87dde289d34732e32ac4e51e3c00c063d98067"}
Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.979371 4830 scope.go:117] "RemoveContainer" containerID="4e2cdd360766961e110664f97744c088c85575b7622712bc721ce9aa80105b32"
Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.979868 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050c36c9-2a82-4d10-a00f-c252a73374ba-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 09:25:23 crc kubenswrapper[4830]: I0131 09:25:23.994770 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 31 09:25:24 crc kubenswrapper[4830]: I0131 09:25:24.057277 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7bdc9b7794-hvbg6"]
Jan 31 09:25:24 crc kubenswrapper[4830]: I0131 09:25:24.079538 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7bdc9b7794-hvbg6"]
Jan 31 09:25:24 crc kubenswrapper[4830]: I0131 09:25:24.084112 4830 scope.go:117] "RemoveContainer" containerID="c85b47e5ae3e82c083265949830b049a0d31fa8eab66ece86afe051157a570da"
Jan 31 09:25:24 crc kubenswrapper[4830]: I0131 09:25:24.196942 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-25d9r"]
Jan 31 09:25:24 crc kubenswrapper[4830]: I0131 09:25:24.282579 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="050c36c9-2a82-4d10-a00f-c252a73374ba" path="/var/lib/kubelet/pods/050c36c9-2a82-4d10-a00f-c252a73374ba/volumes"
Jan 31 09:25:25 crc kubenswrapper[4830]: I0131 09:25:25.015764 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0e29fe6c-4095-4fbc-ad45-6b889620ad7b","Type":"ContainerStarted","Data":"aa442de484b9f5939338d65325c7b6d2b93e34f70dd7b1abce057d39bf36e99b"}
Jan 31 09:25:25 crc kubenswrapper[4830]: I0131 09:25:25.019667 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-99b5d6b8d-v6s9l" event={"ID":"75d4710e-57ca-46dd-921f-3c215c3ee94c","Type":"ContainerStarted","Data":"f91c164e7f221cacc969e7106cdb89bd10433edaff32e4e055d78563505d8370"}
Jan 31 09:25:25 crc kubenswrapper[4830]: I0131 09:25:25.019693 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-99b5d6b8d-v6s9l" event={"ID":"75d4710e-57ca-46dd-921f-3c215c3ee94c","Type":"ContainerStarted","Data":"6484b0835ebfb13e38013ebab2fda8cb9fe303461a71ed2a229710712ccd7ff2"}
Jan 31 09:25:25 crc kubenswrapper[4830]: I0131 09:25:25.019703 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-99b5d6b8d-v6s9l" event={"ID":"75d4710e-57ca-46dd-921f-3c215c3ee94c","Type":"ContainerStarted","Data":"6273120da94dc89d1b650f5454f75bf351fd899a338529cf823c34b2d526e041"}
Jan 31 09:25:25 crc kubenswrapper[4830]: I0131 09:25:25.019857 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-99b5d6b8d-v6s9l"
Jan 31 09:25:25 crc kubenswrapper[4830]: I0131 09:25:25.019916 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-99b5d6b8d-v6s9l"
Jan 31 09:25:25 crc kubenswrapper[4830]: I0131 09:25:25.025070 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86af5a4c-fe49-4f01-a832-71260d0ad1e4","Type":"ContainerStarted","Data":"21ed65e1e63bfb0d88ec3a8b99cc7a6d33f1e3ebff84803a8c1b27ef6d5af67d"}
Jan 31 09:25:25 crc kubenswrapper[4830]: I0131 09:25:25.029454 4830 generic.go:334] "Generic (PLEG): container finished" podID="e51aef7d-4b7d-44da-8d0f-b0e2b86d2842" containerID="d7826742a535cf8bc43a9329d109fa52d874d91f2c42c16b755f433515a7a9c0" exitCode=0
Jan 31 09:25:25 crc kubenswrapper[4830]: I0131 09:25:25.029772 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-25d9r" podUID="d136e2d6-6468-43c5-942f-71b672962cae" containerName="registry-server" containerID="cri-o://8db670658f44cd881cacd40a3d0a06b2519c2ec6152e7a77e36f1a0489d400d3" gracePeriod=2
Jan 31 09:25:25 crc kubenswrapper[4830]: I0131 09:25:25.030655 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" event={"ID":"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842","Type":"ContainerDied","Data":"d7826742a535cf8bc43a9329d109fa52d874d91f2c42c16b755f433515a7a9c0"}
Jan 31 09:25:25 crc kubenswrapper[4830]: I0131 09:25:25.030770 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" event={"ID":"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842","Type":"ContainerStarted","Data":"3a2459c0d0919385bb4e4d427ba3d92d0c3a9d7416f91e67527916d5b90e051f"}
Jan 31 09:25:25 crc kubenswrapper[4830]: I0131 09:25:25.061538 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-99b5d6b8d-v6s9l" podStartSLOduration=9.061510968 podStartE2EDuration="9.061510968s" podCreationTimestamp="2026-01-31 09:25:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:25:25.049885947 +0000 UTC m=+1469.543248389" watchObservedRunningTime="2026-01-31 09:25:25.061510968 +0000 UTC m=+1469.554873410"
Jan 31 09:25:25 crc kubenswrapper[4830]: I0131 09:25:25.918859 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-25d9r"
Jan 31 09:25:25 crc kubenswrapper[4830]: I0131 09:25:25.944064 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d136e2d6-6468-43c5-942f-71b672962cae-catalog-content\") pod \"d136e2d6-6468-43c5-942f-71b672962cae\" (UID: \"d136e2d6-6468-43c5-942f-71b672962cae\") "
Jan 31 09:25:25 crc kubenswrapper[4830]: I0131 09:25:25.944473 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d136e2d6-6468-43c5-942f-71b672962cae-utilities\") pod \"d136e2d6-6468-43c5-942f-71b672962cae\" (UID: \"d136e2d6-6468-43c5-942f-71b672962cae\") "
Jan 31 09:25:25 crc kubenswrapper[4830]: I0131 09:25:25.944542 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwvtw\" (UniqueName: \"kubernetes.io/projected/d136e2d6-6468-43c5-942f-71b672962cae-kube-api-access-qwvtw\") pod \"d136e2d6-6468-43c5-942f-71b672962cae\" (UID: \"d136e2d6-6468-43c5-942f-71b672962cae\") "
Jan 31 09:25:25 crc kubenswrapper[4830]: I0131 09:25:25.946405 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d136e2d6-6468-43c5-942f-71b672962cae-utilities" (OuterVolumeSpecName: "utilities") pod "d136e2d6-6468-43c5-942f-71b672962cae" (UID: "d136e2d6-6468-43c5-942f-71b672962cae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:25:25 crc kubenswrapper[4830]: I0131 09:25:25.969180 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d136e2d6-6468-43c5-942f-71b672962cae-kube-api-access-qwvtw" (OuterVolumeSpecName: "kube-api-access-qwvtw") pod "d136e2d6-6468-43c5-942f-71b672962cae" (UID: "d136e2d6-6468-43c5-942f-71b672962cae"). InnerVolumeSpecName "kube-api-access-qwvtw". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:25:26 crc kubenswrapper[4830]: I0131 09:25:26.048622 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d136e2d6-6468-43c5-942f-71b672962cae-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:26 crc kubenswrapper[4830]: I0131 09:25:26.048662 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d136e2d6-6468-43c5-942f-71b672962cae-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:26 crc kubenswrapper[4830]: I0131 09:25:26.048675 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwvtw\" (UniqueName: \"kubernetes.io/projected/d136e2d6-6468-43c5-942f-71b672962cae-kube-api-access-qwvtw\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:26 crc kubenswrapper[4830]: I0131 09:25:26.069709 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" event={"ID":"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842","Type":"ContainerStarted","Data":"1575f6c9bde9aa49bcd066c47e0fe165efc4fb44b0896b31be8c3f3ba23ffcc4"} Jan 31 09:25:26 crc kubenswrapper[4830]: I0131 09:25:26.069934 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" Jan 31 09:25:26 crc kubenswrapper[4830]: I0131 09:25:26.077151 4830 generic.go:334] "Generic (PLEG): container finished" podID="d136e2d6-6468-43c5-942f-71b672962cae" containerID="8db670658f44cd881cacd40a3d0a06b2519c2ec6152e7a77e36f1a0489d400d3" exitCode=0 Jan 31 09:25:26 crc kubenswrapper[4830]: I0131 09:25:26.077296 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-25d9r" event={"ID":"d136e2d6-6468-43c5-942f-71b672962cae","Type":"ContainerDied","Data":"8db670658f44cd881cacd40a3d0a06b2519c2ec6152e7a77e36f1a0489d400d3"} Jan 31 09:25:26 crc kubenswrapper[4830]: I0131 09:25:26.077333 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-25d9r" event={"ID":"d136e2d6-6468-43c5-942f-71b672962cae","Type":"ContainerDied","Data":"67b840635369ed683f4532b40018dab542c18dcdc088b16d45f05a29d36f7d1d"} Jan 31 09:25:26 crc kubenswrapper[4830]: I0131 09:25:26.077355 4830 scope.go:117] "RemoveContainer" containerID="8db670658f44cd881cacd40a3d0a06b2519c2ec6152e7a77e36f1a0489d400d3" Jan 31 09:25:26 crc kubenswrapper[4830]: I0131 09:25:26.077596 4830 util.go:48] "No ready sandbox for pod can be found. 
Jan 31 09:25:26 crc kubenswrapper[4830]: I0131 09:25:26.077596 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-25d9r"
Jan 31 09:25:26 crc kubenswrapper[4830]: I0131 09:25:26.088572 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0e29fe6c-4095-4fbc-ad45-6b889620ad7b","Type":"ContainerStarted","Data":"25cb76b668dce9b34cd72aa63b9c975e549d2a613452ed830a660e07c96eb546"}
Jan 31 09:25:26 crc kubenswrapper[4830]: I0131 09:25:26.105586 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" podStartSLOduration=7.10556269 podStartE2EDuration="7.10556269s" podCreationTimestamp="2026-01-31 09:25:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:25:26.09362066 +0000 UTC m=+1470.586983102" watchObservedRunningTime="2026-01-31 09:25:26.10556269 +0000 UTC m=+1470.598925132"
Jan 31 09:25:26 crc kubenswrapper[4830]: I0131 09:25:26.141184 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-25d9r"]
Jan 31 09:25:26 crc kubenswrapper[4830]: I0131 09:25:26.161446 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-25d9r"]
Jan 31 09:25:26 crc kubenswrapper[4830]: I0131 09:25:26.175874 4830 scope.go:117] "RemoveContainer" containerID="773bab6729dbd54770124250a80e4a7d28587e68070d03405706327f0630e7ee"
Jan 31 09:25:26 crc kubenswrapper[4830]: I0131 09:25:26.210047 4830 scope.go:117] "RemoveContainer" containerID="508754901a7bb2391c6303ffae85608aeb0943832edadd27887721c3e28c2281"
Jan 31 09:25:26 crc kubenswrapper[4830]: I0131 09:25:26.277330 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d136e2d6-6468-43c5-942f-71b672962cae" path="/var/lib/kubelet/pods/d136e2d6-6468-43c5-942f-71b672962cae/volumes"
Jan 31 09:25:26 crc kubenswrapper[4830]: I0131 09:25:26.289915 4830 scope.go:117] "RemoveContainer" containerID="8db670658f44cd881cacd40a3d0a06b2519c2ec6152e7a77e36f1a0489d400d3"
Jan 31 09:25:26 crc kubenswrapper[4830]: E0131 09:25:26.290993 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8db670658f44cd881cacd40a3d0a06b2519c2ec6152e7a77e36f1a0489d400d3\": container with ID starting with 8db670658f44cd881cacd40a3d0a06b2519c2ec6152e7a77e36f1a0489d400d3 not found: ID does not exist" containerID="8db670658f44cd881cacd40a3d0a06b2519c2ec6152e7a77e36f1a0489d400d3"
Jan 31 09:25:26 crc kubenswrapper[4830]: I0131 09:25:26.291040 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8db670658f44cd881cacd40a3d0a06b2519c2ec6152e7a77e36f1a0489d400d3"} err="failed to get container status \"8db670658f44cd881cacd40a3d0a06b2519c2ec6152e7a77e36f1a0489d400d3\": rpc error: code = NotFound desc = could not find container \"8db670658f44cd881cacd40a3d0a06b2519c2ec6152e7a77e36f1a0489d400d3\": container with ID starting with 8db670658f44cd881cacd40a3d0a06b2519c2ec6152e7a77e36f1a0489d400d3 not found: ID does not exist"
Jan 31 09:25:26 crc kubenswrapper[4830]: I0131 09:25:26.291072 4830 scope.go:117] "RemoveContainer" containerID="773bab6729dbd54770124250a80e4a7d28587e68070d03405706327f0630e7ee"
\"773bab6729dbd54770124250a80e4a7d28587e68070d03405706327f0630e7ee\": container with ID starting with 773bab6729dbd54770124250a80e4a7d28587e68070d03405706327f0630e7ee not found: ID does not exist" containerID="773bab6729dbd54770124250a80e4a7d28587e68070d03405706327f0630e7ee" Jan 31 09:25:26 crc kubenswrapper[4830]: I0131 09:25:26.296998 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"773bab6729dbd54770124250a80e4a7d28587e68070d03405706327f0630e7ee"} err="failed to get container status \"773bab6729dbd54770124250a80e4a7d28587e68070d03405706327f0630e7ee\": rpc error: code = NotFound desc = could not find container \"773bab6729dbd54770124250a80e4a7d28587e68070d03405706327f0630e7ee\": container with ID starting with 773bab6729dbd54770124250a80e4a7d28587e68070d03405706327f0630e7ee not found: ID does not exist" Jan 31 09:25:26 crc kubenswrapper[4830]: I0131 09:25:26.297034 4830 scope.go:117] "RemoveContainer" containerID="508754901a7bb2391c6303ffae85608aeb0943832edadd27887721c3e28c2281" Jan 31 09:25:26 crc kubenswrapper[4830]: E0131 09:25:26.320040 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"508754901a7bb2391c6303ffae85608aeb0943832edadd27887721c3e28c2281\": container with ID starting with 508754901a7bb2391c6303ffae85608aeb0943832edadd27887721c3e28c2281 not found: ID does not exist" containerID="508754901a7bb2391c6303ffae85608aeb0943832edadd27887721c3e28c2281" Jan 31 09:25:26 crc kubenswrapper[4830]: I0131 09:25:26.320139 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"508754901a7bb2391c6303ffae85608aeb0943832edadd27887721c3e28c2281"} err="failed to get container status \"508754901a7bb2391c6303ffae85608aeb0943832edadd27887721c3e28c2281\": rpc error: code = NotFound desc = could not find container \"508754901a7bb2391c6303ffae85608aeb0943832edadd27887721c3e28c2281\": container with ID starting with 508754901a7bb2391c6303ffae85608aeb0943832edadd27887721c3e28c2281 not found: ID does not exist" Jan 31 09:25:27 crc kubenswrapper[4830]: I0131 09:25:27.120194 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0e29fe6c-4095-4fbc-ad45-6b889620ad7b","Type":"ContainerStarted","Data":"0125597ce884955620d92773a47666cb62632ba94ad98232ed8140caf4e0a33f"} Jan 31 09:25:27 crc kubenswrapper[4830]: I0131 09:25:27.120764 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 31 09:25:27 crc kubenswrapper[4830]: I0131 09:25:27.120516 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0e29fe6c-4095-4fbc-ad45-6b889620ad7b" containerName="cinder-api" containerID="cri-o://0125597ce884955620d92773a47666cb62632ba94ad98232ed8140caf4e0a33f" gracePeriod=30 Jan 31 09:25:27 crc kubenswrapper[4830]: I0131 09:25:27.120414 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0e29fe6c-4095-4fbc-ad45-6b889620ad7b" containerName="cinder-api-log" containerID="cri-o://25cb76b668dce9b34cd72aa63b9c975e549d2a613452ed830a660e07c96eb546" gracePeriod=30 Jan 31 09:25:27 crc kubenswrapper[4830]: I0131 09:25:27.125487 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"86af5a4c-fe49-4f01-a832-71260d0ad1e4","Type":"ContainerStarted","Data":"6820a2f81e5e911a2a74220b4d1198c828a0b10ab55e9c2458125c863bf2ebac"} Jan 31 09:25:27 crc kubenswrapper[4830]: I0131 09:25:27.150514 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.150484457 podStartE2EDuration="7.150484457s" podCreationTimestamp="2026-01-31 09:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:25:27.142394817 +0000 UTC m=+1471.635757259" watchObservedRunningTime="2026-01-31 09:25:27.150484457 +0000 UTC m=+1471.643846899" Jan 31 09:25:27 crc kubenswrapper[4830]: I0131 09:25:27.164556 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-57d8f8c487-sqqph" Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.139955 4830 generic.go:334] "Generic (PLEG): container finished" podID="0e29fe6c-4095-4fbc-ad45-6b889620ad7b" containerID="25cb76b668dce9b34cd72aa63b9c975e549d2a613452ed830a660e07c96eb546" exitCode=143 Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.140051 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0e29fe6c-4095-4fbc-ad45-6b889620ad7b","Type":"ContainerDied","Data":"25cb76b668dce9b34cd72aa63b9c975e549d2a613452ed830a660e07c96eb546"} Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.161755 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86af5a4c-fe49-4f01-a832-71260d0ad1e4","Type":"ContainerStarted","Data":"e4a1932dd9e42c2d262f95fa30c91af465b1b416b646dbcc49fc00f9db6d10f8"} Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.196514 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.704696835 podStartE2EDuration="8.196489616s" podCreationTimestamp="2026-01-31 09:25:20 +0000 UTC" firstStartedPulling="2026-01-31 09:25:23.903773093 +0000 UTC m=+1468.397135535" lastFinishedPulling="2026-01-31 09:25:25.395565874 +0000 UTC m=+1469.888928316" observedRunningTime="2026-01-31 09:25:28.186862912 +0000 UTC m=+1472.680225354" watchObservedRunningTime="2026-01-31 09:25:28.196489616 +0000 UTC m=+1472.689852058" Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.843149 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.880241 4830 util.go:48] "No ready sandbox for pod can be found. 
Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.880241 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.967290 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39688f84-c227-4658-aee1-ce5e5d450ca1-scripts\") pod \"39688f84-c227-4658-aee1-ce5e5d450ca1\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") "
Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.967370 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgbw4\" (UniqueName: \"kubernetes.io/projected/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-kube-api-access-cgbw4\") pod \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") "
Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.967392 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwkxh\" (UniqueName: \"kubernetes.io/projected/39688f84-c227-4658-aee1-ce5e5d450ca1-kube-api-access-zwkxh\") pod \"39688f84-c227-4658-aee1-ce5e5d450ca1\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") "
Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.967462 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-logs\") pod \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") "
Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.967584 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-scripts\") pod \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") "
Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.967684 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39688f84-c227-4658-aee1-ce5e5d450ca1-config-data\") pod \"39688f84-c227-4658-aee1-ce5e5d450ca1\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") "
Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.967716 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-config-data\") pod \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") "
Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.967866 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39688f84-c227-4658-aee1-ce5e5d450ca1-run-httpd\") pod \"39688f84-c227-4658-aee1-ce5e5d450ca1\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") "
Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.967971 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-etc-machine-id\") pod \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") "
Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.968066 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39688f84-c227-4658-aee1-ce5e5d450ca1-log-httpd\") pod \"39688f84-c227-4658-aee1-ce5e5d450ca1\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") "
Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.968144 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-combined-ca-bundle\") pod \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") "
Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.968164 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-config-data-custom\") pod \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\" (UID: \"0e29fe6c-4095-4fbc-ad45-6b889620ad7b\") "
Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.968232 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39688f84-c227-4658-aee1-ce5e5d450ca1-combined-ca-bundle\") pod \"39688f84-c227-4658-aee1-ce5e5d450ca1\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") "
Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.968265 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39688f84-c227-4658-aee1-ce5e5d450ca1-sg-core-conf-yaml\") pod \"39688f84-c227-4658-aee1-ce5e5d450ca1\" (UID: \"39688f84-c227-4658-aee1-ce5e5d450ca1\") "
Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.968766 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0e29fe6c-4095-4fbc-ad45-6b889620ad7b" (UID: "0e29fe6c-4095-4fbc-ad45-6b889620ad7b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.969001 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39688f84-c227-4658-aee1-ce5e5d450ca1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "39688f84-c227-4658-aee1-ce5e5d450ca1" (UID: "39688f84-c227-4658-aee1-ce5e5d450ca1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.969108 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39688f84-c227-4658-aee1-ce5e5d450ca1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "39688f84-c227-4658-aee1-ce5e5d450ca1" (UID: "39688f84-c227-4658-aee1-ce5e5d450ca1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.970420 4830 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39688f84-c227-4658-aee1-ce5e5d450ca1-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.970456 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-logs\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.970492 4830 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39688f84-c227-4658-aee1-ce5e5d450ca1-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.970508 4830 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.978852 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0e29fe6c-4095-4fbc-ad45-6b889620ad7b" (UID: "0e29fe6c-4095-4fbc-ad45-6b889620ad7b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.985703 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-scripts" (OuterVolumeSpecName: "scripts") pod "0e29fe6c-4095-4fbc-ad45-6b889620ad7b" (UID: "0e29fe6c-4095-4fbc-ad45-6b889620ad7b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.986707 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39688f84-c227-4658-aee1-ce5e5d450ca1-kube-api-access-zwkxh" (OuterVolumeSpecName: "kube-api-access-zwkxh") pod "39688f84-c227-4658-aee1-ce5e5d450ca1" (UID: "39688f84-c227-4658-aee1-ce5e5d450ca1"). InnerVolumeSpecName "kube-api-access-zwkxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.989651 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39688f84-c227-4658-aee1-ce5e5d450ca1-scripts" (OuterVolumeSpecName: "scripts") pod "39688f84-c227-4658-aee1-ce5e5d450ca1" (UID: "39688f84-c227-4658-aee1-ce5e5d450ca1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:28 crc kubenswrapper[4830]: I0131 09:25:28.991581 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-kube-api-access-cgbw4" (OuterVolumeSpecName: "kube-api-access-cgbw4") pod "0e29fe6c-4095-4fbc-ad45-6b889620ad7b" (UID: "0e29fe6c-4095-4fbc-ad45-6b889620ad7b"). InnerVolumeSpecName "kube-api-access-cgbw4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.027345 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39688f84-c227-4658-aee1-ce5e5d450ca1-config-data" (OuterVolumeSpecName: "config-data") pod "39688f84-c227-4658-aee1-ce5e5d450ca1" (UID: "39688f84-c227-4658-aee1-ce5e5d450ca1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.039052 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39688f84-c227-4658-aee1-ce5e5d450ca1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "39688f84-c227-4658-aee1-ce5e5d450ca1" (UID: "39688f84-c227-4658-aee1-ce5e5d450ca1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.042515 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39688f84-c227-4658-aee1-ce5e5d450ca1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39688f84-c227-4658-aee1-ce5e5d450ca1" (UID: "39688f84-c227-4658-aee1-ce5e5d450ca1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.065131 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e29fe6c-4095-4fbc-ad45-6b889620ad7b" (UID: "0e29fe6c-4095-4fbc-ad45-6b889620ad7b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.073603 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.073650 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.073659 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39688f84-c227-4658-aee1-ce5e5d450ca1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.073668 4830 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39688f84-c227-4658-aee1-ce5e5d450ca1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.073678 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39688f84-c227-4658-aee1-ce5e5d450ca1-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.073690 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgbw4\" (UniqueName: \"kubernetes.io/projected/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-kube-api-access-cgbw4\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.073704 4830 reconciler_common.go:293] "Volume 
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.073704 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwkxh\" (UniqueName: \"kubernetes.io/projected/39688f84-c227-4658-aee1-ce5e5d450ca1-kube-api-access-zwkxh\") on node \"crc\" DevicePath \"\""
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.073713 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.073734 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39688f84-c227-4658-aee1-ce5e5d450ca1-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.086017 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-config-data" (OuterVolumeSpecName: "config-data") pod "0e29fe6c-4095-4fbc-ad45-6b889620ad7b" (UID: "0e29fe6c-4095-4fbc-ad45-6b889620ad7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.176658 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e29fe6c-4095-4fbc-ad45-6b889620ad7b-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.182661 4830 generic.go:334] "Generic (PLEG): container finished" podID="0e29fe6c-4095-4fbc-ad45-6b889620ad7b" containerID="0125597ce884955620d92773a47666cb62632ba94ad98232ed8140caf4e0a33f" exitCode=0
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.182842 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.182870 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0e29fe6c-4095-4fbc-ad45-6b889620ad7b","Type":"ContainerDied","Data":"0125597ce884955620d92773a47666cb62632ba94ad98232ed8140caf4e0a33f"}
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.183050 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0e29fe6c-4095-4fbc-ad45-6b889620ad7b","Type":"ContainerDied","Data":"aa442de484b9f5939338d65325c7b6d2b93e34f70dd7b1abce057d39bf36e99b"}
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.183083 4830 scope.go:117] "RemoveContainer" containerID="0125597ce884955620d92773a47666cb62632ba94ad98232ed8140caf4e0a33f"
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.187412 4830 generic.go:334] "Generic (PLEG): container finished" podID="39688f84-c227-4658-aee1-ce5e5d450ca1" containerID="6c78395d815c0f304dabbb72d124784561343be071e34588d43374ea0a8c7ab6" exitCode=0
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.187536 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.187530 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39688f84-c227-4658-aee1-ce5e5d450ca1","Type":"ContainerDied","Data":"6c78395d815c0f304dabbb72d124784561343be071e34588d43374ea0a8c7ab6"}
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.187628 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39688f84-c227-4658-aee1-ce5e5d450ca1","Type":"ContainerDied","Data":"ee4289ea302093429d5df627640be772463a6449d5b3c652786a1c1df47a36e1"}
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.278123 4830 scope.go:117] "RemoveContainer" containerID="25cb76b668dce9b34cd72aa63b9c975e549d2a613452ed830a660e07c96eb546"
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.308477 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.393826 4830 scope.go:117] "RemoveContainer" containerID="0125597ce884955620d92773a47666cb62632ba94ad98232ed8140caf4e0a33f"
Jan 31 09:25:29 crc kubenswrapper[4830]: E0131 09:25:29.395013 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0125597ce884955620d92773a47666cb62632ba94ad98232ed8140caf4e0a33f\": container with ID starting with 0125597ce884955620d92773a47666cb62632ba94ad98232ed8140caf4e0a33f not found: ID does not exist" containerID="0125597ce884955620d92773a47666cb62632ba94ad98232ed8140caf4e0a33f"
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.395090 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0125597ce884955620d92773a47666cb62632ba94ad98232ed8140caf4e0a33f"} err="failed to get container status \"0125597ce884955620d92773a47666cb62632ba94ad98232ed8140caf4e0a33f\": rpc error: code = NotFound desc = could not find container \"0125597ce884955620d92773a47666cb62632ba94ad98232ed8140caf4e0a33f\": container with ID starting with 0125597ce884955620d92773a47666cb62632ba94ad98232ed8140caf4e0a33f not found: ID does not exist"
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.395130 4830 scope.go:117] "RemoveContainer" containerID="25cb76b668dce9b34cd72aa63b9c975e549d2a613452ed830a660e07c96eb546"
Jan 31 09:25:29 crc kubenswrapper[4830]: E0131 09:25:29.396330 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25cb76b668dce9b34cd72aa63b9c975e549d2a613452ed830a660e07c96eb546\": container with ID starting with 25cb76b668dce9b34cd72aa63b9c975e549d2a613452ed830a660e07c96eb546 not found: ID does not exist" containerID="25cb76b668dce9b34cd72aa63b9c975e549d2a613452ed830a660e07c96eb546"
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.396376 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25cb76b668dce9b34cd72aa63b9c975e549d2a613452ed830a660e07c96eb546"} err="failed to get container status \"25cb76b668dce9b34cd72aa63b9c975e549d2a613452ed830a660e07c96eb546\": rpc error: code = NotFound desc = could not find container \"25cb76b668dce9b34cd72aa63b9c975e549d2a613452ed830a660e07c96eb546\": container with ID starting with 25cb76b668dce9b34cd72aa63b9c975e549d2a613452ed830a660e07c96eb546 not found: ID does not exist"
containerID="ada6cefce159c5c6e84f6c0ce9d82ac301872a4c0b6ad072a5e89202581763bc" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.416608 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.436292 4830 scope.go:117] "RemoveContainer" containerID="6c78395d815c0f304dabbb72d124784561343be071e34588d43374ea0a8c7ab6" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.473979 4830 scope.go:117] "RemoveContainer" containerID="ada6cefce159c5c6e84f6c0ce9d82ac301872a4c0b6ad072a5e89202581763bc" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.474154 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 31 09:25:29 crc kubenswrapper[4830]: E0131 09:25:29.474661 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ada6cefce159c5c6e84f6c0ce9d82ac301872a4c0b6ad072a5e89202581763bc\": container with ID starting with ada6cefce159c5c6e84f6c0ce9d82ac301872a4c0b6ad072a5e89202581763bc not found: ID does not exist" containerID="ada6cefce159c5c6e84f6c0ce9d82ac301872a4c0b6ad072a5e89202581763bc" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.474747 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ada6cefce159c5c6e84f6c0ce9d82ac301872a4c0b6ad072a5e89202581763bc"} err="failed to get container status \"ada6cefce159c5c6e84f6c0ce9d82ac301872a4c0b6ad072a5e89202581763bc\": rpc error: code = NotFound desc = could not find container \"ada6cefce159c5c6e84f6c0ce9d82ac301872a4c0b6ad072a5e89202581763bc\": container with ID starting with ada6cefce159c5c6e84f6c0ce9d82ac301872a4c0b6ad072a5e89202581763bc not found: ID does not exist" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.474783 4830 scope.go:117] "RemoveContainer" containerID="6c78395d815c0f304dabbb72d124784561343be071e34588d43374ea0a8c7ab6" Jan 31 09:25:29 crc kubenswrapper[4830]: E0131 09:25:29.474960 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39688f84-c227-4658-aee1-ce5e5d450ca1" containerName="ceilometer-notification-agent" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.474985 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="39688f84-c227-4658-aee1-ce5e5d450ca1" containerName="ceilometer-notification-agent" Jan 31 09:25:29 crc kubenswrapper[4830]: E0131 09:25:29.475036 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39688f84-c227-4658-aee1-ce5e5d450ca1" containerName="sg-core" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.475045 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="39688f84-c227-4658-aee1-ce5e5d450ca1" containerName="sg-core" Jan 31 09:25:29 crc kubenswrapper[4830]: E0131 09:25:29.475058 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050c36c9-2a82-4d10-a00f-c252a73374ba" containerName="barbican-api" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.475066 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="050c36c9-2a82-4d10-a00f-c252a73374ba" containerName="barbican-api" Jan 31 09:25:29 crc kubenswrapper[4830]: E0131 09:25:29.475085 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e29fe6c-4095-4fbc-ad45-6b889620ad7b" containerName="cinder-api-log" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.475093 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e29fe6c-4095-4fbc-ad45-6b889620ad7b" 
containerName="cinder-api-log" Jan 31 09:25:29 crc kubenswrapper[4830]: E0131 09:25:29.475107 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050c36c9-2a82-4d10-a00f-c252a73374ba" containerName="barbican-api-log" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.475115 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="050c36c9-2a82-4d10-a00f-c252a73374ba" containerName="barbican-api-log" Jan 31 09:25:29 crc kubenswrapper[4830]: E0131 09:25:29.475127 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d136e2d6-6468-43c5-942f-71b672962cae" containerName="extract-utilities" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.475135 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d136e2d6-6468-43c5-942f-71b672962cae" containerName="extract-utilities" Jan 31 09:25:29 crc kubenswrapper[4830]: E0131 09:25:29.475148 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d136e2d6-6468-43c5-942f-71b672962cae" containerName="extract-content" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.475155 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d136e2d6-6468-43c5-942f-71b672962cae" containerName="extract-content" Jan 31 09:25:29 crc kubenswrapper[4830]: E0131 09:25:29.475175 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d136e2d6-6468-43c5-942f-71b672962cae" containerName="registry-server" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.475184 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d136e2d6-6468-43c5-942f-71b672962cae" containerName="registry-server" Jan 31 09:25:29 crc kubenswrapper[4830]: E0131 09:25:29.475201 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e29fe6c-4095-4fbc-ad45-6b889620ad7b" containerName="cinder-api" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.475209 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e29fe6c-4095-4fbc-ad45-6b889620ad7b" containerName="cinder-api" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.475490 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e29fe6c-4095-4fbc-ad45-6b889620ad7b" containerName="cinder-api-log" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.475515 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="050c36c9-2a82-4d10-a00f-c252a73374ba" containerName="barbican-api-log" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.475530 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e29fe6c-4095-4fbc-ad45-6b889620ad7b" containerName="cinder-api" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.475546 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="39688f84-c227-4658-aee1-ce5e5d450ca1" containerName="sg-core" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.475560 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="050c36c9-2a82-4d10-a00f-c252a73374ba" containerName="barbican-api" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.475572 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="d136e2d6-6468-43c5-942f-71b672962cae" containerName="registry-server" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.475592 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="39688f84-c227-4658-aee1-ce5e5d450ca1" containerName="ceilometer-notification-agent" Jan 31 09:25:29 crc kubenswrapper[4830]: E0131 09:25:29.476939 4830 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6c78395d815c0f304dabbb72d124784561343be071e34588d43374ea0a8c7ab6\": container with ID starting with 6c78395d815c0f304dabbb72d124784561343be071e34588d43374ea0a8c7ab6 not found: ID does not exist" containerID="6c78395d815c0f304dabbb72d124784561343be071e34588d43374ea0a8c7ab6" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.476974 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c78395d815c0f304dabbb72d124784561343be071e34588d43374ea0a8c7ab6"} err="failed to get container status \"6c78395d815c0f304dabbb72d124784561343be071e34588d43374ea0a8c7ab6\": rpc error: code = NotFound desc = could not find container \"6c78395d815c0f304dabbb72d124784561343be071e34588d43374ea0a8c7ab6\": container with ID starting with 6c78395d815c0f304dabbb72d124784561343be071e34588d43374ea0a8c7ab6 not found: ID does not exist" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.477198 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.483596 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.483828 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.483991 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.494839 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.528985 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.549827 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.577280 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.581503 4830 util.go:30] "No sandbox for pod can be found. 
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.581503 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.585245 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.585585 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.603415 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.620376 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945c030b-2a43-431b-b898-d3a28b4e3821-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0"
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.620449 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfcw8\" (UniqueName: \"kubernetes.io/projected/945c030b-2a43-431b-b898-d3a28b4e3821-kube-api-access-dfcw8\") pod \"cinder-api-0\" (UID: \"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0"
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.620514 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/945c030b-2a43-431b-b898-d3a28b4e3821-logs\") pod \"cinder-api-0\" (UID: \"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0"
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.620532 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd3d398c-5a9f-4835-9b6a-6700097e85ed-config-data\") pod \"ceilometer-0\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") " pod="openstack/ceilometer-0"
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.620550 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd3d398c-5a9f-4835-9b6a-6700097e85ed-scripts\") pod \"ceilometer-0\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") " pod="openstack/ceilometer-0"
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.620618 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxsw8\" (UniqueName: \"kubernetes.io/projected/fd3d398c-5a9f-4835-9b6a-6700097e85ed-kube-api-access-sxsw8\") pod \"ceilometer-0\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") " pod="openstack/ceilometer-0"
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.620644 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd3d398c-5a9f-4835-9b6a-6700097e85ed-log-httpd\") pod \"ceilometer-0\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") " pod="openstack/ceilometer-0"
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.620682 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/945c030b-2a43-431b-b898-d3a28b4e3821-public-tls-certs\") pod \"cinder-api-0\" (UID: \"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0"
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.620736 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/945c030b-2a43-431b-b898-d3a28b4e3821-etc-machine-id\") pod \"cinder-api-0\" (UID: \"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0"
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.620766 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd3d398c-5a9f-4835-9b6a-6700097e85ed-run-httpd\") pod \"ceilometer-0\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") " pod="openstack/ceilometer-0"
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.620819 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3d398c-5a9f-4835-9b6a-6700097e85ed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") " pod="openstack/ceilometer-0"
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.620842 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/945c030b-2a43-431b-b898-d3a28b4e3821-scripts\") pod \"cinder-api-0\" (UID: \"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0"
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.620870 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/945c030b-2a43-431b-b898-d3a28b4e3821-config-data-custom\") pod \"cinder-api-0\" (UID: \"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0"
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.620889 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd3d398c-5a9f-4835-9b6a-6700097e85ed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") " pod="openstack/ceilometer-0"
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.620909 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/945c030b-2a43-431b-b898-d3a28b4e3821-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0"
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.620928 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/945c030b-2a43-431b-b898-d3a28b4e3821-config-data\") pod \"cinder-api-0\" (UID: \"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0"
Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.724374 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxsw8\" (UniqueName: \"kubernetes.io/projected/fd3d398c-5a9f-4835-9b6a-6700097e85ed-kube-api-access-sxsw8\") pod \"ceilometer-0\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") " pod="openstack/ceilometer-0"
\"kubernetes.io/empty-dir/fd3d398c-5a9f-4835-9b6a-6700097e85ed-log-httpd\") pod \"ceilometer-0\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") " pod="openstack/ceilometer-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.725103 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/945c030b-2a43-431b-b898-d3a28b4e3821-public-tls-certs\") pod \"cinder-api-0\" (UID: \"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.725167 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/945c030b-2a43-431b-b898-d3a28b4e3821-etc-machine-id\") pod \"cinder-api-0\" (UID: \"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.725212 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd3d398c-5a9f-4835-9b6a-6700097e85ed-run-httpd\") pod \"ceilometer-0\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") " pod="openstack/ceilometer-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.725301 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3d398c-5a9f-4835-9b6a-6700097e85ed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") " pod="openstack/ceilometer-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.725337 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/945c030b-2a43-431b-b898-d3a28b4e3821-scripts\") pod \"cinder-api-0\" (UID: \"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.725371 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/945c030b-2a43-431b-b898-d3a28b4e3821-config-data-custom\") pod \"cinder-api-0\" (UID: \"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.725403 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/945c030b-2a43-431b-b898-d3a28b4e3821-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.725428 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd3d398c-5a9f-4835-9b6a-6700097e85ed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") " pod="openstack/ceilometer-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.725458 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/945c030b-2a43-431b-b898-d3a28b4e3821-config-data\") pod \"cinder-api-0\" (UID: \"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.725490 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/945c030b-2a43-431b-b898-d3a28b4e3821-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.725560 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfcw8\" (UniqueName: \"kubernetes.io/projected/945c030b-2a43-431b-b898-d3a28b4e3821-kube-api-access-dfcw8\") pod \"cinder-api-0\" (UID: \"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.725580 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd3d398c-5a9f-4835-9b6a-6700097e85ed-log-httpd\") pod \"ceilometer-0\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") " pod="openstack/ceilometer-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.725749 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/945c030b-2a43-431b-b898-d3a28b4e3821-logs\") pod \"cinder-api-0\" (UID: \"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.727001 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd3d398c-5a9f-4835-9b6a-6700097e85ed-config-data\") pod \"ceilometer-0\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") " pod="openstack/ceilometer-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.727047 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd3d398c-5a9f-4835-9b6a-6700097e85ed-scripts\") pod \"ceilometer-0\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") " pod="openstack/ceilometer-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.726821 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/945c030b-2a43-431b-b898-d3a28b4e3821-logs\") pod \"cinder-api-0\" (UID: \"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.726776 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd3d398c-5a9f-4835-9b6a-6700097e85ed-run-httpd\") pod \"ceilometer-0\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") " pod="openstack/ceilometer-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.725956 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/945c030b-2a43-431b-b898-d3a28b4e3821-etc-machine-id\") pod \"cinder-api-0\" (UID: \"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.734634 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd3d398c-5a9f-4835-9b6a-6700097e85ed-scripts\") pod \"ceilometer-0\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") " pod="openstack/ceilometer-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.734929 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/945c030b-2a43-431b-b898-d3a28b4e3821-public-tls-certs\") pod \"cinder-api-0\" (UID: 
\"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.735227 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/945c030b-2a43-431b-b898-d3a28b4e3821-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.735935 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/945c030b-2a43-431b-b898-d3a28b4e3821-config-data-custom\") pod \"cinder-api-0\" (UID: \"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.737459 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/945c030b-2a43-431b-b898-d3a28b4e3821-scripts\") pod \"cinder-api-0\" (UID: \"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.737650 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd3d398c-5a9f-4835-9b6a-6700097e85ed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") " pod="openstack/ceilometer-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.738177 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd3d398c-5a9f-4835-9b6a-6700097e85ed-config-data\") pod \"ceilometer-0\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") " pod="openstack/ceilometer-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.743456 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/945c030b-2a43-431b-b898-d3a28b4e3821-config-data\") pod \"cinder-api-0\" (UID: \"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.743880 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3d398c-5a9f-4835-9b6a-6700097e85ed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") " pod="openstack/ceilometer-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.744886 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945c030b-2a43-431b-b898-d3a28b4e3821-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.746080 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfcw8\" (UniqueName: \"kubernetes.io/projected/945c030b-2a43-431b-b898-d3a28b4e3821-kube-api-access-dfcw8\") pod \"cinder-api-0\" (UID: \"945c030b-2a43-431b-b898-d3a28b4e3821\") " pod="openstack/cinder-api-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.767512 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxsw8\" (UniqueName: \"kubernetes.io/projected/fd3d398c-5a9f-4835-9b6a-6700097e85ed-kube-api-access-sxsw8\") pod \"ceilometer-0\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") " 
pod="openstack/ceilometer-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.817054 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 09:25:29 crc kubenswrapper[4830]: I0131 09:25:29.915201 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.163983 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.167333 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.172226 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-2n2vj" Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.172464 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.172637 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.178289 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.241883 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc22h\" (UniqueName: \"kubernetes.io/projected/4ed170d0-8e88-40c3-a2b4-9908fc87a3db-kube-api-access-zc22h\") pod \"openstackclient\" (UID: \"4ed170d0-8e88-40c3-a2b4-9908fc87a3db\") " pod="openstack/openstackclient" Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.241971 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4ed170d0-8e88-40c3-a2b4-9908fc87a3db-openstack-config-secret\") pod \"openstackclient\" (UID: \"4ed170d0-8e88-40c3-a2b4-9908fc87a3db\") " pod="openstack/openstackclient" Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.242101 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ed170d0-8e88-40c3-a2b4-9908fc87a3db-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4ed170d0-8e88-40c3-a2b4-9908fc87a3db\") " pod="openstack/openstackclient" Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.242217 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4ed170d0-8e88-40c3-a2b4-9908fc87a3db-openstack-config\") pod \"openstackclient\" (UID: \"4ed170d0-8e88-40c3-a2b4-9908fc87a3db\") " pod="openstack/openstackclient" Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.269285 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e29fe6c-4095-4fbc-ad45-6b889620ad7b" path="/var/lib/kubelet/pods/0e29fe6c-4095-4fbc-ad45-6b889620ad7b/volumes" Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.270375 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39688f84-c227-4658-aee1-ce5e5d450ca1" path="/var/lib/kubelet/pods/39688f84-c227-4658-aee1-ce5e5d450ca1/volumes" Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.347039 4830 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ed170d0-8e88-40c3-a2b4-9908fc87a3db-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4ed170d0-8e88-40c3-a2b4-9908fc87a3db\") " pod="openstack/openstackclient" Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.347681 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4ed170d0-8e88-40c3-a2b4-9908fc87a3db-openstack-config\") pod \"openstackclient\" (UID: \"4ed170d0-8e88-40c3-a2b4-9908fc87a3db\") " pod="openstack/openstackclient" Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.349148 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4ed170d0-8e88-40c3-a2b4-9908fc87a3db-openstack-config\") pod \"openstackclient\" (UID: \"4ed170d0-8e88-40c3-a2b4-9908fc87a3db\") " pod="openstack/openstackclient" Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.349423 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc22h\" (UniqueName: \"kubernetes.io/projected/4ed170d0-8e88-40c3-a2b4-9908fc87a3db-kube-api-access-zc22h\") pod \"openstackclient\" (UID: \"4ed170d0-8e88-40c3-a2b4-9908fc87a3db\") " pod="openstack/openstackclient" Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.349545 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4ed170d0-8e88-40c3-a2b4-9908fc87a3db-openstack-config-secret\") pod \"openstackclient\" (UID: \"4ed170d0-8e88-40c3-a2b4-9908fc87a3db\") " pod="openstack/openstackclient" Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.363202 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ed170d0-8e88-40c3-a2b4-9908fc87a3db-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4ed170d0-8e88-40c3-a2b4-9908fc87a3db\") " pod="openstack/openstackclient" Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.376405 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.378324 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4ed170d0-8e88-40c3-a2b4-9908fc87a3db-openstack-config-secret\") pod \"openstackclient\" (UID: \"4ed170d0-8e88-40c3-a2b4-9908fc87a3db\") " pod="openstack/openstackclient" Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.391749 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.400670 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc22h\" (UniqueName: \"kubernetes.io/projected/4ed170d0-8e88-40c3-a2b4-9908fc87a3db-kube-api-access-zc22h\") pod \"openstackclient\" (UID: \"4ed170d0-8e88-40c3-a2b4-9908fc87a3db\") " pod="openstack/openstackclient" Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.499515 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-pfcld"] Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.500010 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld" podUID="b9f0dccc-8d65-4aa7-81c9-548907df8af4" 
containerName="dnsmasq-dns" containerID="cri-o://8de8560fc90445522a14e354f527ee56a73e3d5d539f428fcc3ddd4040d2e3b9" gracePeriod=10 Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.527714 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.596535 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 31 09:25:30 crc kubenswrapper[4830]: I0131 09:25:30.671403 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:25:31 crc kubenswrapper[4830]: I0131 09:25:31.256684 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd3d398c-5a9f-4835-9b6a-6700097e85ed","Type":"ContainerStarted","Data":"43f7094c271b65d5fb5dba1876b7a0276d1dd694dcfe6df091a4418ca3073ed0"} Jan 31 09:25:31 crc kubenswrapper[4830]: I0131 09:25:31.261832 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"945c030b-2a43-431b-b898-d3a28b4e3821","Type":"ContainerStarted","Data":"e27e5c1b6620d5352a9af2f6bf98242186c1163aeb1faad8b37c5bba973bbbf6"} Jan 31 09:25:31 crc kubenswrapper[4830]: I0131 09:25:31.275255 4830 generic.go:334] "Generic (PLEG): container finished" podID="b9f0dccc-8d65-4aa7-81c9-548907df8af4" containerID="8de8560fc90445522a14e354f527ee56a73e3d5d539f428fcc3ddd4040d2e3b9" exitCode=0 Jan 31 09:25:31 crc kubenswrapper[4830]: I0131 09:25:31.275312 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld" event={"ID":"b9f0dccc-8d65-4aa7-81c9-548907df8af4","Type":"ContainerDied","Data":"8de8560fc90445522a14e354f527ee56a73e3d5d539f428fcc3ddd4040d2e3b9"} Jan 31 09:25:31 crc kubenswrapper[4830]: I0131 09:25:31.366031 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 31 09:25:31 crc kubenswrapper[4830]: I0131 09:25:31.938231 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld" Jan 31 09:25:32 crc kubenswrapper[4830]: I0131 09:25:32.097927 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-ovsdbserver-nb\") pod \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\" (UID: \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\") " Jan 31 09:25:32 crc kubenswrapper[4830]: I0131 09:25:32.099024 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-dns-svc\") pod \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\" (UID: \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\") " Jan 31 09:25:32 crc kubenswrapper[4830]: I0131 09:25:32.099222 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-dns-swift-storage-0\") pod \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\" (UID: \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\") " Jan 31 09:25:32 crc kubenswrapper[4830]: I0131 09:25:32.099254 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjjvk\" (UniqueName: \"kubernetes.io/projected/b9f0dccc-8d65-4aa7-81c9-548907df8af4-kube-api-access-fjjvk\") pod \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\" (UID: \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\") " Jan 31 09:25:32 crc kubenswrapper[4830]: I0131 09:25:32.099383 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-ovsdbserver-sb\") pod \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\" (UID: \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\") " Jan 31 09:25:32 crc kubenswrapper[4830]: I0131 09:25:32.099931 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-config\") pod \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\" (UID: \"b9f0dccc-8d65-4aa7-81c9-548907df8af4\") " Jan 31 09:25:32 crc kubenswrapper[4830]: I0131 09:25:32.203859 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9f0dccc-8d65-4aa7-81c9-548907df8af4-kube-api-access-fjjvk" (OuterVolumeSpecName: "kube-api-access-fjjvk") pod "b9f0dccc-8d65-4aa7-81c9-548907df8af4" (UID: "b9f0dccc-8d65-4aa7-81c9-548907df8af4"). InnerVolumeSpecName "kube-api-access-fjjvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:25:32 crc kubenswrapper[4830]: I0131 09:25:32.232287 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjjvk\" (UniqueName: \"kubernetes.io/projected/b9f0dccc-8d65-4aa7-81c9-548907df8af4-kube-api-access-fjjvk\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:32 crc kubenswrapper[4830]: I0131 09:25:32.250436 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-config" (OuterVolumeSpecName: "config") pod "b9f0dccc-8d65-4aa7-81c9-548907df8af4" (UID: "b9f0dccc-8d65-4aa7-81c9-548907df8af4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:25:32 crc kubenswrapper[4830]: I0131 09:25:32.347091 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:32 crc kubenswrapper[4830]: I0131 09:25:32.357370 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b9f0dccc-8d65-4aa7-81c9-548907df8af4" (UID: "b9f0dccc-8d65-4aa7-81c9-548907df8af4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:25:32 crc kubenswrapper[4830]: I0131 09:25:32.378775 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld" Jan 31 09:25:32 crc kubenswrapper[4830]: I0131 09:25:32.390758 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b9f0dccc-8d65-4aa7-81c9-548907df8af4" (UID: "b9f0dccc-8d65-4aa7-81c9-548907df8af4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:25:32 crc kubenswrapper[4830]: I0131 09:25:32.395127 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b9f0dccc-8d65-4aa7-81c9-548907df8af4" (UID: "b9f0dccc-8d65-4aa7-81c9-548907df8af4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:25:32 crc kubenswrapper[4830]: I0131 09:25:32.424560 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b9f0dccc-8d65-4aa7-81c9-548907df8af4" (UID: "b9f0dccc-8d65-4aa7-81c9-548907df8af4"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:25:32 crc kubenswrapper[4830]: I0131 09:25:32.450475 4830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:32 crc kubenswrapper[4830]: I0131 09:25:32.450531 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:32 crc kubenswrapper[4830]: I0131 09:25:32.450543 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:32 crc kubenswrapper[4830]: I0131 09:25:32.450555 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9f0dccc-8d65-4aa7-81c9-548907df8af4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:32 crc kubenswrapper[4830]: I0131 09:25:32.498233 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-pfcld" event={"ID":"b9f0dccc-8d65-4aa7-81c9-548907df8af4","Type":"ContainerDied","Data":"c94c243fc25f354758328246b89afd6381ff241fdfdd3f787538de0920265d86"} Jan 31 09:25:32 crc kubenswrapper[4830]: I0131 09:25:32.498298 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"945c030b-2a43-431b-b898-d3a28b4e3821","Type":"ContainerStarted","Data":"03c8c558c71a14f1ab3f943da83ccd95aa3384406730ea5a24938c15a44d1351"} Jan 31 09:25:32 crc kubenswrapper[4830]: I0131 09:25:32.498314 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4ed170d0-8e88-40c3-a2b4-9908fc87a3db","Type":"ContainerStarted","Data":"af5fb69f74678c21208c6aa6006fde1d167d9d64941dc62ef89b08a911fc2e35"} Jan 31 09:25:32 crc kubenswrapper[4830]: I0131 09:25:32.498336 4830 scope.go:117] "RemoveContainer" containerID="8de8560fc90445522a14e354f527ee56a73e3d5d539f428fcc3ddd4040d2e3b9" Jan 31 09:25:32 crc kubenswrapper[4830]: I0131 09:25:32.615422 4830 scope.go:117] "RemoveContainer" containerID="fc66f93cf2dd5e1ef6bdfeed1b2f9c16ad775a8d00fc09bdce347f7a625175bc" Jan 31 09:25:32 crc kubenswrapper[4830]: I0131 09:25:32.760209 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-pfcld"] Jan 31 09:25:32 crc kubenswrapper[4830]: I0131 09:25:32.792713 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-pfcld"] Jan 31 09:25:33 crc kubenswrapper[4830]: I0131 09:25:33.409084 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd3d398c-5a9f-4835-9b6a-6700097e85ed","Type":"ContainerStarted","Data":"cb4a05ac9302c7356f4830d38a00f8f941d688e43deecb9bdbf3ea14257b5c5e"} Jan 31 09:25:33 crc kubenswrapper[4830]: I0131 09:25:33.412776 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"945c030b-2a43-431b-b898-d3a28b4e3821","Type":"ContainerStarted","Data":"bdcafe7e4ff4b85a5eb743154508134ecd3792f2fe65025ac441692a77da999d"} Jan 31 09:25:33 crc kubenswrapper[4830]: I0131 09:25:33.413119 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 31 09:25:33 crc kubenswrapper[4830]: I0131 09:25:33.452763 4830 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.452713411 podStartE2EDuration="4.452713411s" podCreationTimestamp="2026-01-31 09:25:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:25:33.442430688 +0000 UTC m=+1477.935793130" watchObservedRunningTime="2026-01-31 09:25:33.452713411 +0000 UTC m=+1477.946075853" Jan 31 09:25:33 crc kubenswrapper[4830]: I0131 09:25:33.855670 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6cd8b566d4-4q75x"] Jan 31 09:25:33 crc kubenswrapper[4830]: I0131 09:25:33.856036 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6cd8b566d4-4q75x" podUID="74254e68-cbf8-446e-a2d8-768185ec778f" containerName="neutron-api" containerID="cri-o://9c7ed3187c5fd1fce5ed05e9c48a484b4b9883935cce4cff33ab889828b9bc46" gracePeriod=30 Jan 31 09:25:33 crc kubenswrapper[4830]: I0131 09:25:33.857088 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6cd8b566d4-4q75x" podUID="74254e68-cbf8-446e-a2d8-768185ec778f" containerName="neutron-httpd" containerID="cri-o://4c6decc22e93c41d7227bdefca13848b3fc35f5adc6c7d1553fa05be847967dc" gracePeriod=30 Jan 31 09:25:33 crc kubenswrapper[4830]: I0131 09:25:33.889161 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6cd8b566d4-4q75x" Jan 31 09:25:33 crc kubenswrapper[4830]: I0131 09:25:33.900370 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-cc7d8b455-4zmj7"] Jan 31 09:25:33 crc kubenswrapper[4830]: E0131 09:25:33.901164 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9f0dccc-8d65-4aa7-81c9-548907df8af4" containerName="dnsmasq-dns" Jan 31 09:25:33 crc kubenswrapper[4830]: I0131 09:25:33.901185 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9f0dccc-8d65-4aa7-81c9-548907df8af4" containerName="dnsmasq-dns" Jan 31 09:25:33 crc kubenswrapper[4830]: E0131 09:25:33.901221 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9f0dccc-8d65-4aa7-81c9-548907df8af4" containerName="init" Jan 31 09:25:33 crc kubenswrapper[4830]: I0131 09:25:33.901228 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9f0dccc-8d65-4aa7-81c9-548907df8af4" containerName="init" Jan 31 09:25:33 crc kubenswrapper[4830]: I0131 09:25:33.901509 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9f0dccc-8d65-4aa7-81c9-548907df8af4" containerName="dnsmasq-dns" Jan 31 09:25:33 crc kubenswrapper[4830]: I0131 09:25:33.903212 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cc7d8b455-4zmj7" Jan 31 09:25:33 crc kubenswrapper[4830]: I0131 09:25:33.925349 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cc7d8b455-4zmj7"] Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.003607 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d1262ef4-ec58-4db3-a66e-be826421d514-config\") pod \"neutron-cc7d8b455-4zmj7\" (UID: \"d1262ef4-ec58-4db3-a66e-be826421d514\") " pod="openstack/neutron-cc7d8b455-4zmj7" Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.004112 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1262ef4-ec58-4db3-a66e-be826421d514-ovndb-tls-certs\") pod \"neutron-cc7d8b455-4zmj7\" (UID: \"d1262ef4-ec58-4db3-a66e-be826421d514\") " pod="openstack/neutron-cc7d8b455-4zmj7" Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.004162 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1262ef4-ec58-4db3-a66e-be826421d514-internal-tls-certs\") pod \"neutron-cc7d8b455-4zmj7\" (UID: \"d1262ef4-ec58-4db3-a66e-be826421d514\") " pod="openstack/neutron-cc7d8b455-4zmj7" Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.004234 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d1262ef4-ec58-4db3-a66e-be826421d514-httpd-config\") pod \"neutron-cc7d8b455-4zmj7\" (UID: \"d1262ef4-ec58-4db3-a66e-be826421d514\") " pod="openstack/neutron-cc7d8b455-4zmj7" Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.004264 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1262ef4-ec58-4db3-a66e-be826421d514-combined-ca-bundle\") pod \"neutron-cc7d8b455-4zmj7\" (UID: \"d1262ef4-ec58-4db3-a66e-be826421d514\") " pod="openstack/neutron-cc7d8b455-4zmj7" Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.004319 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4c7r\" (UniqueName: \"kubernetes.io/projected/d1262ef4-ec58-4db3-a66e-be826421d514-kube-api-access-n4c7r\") pod \"neutron-cc7d8b455-4zmj7\" (UID: \"d1262ef4-ec58-4db3-a66e-be826421d514\") " pod="openstack/neutron-cc7d8b455-4zmj7" Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.004384 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1262ef4-ec58-4db3-a66e-be826421d514-public-tls-certs\") pod \"neutron-cc7d8b455-4zmj7\" (UID: \"d1262ef4-ec58-4db3-a66e-be826421d514\") " pod="openstack/neutron-cc7d8b455-4zmj7" Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.107464 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1262ef4-ec58-4db3-a66e-be826421d514-public-tls-certs\") pod \"neutron-cc7d8b455-4zmj7\" (UID: \"d1262ef4-ec58-4db3-a66e-be826421d514\") " pod="openstack/neutron-cc7d8b455-4zmj7" Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.107629 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/d1262ef4-ec58-4db3-a66e-be826421d514-config\") pod \"neutron-cc7d8b455-4zmj7\" (UID: \"d1262ef4-ec58-4db3-a66e-be826421d514\") " pod="openstack/neutron-cc7d8b455-4zmj7" Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.107694 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1262ef4-ec58-4db3-a66e-be826421d514-ovndb-tls-certs\") pod \"neutron-cc7d8b455-4zmj7\" (UID: \"d1262ef4-ec58-4db3-a66e-be826421d514\") " pod="openstack/neutron-cc7d8b455-4zmj7" Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.107767 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1262ef4-ec58-4db3-a66e-be826421d514-internal-tls-certs\") pod \"neutron-cc7d8b455-4zmj7\" (UID: \"d1262ef4-ec58-4db3-a66e-be826421d514\") " pod="openstack/neutron-cc7d8b455-4zmj7" Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.107824 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d1262ef4-ec58-4db3-a66e-be826421d514-httpd-config\") pod \"neutron-cc7d8b455-4zmj7\" (UID: \"d1262ef4-ec58-4db3-a66e-be826421d514\") " pod="openstack/neutron-cc7d8b455-4zmj7" Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.107841 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1262ef4-ec58-4db3-a66e-be826421d514-combined-ca-bundle\") pod \"neutron-cc7d8b455-4zmj7\" (UID: \"d1262ef4-ec58-4db3-a66e-be826421d514\") " pod="openstack/neutron-cc7d8b455-4zmj7" Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.107898 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4c7r\" (UniqueName: \"kubernetes.io/projected/d1262ef4-ec58-4db3-a66e-be826421d514-kube-api-access-n4c7r\") pod \"neutron-cc7d8b455-4zmj7\" (UID: \"d1262ef4-ec58-4db3-a66e-be826421d514\") " pod="openstack/neutron-cc7d8b455-4zmj7" Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.120782 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1262ef4-ec58-4db3-a66e-be826421d514-ovndb-tls-certs\") pod \"neutron-cc7d8b455-4zmj7\" (UID: \"d1262ef4-ec58-4db3-a66e-be826421d514\") " pod="openstack/neutron-cc7d8b455-4zmj7" Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.120798 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1262ef4-ec58-4db3-a66e-be826421d514-internal-tls-certs\") pod \"neutron-cc7d8b455-4zmj7\" (UID: \"d1262ef4-ec58-4db3-a66e-be826421d514\") " pod="openstack/neutron-cc7d8b455-4zmj7" Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.120887 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1262ef4-ec58-4db3-a66e-be826421d514-combined-ca-bundle\") pod \"neutron-cc7d8b455-4zmj7\" (UID: \"d1262ef4-ec58-4db3-a66e-be826421d514\") " pod="openstack/neutron-cc7d8b455-4zmj7" Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.124712 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d1262ef4-ec58-4db3-a66e-be826421d514-httpd-config\") pod \"neutron-cc7d8b455-4zmj7\" (UID: 
\"d1262ef4-ec58-4db3-a66e-be826421d514\") " pod="openstack/neutron-cc7d8b455-4zmj7" Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.125476 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d1262ef4-ec58-4db3-a66e-be826421d514-config\") pod \"neutron-cc7d8b455-4zmj7\" (UID: \"d1262ef4-ec58-4db3-a66e-be826421d514\") " pod="openstack/neutron-cc7d8b455-4zmj7" Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.126028 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1262ef4-ec58-4db3-a66e-be826421d514-public-tls-certs\") pod \"neutron-cc7d8b455-4zmj7\" (UID: \"d1262ef4-ec58-4db3-a66e-be826421d514\") " pod="openstack/neutron-cc7d8b455-4zmj7" Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.131040 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4c7r\" (UniqueName: \"kubernetes.io/projected/d1262ef4-ec58-4db3-a66e-be826421d514-kube-api-access-n4c7r\") pod \"neutron-cc7d8b455-4zmj7\" (UID: \"d1262ef4-ec58-4db3-a66e-be826421d514\") " pod="openstack/neutron-cc7d8b455-4zmj7" Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.258685 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cc7d8b455-4zmj7" Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.276745 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9f0dccc-8d65-4aa7-81c9-548907df8af4" path="/var/lib/kubelet/pods/b9f0dccc-8d65-4aa7-81c9-548907df8af4/volumes" Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.494976 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd3d398c-5a9f-4835-9b6a-6700097e85ed","Type":"ContainerStarted","Data":"410d3fa387ba52fc900df14a4ccefea9f4c22babba4e0a3efb0d6b88d925adb6"} Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.516370 4830 generic.go:334] "Generic (PLEG): container finished" podID="74254e68-cbf8-446e-a2d8-768185ec778f" containerID="4c6decc22e93c41d7227bdefca13848b3fc35f5adc6c7d1553fa05be847967dc" exitCode=0 Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.516678 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cd8b566d4-4q75x" event={"ID":"74254e68-cbf8-446e-a2d8-768185ec778f","Type":"ContainerDied","Data":"4c6decc22e93c41d7227bdefca13848b3fc35f5adc6c7d1553fa05be847967dc"} Jan 31 09:25:34 crc kubenswrapper[4830]: I0131 09:25:34.987680 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cc7d8b455-4zmj7"] Jan 31 09:25:35 crc kubenswrapper[4830]: I0131 09:25:35.595669 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd3d398c-5a9f-4835-9b6a-6700097e85ed","Type":"ContainerStarted","Data":"00d9abf46523e252c342902e9571685e6008daa60278da5c785f45f9d550fc4b"} Jan 31 09:25:35 crc kubenswrapper[4830]: I0131 09:25:35.599480 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc7d8b455-4zmj7" event={"ID":"d1262ef4-ec58-4db3-a66e-be826421d514","Type":"ContainerStarted","Data":"24ebfed724e7b875e398d5eefaf1852c85952ede38cd23cead4faa30551333c9"} Jan 31 09:25:36 crc kubenswrapper[4830]: I0131 09:25:36.010029 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 31 09:25:36 crc kubenswrapper[4830]: I0131 09:25:36.112120 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-scheduler-0"] Jan 31 09:25:36 crc kubenswrapper[4830]: I0131 09:25:36.150185 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6cd8b566d4-4q75x" podUID="74254e68-cbf8-446e-a2d8-768185ec778f" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.193:9696/\": dial tcp 10.217.0.193:9696: connect: connection refused" Jan 31 09:25:36 crc kubenswrapper[4830]: I0131 09:25:36.617786 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc7d8b455-4zmj7" event={"ID":"d1262ef4-ec58-4db3-a66e-be826421d514","Type":"ContainerStarted","Data":"b8732c3b47880e9dca5adfc5cea238750ffd7c679245e233bd9c4a3f2f12ae35"} Jan 31 09:25:36 crc kubenswrapper[4830]: I0131 09:25:36.617925 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc7d8b455-4zmj7" event={"ID":"d1262ef4-ec58-4db3-a66e-be826421d514","Type":"ContainerStarted","Data":"017cffe77ca19c7209af3089b021e0f2041f95a6b5681d5a0e6c8889a55f39e0"} Jan 31 09:25:36 crc kubenswrapper[4830]: I0131 09:25:36.618075 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-cc7d8b455-4zmj7" Jan 31 09:25:36 crc kubenswrapper[4830]: I0131 09:25:36.619808 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="86af5a4c-fe49-4f01-a832-71260d0ad1e4" containerName="cinder-scheduler" containerID="cri-o://6820a2f81e5e911a2a74220b4d1198c828a0b10ab55e9c2458125c863bf2ebac" gracePeriod=30 Jan 31 09:25:36 crc kubenswrapper[4830]: I0131 09:25:36.619857 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="86af5a4c-fe49-4f01-a832-71260d0ad1e4" containerName="probe" containerID="cri-o://e4a1932dd9e42c2d262f95fa30c91af465b1b416b646dbcc49fc00f9db6d10f8" gracePeriod=30 Jan 31 09:25:36 crc kubenswrapper[4830]: I0131 09:25:36.663494 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-cc7d8b455-4zmj7" podStartSLOduration=3.663463383 podStartE2EDuration="3.663463383s" podCreationTimestamp="2026-01-31 09:25:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:25:36.648965751 +0000 UTC m=+1481.142328183" watchObservedRunningTime="2026-01-31 09:25:36.663463383 +0000 UTC m=+1481.156825825" Jan 31 09:25:37 crc kubenswrapper[4830]: E0131 09:25:37.381707 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1fe9f02_72ff_45af_8728_91cecff0d1ac.slice/crio-conmon-812c2aae3528acf16bf86636dffabbac6b879f0d6db1779b988c28680f5f9e21.scope\": RecentStats: unable to find data in memory cache]" Jan 31 09:25:37 crc kubenswrapper[4830]: I0131 09:25:37.721067 4830 generic.go:334] "Generic (PLEG): container finished" podID="95712f82-07ef-4b0f-b1c8-af74932c2c4c" containerID="4d944a21c183acfd35cdc755af8107e0e14d955b90c01cde24d1da3b845122be" exitCode=137 Jan 31 09:25:37 crc kubenswrapper[4830]: I0131 09:25:37.721639 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx" event={"ID":"95712f82-07ef-4b0f-b1c8-af74932c2c4c","Type":"ContainerDied","Data":"4d944a21c183acfd35cdc755af8107e0e14d955b90c01cde24d1da3b845122be"} Jan 31 09:25:37 crc kubenswrapper[4830]: I0131 09:25:37.748326 4830 generic.go:334] "Generic 
(PLEG): container finished" podID="e1fe9f02-72ff-45af-8728-91cecff0d1ac" containerID="812c2aae3528acf16bf86636dffabbac6b879f0d6db1779b988c28680f5f9e21" exitCode=137 Jan 31 09:25:37 crc kubenswrapper[4830]: I0131 09:25:37.748415 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9d5f95fb7-h7vp9" event={"ID":"e1fe9f02-72ff-45af-8728-91cecff0d1ac","Type":"ContainerDied","Data":"812c2aae3528acf16bf86636dffabbac6b879f0d6db1779b988c28680f5f9e21"} Jan 31 09:25:37 crc kubenswrapper[4830]: I0131 09:25:37.769788 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd3d398c-5a9f-4835-9b6a-6700097e85ed","Type":"ContainerStarted","Data":"432786d33d3771aab5e5d32e3cafd9b8a281299a22963e8a340e9dc5bdc1494a"} Jan 31 09:25:37 crc kubenswrapper[4830]: I0131 09:25:37.770919 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 09:25:37 crc kubenswrapper[4830]: I0131 09:25:37.817671 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.533472337 podStartE2EDuration="8.817651127s" podCreationTimestamp="2026-01-31 09:25:29 +0000 UTC" firstStartedPulling="2026-01-31 09:25:30.684524841 +0000 UTC m=+1475.177887283" lastFinishedPulling="2026-01-31 09:25:36.968703631 +0000 UTC m=+1481.462066073" observedRunningTime="2026-01-31 09:25:37.803216157 +0000 UTC m=+1482.296578609" watchObservedRunningTime="2026-01-31 09:25:37.817651127 +0000 UTC m=+1482.311013569" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.000160 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-9d5f95fb7-h7vp9" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.138903 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1fe9f02-72ff-45af-8728-91cecff0d1ac-logs\") pod \"e1fe9f02-72ff-45af-8728-91cecff0d1ac\" (UID: \"e1fe9f02-72ff-45af-8728-91cecff0d1ac\") " Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.139919 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1fe9f02-72ff-45af-8728-91cecff0d1ac-config-data\") pod \"e1fe9f02-72ff-45af-8728-91cecff0d1ac\" (UID: \"e1fe9f02-72ff-45af-8728-91cecff0d1ac\") " Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.140260 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg8gm\" (UniqueName: \"kubernetes.io/projected/e1fe9f02-72ff-45af-8728-91cecff0d1ac-kube-api-access-hg8gm\") pod \"e1fe9f02-72ff-45af-8728-91cecff0d1ac\" (UID: \"e1fe9f02-72ff-45af-8728-91cecff0d1ac\") " Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.142000 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1fe9f02-72ff-45af-8728-91cecff0d1ac-logs" (OuterVolumeSpecName: "logs") pod "e1fe9f02-72ff-45af-8728-91cecff0d1ac" (UID: "e1fe9f02-72ff-45af-8728-91cecff0d1ac"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.146833 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1fe9f02-72ff-45af-8728-91cecff0d1ac-config-data-custom\") pod \"e1fe9f02-72ff-45af-8728-91cecff0d1ac\" (UID: \"e1fe9f02-72ff-45af-8728-91cecff0d1ac\") " Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.146961 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1fe9f02-72ff-45af-8728-91cecff0d1ac-combined-ca-bundle\") pod \"e1fe9f02-72ff-45af-8728-91cecff0d1ac\" (UID: \"e1fe9f02-72ff-45af-8728-91cecff0d1ac\") " Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.150487 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1fe9f02-72ff-45af-8728-91cecff0d1ac-logs\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.159692 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1fe9f02-72ff-45af-8728-91cecff0d1ac-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e1fe9f02-72ff-45af-8728-91cecff0d1ac" (UID: "e1fe9f02-72ff-45af-8728-91cecff0d1ac"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.178279 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1fe9f02-72ff-45af-8728-91cecff0d1ac-kube-api-access-hg8gm" (OuterVolumeSpecName: "kube-api-access-hg8gm") pod "e1fe9f02-72ff-45af-8728-91cecff0d1ac" (UID: "e1fe9f02-72ff-45af-8728-91cecff0d1ac"). InnerVolumeSpecName "kube-api-access-hg8gm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.272830 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg8gm\" (UniqueName: \"kubernetes.io/projected/e1fe9f02-72ff-45af-8728-91cecff0d1ac-kube-api-access-hg8gm\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.273452 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1fe9f02-72ff-45af-8728-91cecff0d1ac-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.340759 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.387511 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95712f82-07ef-4b0f-b1c8-af74932c2c4c-config-data\") pod \"95712f82-07ef-4b0f-b1c8-af74932c2c4c\" (UID: \"95712f82-07ef-4b0f-b1c8-af74932c2c4c\") " Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.387612 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95712f82-07ef-4b0f-b1c8-af74932c2c4c-combined-ca-bundle\") pod \"95712f82-07ef-4b0f-b1c8-af74932c2c4c\" (UID: \"95712f82-07ef-4b0f-b1c8-af74932c2c4c\") " Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.387833 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95712f82-07ef-4b0f-b1c8-af74932c2c4c-config-data-custom\") pod \"95712f82-07ef-4b0f-b1c8-af74932c2c4c\" (UID: \"95712f82-07ef-4b0f-b1c8-af74932c2c4c\") " Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.387936 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95712f82-07ef-4b0f-b1c8-af74932c2c4c-logs\") pod \"95712f82-07ef-4b0f-b1c8-af74932c2c4c\" (UID: \"95712f82-07ef-4b0f-b1c8-af74932c2c4c\") " Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.388063 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llck6\" (UniqueName: \"kubernetes.io/projected/95712f82-07ef-4b0f-b1c8-af74932c2c4c-kube-api-access-llck6\") pod \"95712f82-07ef-4b0f-b1c8-af74932c2c4c\" (UID: \"95712f82-07ef-4b0f-b1c8-af74932c2c4c\") " Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.389462 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95712f82-07ef-4b0f-b1c8-af74932c2c4c-logs" (OuterVolumeSpecName: "logs") pod "95712f82-07ef-4b0f-b1c8-af74932c2c4c" (UID: "95712f82-07ef-4b0f-b1c8-af74932c2c4c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.393805 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95712f82-07ef-4b0f-b1c8-af74932c2c4c-logs\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.408334 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-666cdcb7b8-d25gt"] Jan 31 09:25:38 crc kubenswrapper[4830]: E0131 09:25:38.408987 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1fe9f02-72ff-45af-8728-91cecff0d1ac" containerName="barbican-worker-log" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.409006 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1fe9f02-72ff-45af-8728-91cecff0d1ac" containerName="barbican-worker-log" Jan 31 09:25:38 crc kubenswrapper[4830]: E0131 09:25:38.409043 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95712f82-07ef-4b0f-b1c8-af74932c2c4c" containerName="barbican-keystone-listener" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.409051 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="95712f82-07ef-4b0f-b1c8-af74932c2c4c" containerName="barbican-keystone-listener" Jan 31 09:25:38 crc kubenswrapper[4830]: E0131 09:25:38.409064 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95712f82-07ef-4b0f-b1c8-af74932c2c4c" containerName="barbican-keystone-listener-log" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.409071 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="95712f82-07ef-4b0f-b1c8-af74932c2c4c" containerName="barbican-keystone-listener-log" Jan 31 09:25:38 crc kubenswrapper[4830]: E0131 09:25:38.409099 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1fe9f02-72ff-45af-8728-91cecff0d1ac" containerName="barbican-worker" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.409104 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1fe9f02-72ff-45af-8728-91cecff0d1ac" containerName="barbican-worker" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.409331 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="95712f82-07ef-4b0f-b1c8-af74932c2c4c" containerName="barbican-keystone-listener-log" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.409344 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1fe9f02-72ff-45af-8728-91cecff0d1ac" containerName="barbican-worker-log" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.409501 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="95712f82-07ef-4b0f-b1c8-af74932c2c4c" containerName="barbican-keystone-listener" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.409519 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1fe9f02-72ff-45af-8728-91cecff0d1ac" containerName="barbican-worker" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.410496 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-666cdcb7b8-d25gt"] Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.410591 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-666cdcb7b8-d25gt" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.427427 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95712f82-07ef-4b0f-b1c8-af74932c2c4c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "95712f82-07ef-4b0f-b1c8-af74932c2c4c" (UID: "95712f82-07ef-4b0f-b1c8-af74932c2c4c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.434078 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-scjhm" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.434372 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.448890 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95712f82-07ef-4b0f-b1c8-af74932c2c4c-kube-api-access-llck6" (OuterVolumeSpecName: "kube-api-access-llck6") pod "95712f82-07ef-4b0f-b1c8-af74932c2c4c" (UID: "95712f82-07ef-4b0f-b1c8-af74932c2c4c"). InnerVolumeSpecName "kube-api-access-llck6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.449523 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.508195 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5klr5\" (UniqueName: \"kubernetes.io/projected/07e6233b-8dfa-42db-8e5f-62dbe5372610-kube-api-access-5klr5\") pod \"heat-engine-666cdcb7b8-d25gt\" (UID: \"07e6233b-8dfa-42db-8e5f-62dbe5372610\") " pod="openstack/heat-engine-666cdcb7b8-d25gt" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.508320 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e6233b-8dfa-42db-8e5f-62dbe5372610-config-data\") pod \"heat-engine-666cdcb7b8-d25gt\" (UID: \"07e6233b-8dfa-42db-8e5f-62dbe5372610\") " pod="openstack/heat-engine-666cdcb7b8-d25gt" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.508412 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e6233b-8dfa-42db-8e5f-62dbe5372610-combined-ca-bundle\") pod \"heat-engine-666cdcb7b8-d25gt\" (UID: \"07e6233b-8dfa-42db-8e5f-62dbe5372610\") " pod="openstack/heat-engine-666cdcb7b8-d25gt" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.508443 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07e6233b-8dfa-42db-8e5f-62dbe5372610-config-data-custom\") pod \"heat-engine-666cdcb7b8-d25gt\" (UID: \"07e6233b-8dfa-42db-8e5f-62dbe5372610\") " pod="openstack/heat-engine-666cdcb7b8-d25gt" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.508946 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95712f82-07ef-4b0f-b1c8-af74932c2c4c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.508994 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llck6\" 
(UniqueName: \"kubernetes.io/projected/95712f82-07ef-4b0f-b1c8-af74932c2c4c-kube-api-access-llck6\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.562830 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1fe9f02-72ff-45af-8728-91cecff0d1ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1fe9f02-72ff-45af-8728-91cecff0d1ac" (UID: "e1fe9f02-72ff-45af-8728-91cecff0d1ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.598889 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95712f82-07ef-4b0f-b1c8-af74932c2c4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95712f82-07ef-4b0f-b1c8-af74932c2c4c" (UID: "95712f82-07ef-4b0f-b1c8-af74932c2c4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.619002 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1fe9f02-72ff-45af-8728-91cecff0d1ac-config-data" (OuterVolumeSpecName: "config-data") pod "e1fe9f02-72ff-45af-8728-91cecff0d1ac" (UID: "e1fe9f02-72ff-45af-8728-91cecff0d1ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.621981 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-pjvlb"] Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.625275 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.628984 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e6233b-8dfa-42db-8e5f-62dbe5372610-config-data\") pod \"heat-engine-666cdcb7b8-d25gt\" (UID: \"07e6233b-8dfa-42db-8e5f-62dbe5372610\") " pod="openstack/heat-engine-666cdcb7b8-d25gt" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.629998 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e6233b-8dfa-42db-8e5f-62dbe5372610-combined-ca-bundle\") pod \"heat-engine-666cdcb7b8-d25gt\" (UID: \"07e6233b-8dfa-42db-8e5f-62dbe5372610\") " pod="openstack/heat-engine-666cdcb7b8-d25gt" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.630090 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07e6233b-8dfa-42db-8e5f-62dbe5372610-config-data-custom\") pod \"heat-engine-666cdcb7b8-d25gt\" (UID: \"07e6233b-8dfa-42db-8e5f-62dbe5372610\") " pod="openstack/heat-engine-666cdcb7b8-d25gt" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.630323 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5klr5\" (UniqueName: \"kubernetes.io/projected/07e6233b-8dfa-42db-8e5f-62dbe5372610-kube-api-access-5klr5\") pod \"heat-engine-666cdcb7b8-d25gt\" (UID: \"07e6233b-8dfa-42db-8e5f-62dbe5372610\") " pod="openstack/heat-engine-666cdcb7b8-d25gt" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.630419 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e1fe9f02-72ff-45af-8728-91cecff0d1ac-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.630432 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95712f82-07ef-4b0f-b1c8-af74932c2c4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.630443 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1fe9f02-72ff-45af-8728-91cecff0d1ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.635887 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e6233b-8dfa-42db-8e5f-62dbe5372610-combined-ca-bundle\") pod \"heat-engine-666cdcb7b8-d25gt\" (UID: \"07e6233b-8dfa-42db-8e5f-62dbe5372610\") " pod="openstack/heat-engine-666cdcb7b8-d25gt" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.638936 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07e6233b-8dfa-42db-8e5f-62dbe5372610-config-data-custom\") pod \"heat-engine-666cdcb7b8-d25gt\" (UID: \"07e6233b-8dfa-42db-8e5f-62dbe5372610\") " pod="openstack/heat-engine-666cdcb7b8-d25gt" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.656703 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e6233b-8dfa-42db-8e5f-62dbe5372610-config-data\") pod \"heat-engine-666cdcb7b8-d25gt\" (UID: \"07e6233b-8dfa-42db-8e5f-62dbe5372610\") " pod="openstack/heat-engine-666cdcb7b8-d25gt" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.720819 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-pjvlb"] Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.721869 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5klr5\" (UniqueName: \"kubernetes.io/projected/07e6233b-8dfa-42db-8e5f-62dbe5372610-kube-api-access-5klr5\") pod \"heat-engine-666cdcb7b8-d25gt\" (UID: \"07e6233b-8dfa-42db-8e5f-62dbe5372610\") " pod="openstack/heat-engine-666cdcb7b8-d25gt" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.738473 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-ovsdbserver-nb\") pod \"dnsmasq-dns-f6bc4c6c9-pjvlb\" (UID: \"653ad6ae-7808-49a1-8f07-484c37dfeb66\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.738574 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-ovsdbserver-sb\") pod \"dnsmasq-dns-f6bc4c6c9-pjvlb\" (UID: \"653ad6ae-7808-49a1-8f07-484c37dfeb66\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.738630 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-dns-svc\") pod \"dnsmasq-dns-f6bc4c6c9-pjvlb\" (UID: \"653ad6ae-7808-49a1-8f07-484c37dfeb66\") " 
pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.738692 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqhfv\" (UniqueName: \"kubernetes.io/projected/653ad6ae-7808-49a1-8f07-484c37dfeb66-kube-api-access-lqhfv\") pod \"dnsmasq-dns-f6bc4c6c9-pjvlb\" (UID: \"653ad6ae-7808-49a1-8f07-484c37dfeb66\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.739250 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-dns-swift-storage-0\") pod \"dnsmasq-dns-f6bc4c6c9-pjvlb\" (UID: \"653ad6ae-7808-49a1-8f07-484c37dfeb66\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.739362 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-config\") pod \"dnsmasq-dns-f6bc4c6c9-pjvlb\" (UID: \"653ad6ae-7808-49a1-8f07-484c37dfeb66\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.771797 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5db4bc48b8-mphcw"] Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.780706 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5db4bc48b8-mphcw" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.797650 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5db4bc48b8-mphcw"] Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.821313 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.821566 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-666cdcb7b8-d25gt" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.823365 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95712f82-07ef-4b0f-b1c8-af74932c2c4c-config-data" (OuterVolumeSpecName: "config-data") pod "95712f82-07ef-4b0f-b1c8-af74932c2c4c" (UID: "95712f82-07ef-4b0f-b1c8-af74932c2c4c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.847806 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/507f4c57-9369-4487-a575-370014e22eeb-combined-ca-bundle\") pod \"heat-cfnapi-5db4bc48b8-mphcw\" (UID: \"507f4c57-9369-4487-a575-370014e22eeb\") " pod="openstack/heat-cfnapi-5db4bc48b8-mphcw" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.848350 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-config\") pod \"dnsmasq-dns-f6bc4c6c9-pjvlb\" (UID: \"653ad6ae-7808-49a1-8f07-484c37dfeb66\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.848682 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/507f4c57-9369-4487-a575-370014e22eeb-config-data-custom\") pod \"heat-cfnapi-5db4bc48b8-mphcw\" (UID: \"507f4c57-9369-4487-a575-370014e22eeb\") " pod="openstack/heat-cfnapi-5db4bc48b8-mphcw" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.848873 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/507f4c57-9369-4487-a575-370014e22eeb-config-data\") pod \"heat-cfnapi-5db4bc48b8-mphcw\" (UID: \"507f4c57-9369-4487-a575-370014e22eeb\") " pod="openstack/heat-cfnapi-5db4bc48b8-mphcw" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.849037 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-ovsdbserver-nb\") pod \"dnsmasq-dns-f6bc4c6c9-pjvlb\" (UID: \"653ad6ae-7808-49a1-8f07-484c37dfeb66\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.849493 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-ovsdbserver-sb\") pod \"dnsmasq-dns-f6bc4c6c9-pjvlb\" (UID: \"653ad6ae-7808-49a1-8f07-484c37dfeb66\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.849703 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-dns-svc\") pod \"dnsmasq-dns-f6bc4c6c9-pjvlb\" (UID: \"653ad6ae-7808-49a1-8f07-484c37dfeb66\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.849881 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-config\") pod \"dnsmasq-dns-f6bc4c6c9-pjvlb\" (UID: \"653ad6ae-7808-49a1-8f07-484c37dfeb66\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.850127 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-ovsdbserver-nb\") pod \"dnsmasq-dns-f6bc4c6c9-pjvlb\" (UID: \"653ad6ae-7808-49a1-8f07-484c37dfeb66\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" Jan 31 09:25:38 
crc kubenswrapper[4830]: I0131 09:25:38.852795 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-ovsdbserver-sb\") pod \"dnsmasq-dns-f6bc4c6c9-pjvlb\" (UID: \"653ad6ae-7808-49a1-8f07-484c37dfeb66\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.853600 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-dns-svc\") pod \"dnsmasq-dns-f6bc4c6c9-pjvlb\" (UID: \"653ad6ae-7808-49a1-8f07-484c37dfeb66\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.854224 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6gqn\" (UniqueName: \"kubernetes.io/projected/507f4c57-9369-4487-a575-370014e22eeb-kube-api-access-r6gqn\") pod \"heat-cfnapi-5db4bc48b8-mphcw\" (UID: \"507f4c57-9369-4487-a575-370014e22eeb\") " pod="openstack/heat-cfnapi-5db4bc48b8-mphcw" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.854391 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqhfv\" (UniqueName: \"kubernetes.io/projected/653ad6ae-7808-49a1-8f07-484c37dfeb66-kube-api-access-lqhfv\") pod \"dnsmasq-dns-f6bc4c6c9-pjvlb\" (UID: \"653ad6ae-7808-49a1-8f07-484c37dfeb66\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.854482 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-dns-swift-storage-0\") pod \"dnsmasq-dns-f6bc4c6c9-pjvlb\" (UID: \"653ad6ae-7808-49a1-8f07-484c37dfeb66\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.855097 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95712f82-07ef-4b0f-b1c8-af74932c2c4c-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.856036 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-dns-swift-storage-0\") pod \"dnsmasq-dns-f6bc4c6c9-pjvlb\" (UID: \"653ad6ae-7808-49a1-8f07-484c37dfeb66\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.911311 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9d5f95fb7-h7vp9" event={"ID":"e1fe9f02-72ff-45af-8728-91cecff0d1ac","Type":"ContainerDied","Data":"e566cefeb57470cec6a2eff1bb891beb339e6e053f250d03ccca28789637710a"} Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.911381 4830 scope.go:117] "RemoveContainer" containerID="812c2aae3528acf16bf86636dffabbac6b879f0d6db1779b988c28680f5f9e21" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.911537 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-9d5f95fb7-h7vp9" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.940358 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqhfv\" (UniqueName: \"kubernetes.io/projected/653ad6ae-7808-49a1-8f07-484c37dfeb66-kube-api-access-lqhfv\") pod \"dnsmasq-dns-f6bc4c6c9-pjvlb\" (UID: \"653ad6ae-7808-49a1-8f07-484c37dfeb66\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.958463 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6gqn\" (UniqueName: \"kubernetes.io/projected/507f4c57-9369-4487-a575-370014e22eeb-kube-api-access-r6gqn\") pod \"heat-cfnapi-5db4bc48b8-mphcw\" (UID: \"507f4c57-9369-4487-a575-370014e22eeb\") " pod="openstack/heat-cfnapi-5db4bc48b8-mphcw" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.958595 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/507f4c57-9369-4487-a575-370014e22eeb-combined-ca-bundle\") pod \"heat-cfnapi-5db4bc48b8-mphcw\" (UID: \"507f4c57-9369-4487-a575-370014e22eeb\") " pod="openstack/heat-cfnapi-5db4bc48b8-mphcw" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.958745 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/507f4c57-9369-4487-a575-370014e22eeb-config-data-custom\") pod \"heat-cfnapi-5db4bc48b8-mphcw\" (UID: \"507f4c57-9369-4487-a575-370014e22eeb\") " pod="openstack/heat-cfnapi-5db4bc48b8-mphcw" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.958847 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/507f4c57-9369-4487-a575-370014e22eeb-config-data\") pod \"heat-cfnapi-5db4bc48b8-mphcw\" (UID: \"507f4c57-9369-4487-a575-370014e22eeb\") " pod="openstack/heat-cfnapi-5db4bc48b8-mphcw" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.969487 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/507f4c57-9369-4487-a575-370014e22eeb-config-data\") pod \"heat-cfnapi-5db4bc48b8-mphcw\" (UID: \"507f4c57-9369-4487-a575-370014e22eeb\") " pod="openstack/heat-cfnapi-5db4bc48b8-mphcw" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.973830 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/507f4c57-9369-4487-a575-370014e22eeb-combined-ca-bundle\") pod \"heat-cfnapi-5db4bc48b8-mphcw\" (UID: \"507f4c57-9369-4487-a575-370014e22eeb\") " pod="openstack/heat-cfnapi-5db4bc48b8-mphcw" Jan 31 09:25:38 crc kubenswrapper[4830]: I0131 09:25:38.984054 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/507f4c57-9369-4487-a575-370014e22eeb-config-data-custom\") pod \"heat-cfnapi-5db4bc48b8-mphcw\" (UID: \"507f4c57-9369-4487-a575-370014e22eeb\") " pod="openstack/heat-cfnapi-5db4bc48b8-mphcw" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.021553 4830 generic.go:334] "Generic (PLEG): container finished" podID="86af5a4c-fe49-4f01-a832-71260d0ad1e4" containerID="e4a1932dd9e42c2d262f95fa30c91af465b1b416b646dbcc49fc00f9db6d10f8" exitCode=0 Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.021602 4830 generic.go:334] "Generic (PLEG): container 
finished" podID="86af5a4c-fe49-4f01-a832-71260d0ad1e4" containerID="6820a2f81e5e911a2a74220b4d1198c828a0b10ab55e9c2458125c863bf2ebac" exitCode=0 Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.021976 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-59478c766f-tgwgd"] Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.022692 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6gqn\" (UniqueName: \"kubernetes.io/projected/507f4c57-9369-4487-a575-370014e22eeb-kube-api-access-r6gqn\") pod \"heat-cfnapi-5db4bc48b8-mphcw\" (UID: \"507f4c57-9369-4487-a575-370014e22eeb\") " pod="openstack/heat-cfnapi-5db4bc48b8-mphcw" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.024311 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86af5a4c-fe49-4f01-a832-71260d0ad1e4","Type":"ContainerDied","Data":"e4a1932dd9e42c2d262f95fa30c91af465b1b416b646dbcc49fc00f9db6d10f8"} Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.028982 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86af5a4c-fe49-4f01-a832-71260d0ad1e4","Type":"ContainerDied","Data":"6820a2f81e5e911a2a74220b4d1198c828a0b10ab55e9c2458125c863bf2ebac"} Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.025009 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.045447 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-59478c766f-tgwgd" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.067058 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.088669 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-59478c766f-tgwgd"] Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.091809 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx" event={"ID":"95712f82-07ef-4b0f-b1c8-af74932c2c4c","Type":"ContainerDied","Data":"5feb49ec710e5078e396b397234e0693c301f749ac98c78d4f9c94404cfc3cf2"} Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.092282 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-795c4b4b5d-76dwx" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.188563 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5db4bc48b8-mphcw" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.192741 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d4zb\" (UniqueName: \"kubernetes.io/projected/e7f604a2-4cc7-4619-846c-51cb5cddffda-kube-api-access-2d4zb\") pod \"heat-api-59478c766f-tgwgd\" (UID: \"e7f604a2-4cc7-4619-846c-51cb5cddffda\") " pod="openstack/heat-api-59478c766f-tgwgd" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.193046 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7f604a2-4cc7-4619-846c-51cb5cddffda-config-data\") pod \"heat-api-59478c766f-tgwgd\" (UID: \"e7f604a2-4cc7-4619-846c-51cb5cddffda\") " pod="openstack/heat-api-59478c766f-tgwgd" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.193160 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7f604a2-4cc7-4619-846c-51cb5cddffda-combined-ca-bundle\") pod \"heat-api-59478c766f-tgwgd\" (UID: \"e7f604a2-4cc7-4619-846c-51cb5cddffda\") " pod="openstack/heat-api-59478c766f-tgwgd" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.193206 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7f604a2-4cc7-4619-846c-51cb5cddffda-config-data-custom\") pod \"heat-api-59478c766f-tgwgd\" (UID: \"e7f604a2-4cc7-4619-846c-51cb5cddffda\") " pod="openstack/heat-api-59478c766f-tgwgd" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.301514 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d4zb\" (UniqueName: \"kubernetes.io/projected/e7f604a2-4cc7-4619-846c-51cb5cddffda-kube-api-access-2d4zb\") pod \"heat-api-59478c766f-tgwgd\" (UID: \"e7f604a2-4cc7-4619-846c-51cb5cddffda\") " pod="openstack/heat-api-59478c766f-tgwgd" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.307086 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7f604a2-4cc7-4619-846c-51cb5cddffda-config-data\") pod \"heat-api-59478c766f-tgwgd\" (UID: \"e7f604a2-4cc7-4619-846c-51cb5cddffda\") " pod="openstack/heat-api-59478c766f-tgwgd" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.307220 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7f604a2-4cc7-4619-846c-51cb5cddffda-config-data-custom\") pod \"heat-api-59478c766f-tgwgd\" (UID: \"e7f604a2-4cc7-4619-846c-51cb5cddffda\") " pod="openstack/heat-api-59478c766f-tgwgd" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.307238 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7f604a2-4cc7-4619-846c-51cb5cddffda-combined-ca-bundle\") pod \"heat-api-59478c766f-tgwgd\" (UID: \"e7f604a2-4cc7-4619-846c-51cb5cddffda\") " pod="openstack/heat-api-59478c766f-tgwgd" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.314922 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7f604a2-4cc7-4619-846c-51cb5cddffda-config-data\") pod \"heat-api-59478c766f-tgwgd\" (UID: 
\"e7f604a2-4cc7-4619-846c-51cb5cddffda\") " pod="openstack/heat-api-59478c766f-tgwgd" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.320812 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7f604a2-4cc7-4619-846c-51cb5cddffda-combined-ca-bundle\") pod \"heat-api-59478c766f-tgwgd\" (UID: \"e7f604a2-4cc7-4619-846c-51cb5cddffda\") " pod="openstack/heat-api-59478c766f-tgwgd" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.331923 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7f604a2-4cc7-4619-846c-51cb5cddffda-config-data-custom\") pod \"heat-api-59478c766f-tgwgd\" (UID: \"e7f604a2-4cc7-4619-846c-51cb5cddffda\") " pod="openstack/heat-api-59478c766f-tgwgd" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.382973 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-59d6cd4869-w2rrr" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.384678 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d4zb\" (UniqueName: \"kubernetes.io/projected/e7f604a2-4cc7-4619-846c-51cb5cddffda-kube-api-access-2d4zb\") pod \"heat-api-59478c766f-tgwgd\" (UID: \"e7f604a2-4cc7-4619-846c-51cb5cddffda\") " pod="openstack/heat-api-59478c766f-tgwgd" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.465205 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-59478c766f-tgwgd" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.503252 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-795c4b4b5d-76dwx"] Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.519322 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-795c4b4b5d-76dwx"] Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.587387 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-9d5f95fb7-h7vp9"] Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.616015 4830 scope.go:117] "RemoveContainer" containerID="389d4879fd2258b14fc830972976cb268d3d5cf196b20e4b6201d5082852e672" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.764243 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-9d5f95fb7-h7vp9"] Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.774664 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.876866 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86af5a4c-fe49-4f01-a832-71260d0ad1e4-config-data-custom\") pod \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\" (UID: \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\") " Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.876992 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86af5a4c-fe49-4f01-a832-71260d0ad1e4-config-data\") pod \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\" (UID: \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\") " Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.877140 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86af5a4c-fe49-4f01-a832-71260d0ad1e4-combined-ca-bundle\") pod \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\" (UID: \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\") " Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.877210 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86af5a4c-fe49-4f01-a832-71260d0ad1e4-etc-machine-id\") pod \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\" (UID: \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\") " Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.877261 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wvbw\" (UniqueName: \"kubernetes.io/projected/86af5a4c-fe49-4f01-a832-71260d0ad1e4-kube-api-access-5wvbw\") pod \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\" (UID: \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\") " Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.877314 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86af5a4c-fe49-4f01-a832-71260d0ad1e4-scripts\") pod \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\" (UID: \"86af5a4c-fe49-4f01-a832-71260d0ad1e4\") " Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.880227 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86af5a4c-fe49-4f01-a832-71260d0ad1e4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "86af5a4c-fe49-4f01-a832-71260d0ad1e4" (UID: "86af5a4c-fe49-4f01-a832-71260d0ad1e4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.896139 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86af5a4c-fe49-4f01-a832-71260d0ad1e4-kube-api-access-5wvbw" (OuterVolumeSpecName: "kube-api-access-5wvbw") pod "86af5a4c-fe49-4f01-a832-71260d0ad1e4" (UID: "86af5a4c-fe49-4f01-a832-71260d0ad1e4"). InnerVolumeSpecName "kube-api-access-5wvbw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.898429 4830 scope.go:117] "RemoveContainer" containerID="4d944a21c183acfd35cdc755af8107e0e14d955b90c01cde24d1da3b845122be" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.899129 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86af5a4c-fe49-4f01-a832-71260d0ad1e4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "86af5a4c-fe49-4f01-a832-71260d0ad1e4" (UID: "86af5a4c-fe49-4f01-a832-71260d0ad1e4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.908489 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86af5a4c-fe49-4f01-a832-71260d0ad1e4-scripts" (OuterVolumeSpecName: "scripts") pod "86af5a4c-fe49-4f01-a832-71260d0ad1e4" (UID: "86af5a4c-fe49-4f01-a832-71260d0ad1e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.988101 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86af5a4c-fe49-4f01-a832-71260d0ad1e4-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.988138 4830 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86af5a4c-fe49-4f01-a832-71260d0ad1e4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.988148 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wvbw\" (UniqueName: \"kubernetes.io/projected/86af5a4c-fe49-4f01-a832-71260d0ad1e4-kube-api-access-5wvbw\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:39 crc kubenswrapper[4830]: I0131 09:25:39.988163 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86af5a4c-fe49-4f01-a832-71260d0ad1e4-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.057938 4830 scope.go:117] "RemoveContainer" containerID="e7d6f105a83f9fe19fde582e262b440375a23956562ae712d887136a0e0fdf65" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.135832 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86af5a4c-fe49-4f01-a832-71260d0ad1e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86af5a4c-fe49-4f01-a832-71260d0ad1e4" (UID: "86af5a4c-fe49-4f01-a832-71260d0ad1e4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.194010 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86af5a4c-fe49-4f01-a832-71260d0ad1e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.215998 4830 generic.go:334] "Generic (PLEG): container finished" podID="74254e68-cbf8-446e-a2d8-768185ec778f" containerID="9c7ed3187c5fd1fce5ed05e9c48a484b4b9883935cce4cff33ab889828b9bc46" exitCode=0 Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.216099 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cd8b566d4-4q75x" event={"ID":"74254e68-cbf8-446e-a2d8-768185ec778f","Type":"ContainerDied","Data":"9c7ed3187c5fd1fce5ed05e9c48a484b4b9883935cce4cff33ab889828b9bc46"} Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.333666 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86af5a4c-fe49-4f01-a832-71260d0ad1e4-config-data" (OuterVolumeSpecName: "config-data") pod "86af5a4c-fe49-4f01-a832-71260d0ad1e4" (UID: "86af5a4c-fe49-4f01-a832-71260d0ad1e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.336677 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.356596 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95712f82-07ef-4b0f-b1c8-af74932c2c4c" path="/var/lib/kubelet/pods/95712f82-07ef-4b0f-b1c8-af74932c2c4c/volumes" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.358316 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1fe9f02-72ff-45af-8728-91cecff0d1ac" path="/var/lib/kubelet/pods/e1fe9f02-72ff-45af-8728-91cecff0d1ac/volumes" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.361805 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86af5a4c-fe49-4f01-a832-71260d0ad1e4","Type":"ContainerDied","Data":"21ed65e1e63bfb0d88ec3a8b99cc7a6d33f1e3ebff84803a8c1b27ef6d5af67d"} Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.361881 4830 scope.go:117] "RemoveContainer" containerID="e4a1932dd9e42c2d262f95fa30c91af465b1b416b646dbcc49fc00f9db6d10f8" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.416219 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86af5a4c-fe49-4f01-a832-71260d0ad1e4-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.433902 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.457347 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.460610 4830 scope.go:117] "RemoveContainer" containerID="6820a2f81e5e911a2a74220b4d1198c828a0b10ab55e9c2458125c863bf2ebac" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.480133 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 09:25:40 crc kubenswrapper[4830]: E0131 09:25:40.481028 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86af5a4c-fe49-4f01-a832-71260d0ad1e4" 
containerName="probe" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.481067 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="86af5a4c-fe49-4f01-a832-71260d0ad1e4" containerName="probe" Jan 31 09:25:40 crc kubenswrapper[4830]: E0131 09:25:40.481137 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86af5a4c-fe49-4f01-a832-71260d0ad1e4" containerName="cinder-scheduler" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.481146 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="86af5a4c-fe49-4f01-a832-71260d0ad1e4" containerName="cinder-scheduler" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.481445 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="86af5a4c-fe49-4f01-a832-71260d0ad1e4" containerName="cinder-scheduler" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.481471 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="86af5a4c-fe49-4f01-a832-71260d0ad1e4" containerName="probe" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.491072 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.498401 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.515152 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.620848 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c45f6608-4c27-4322-b60a-3362294e1ab8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c45f6608-4c27-4322-b60a-3362294e1ab8\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.620957 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c45f6608-4c27-4322-b60a-3362294e1ab8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c45f6608-4c27-4322-b60a-3362294e1ab8\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.620989 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c45f6608-4c27-4322-b60a-3362294e1ab8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c45f6608-4c27-4322-b60a-3362294e1ab8\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.621036 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c45f6608-4c27-4322-b60a-3362294e1ab8-config-data\") pod \"cinder-scheduler-0\" (UID: \"c45f6608-4c27-4322-b60a-3362294e1ab8\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.621069 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmwvg\" (UniqueName: \"kubernetes.io/projected/c45f6608-4c27-4322-b60a-3362294e1ab8-kube-api-access-fmwvg\") pod \"cinder-scheduler-0\" (UID: \"c45f6608-4c27-4322-b60a-3362294e1ab8\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.621108 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c45f6608-4c27-4322-b60a-3362294e1ab8-scripts\") pod \"cinder-scheduler-0\" (UID: \"c45f6608-4c27-4322-b60a-3362294e1ab8\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.723440 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c45f6608-4c27-4322-b60a-3362294e1ab8-scripts\") pod \"cinder-scheduler-0\" (UID: \"c45f6608-4c27-4322-b60a-3362294e1ab8\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.723646 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c45f6608-4c27-4322-b60a-3362294e1ab8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c45f6608-4c27-4322-b60a-3362294e1ab8\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.723712 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c45f6608-4c27-4322-b60a-3362294e1ab8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c45f6608-4c27-4322-b60a-3362294e1ab8\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.723752 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c45f6608-4c27-4322-b60a-3362294e1ab8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c45f6608-4c27-4322-b60a-3362294e1ab8\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.723791 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c45f6608-4c27-4322-b60a-3362294e1ab8-config-data\") pod \"cinder-scheduler-0\" (UID: \"c45f6608-4c27-4322-b60a-3362294e1ab8\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.723832 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmwvg\" (UniqueName: \"kubernetes.io/projected/c45f6608-4c27-4322-b60a-3362294e1ab8-kube-api-access-fmwvg\") pod \"cinder-scheduler-0\" (UID: \"c45f6608-4c27-4322-b60a-3362294e1ab8\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.730455 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c45f6608-4c27-4322-b60a-3362294e1ab8-scripts\") pod \"cinder-scheduler-0\" (UID: \"c45f6608-4c27-4322-b60a-3362294e1ab8\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.730562 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c45f6608-4c27-4322-b60a-3362294e1ab8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c45f6608-4c27-4322-b60a-3362294e1ab8\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.734117 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c45f6608-4c27-4322-b60a-3362294e1ab8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c45f6608-4c27-4322-b60a-3362294e1ab8\") " 
pod="openstack/cinder-scheduler-0" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.741648 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c45f6608-4c27-4322-b60a-3362294e1ab8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c45f6608-4c27-4322-b60a-3362294e1ab8\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.749669 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmwvg\" (UniqueName: \"kubernetes.io/projected/c45f6608-4c27-4322-b60a-3362294e1ab8-kube-api-access-fmwvg\") pod \"cinder-scheduler-0\" (UID: \"c45f6608-4c27-4322-b60a-3362294e1ab8\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.752422 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c45f6608-4c27-4322-b60a-3362294e1ab8-config-data\") pod \"cinder-scheduler-0\" (UID: \"c45f6608-4c27-4322-b60a-3362294e1ab8\") " pod="openstack/cinder-scheduler-0" Jan 31 09:25:40 crc kubenswrapper[4830]: I0131 09:25:40.846907 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 09:25:41 crc kubenswrapper[4830]: I0131 09:25:41.060765 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-666cdcb7b8-d25gt"] Jan 31 09:25:41 crc kubenswrapper[4830]: I0131 09:25:41.118886 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5db4bc48b8-mphcw"] Jan 31 09:25:41 crc kubenswrapper[4830]: I0131 09:25:41.173124 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-pjvlb"] Jan 31 09:25:41 crc kubenswrapper[4830]: I0131 09:25:41.350577 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-59478c766f-tgwgd"] Jan 31 09:25:41 crc kubenswrapper[4830]: I0131 09:25:41.414287 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5db4bc48b8-mphcw" event={"ID":"507f4c57-9369-4487-a575-370014e22eeb","Type":"ContainerStarted","Data":"253d32d18e3e567e17d7fea76b5a1330ed46251c63873a169cb22ffd0b28d3b7"} Jan 31 09:25:41 crc kubenswrapper[4830]: I0131 09:25:41.418073 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6cd8b566d4-4q75x" Jan 31 09:25:41 crc kubenswrapper[4830]: I0131 09:25:41.486462 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6cd8b566d4-4q75x" Jan 31 09:25:41 crc kubenswrapper[4830]: I0131 09:25:41.487352 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cd8b566d4-4q75x" event={"ID":"74254e68-cbf8-446e-a2d8-768185ec778f","Type":"ContainerDied","Data":"b6be3b1263caaf0265275aa0905943ece9fac8b1520e8c7e36464dae7cf5b417"} Jan 31 09:25:41 crc kubenswrapper[4830]: I0131 09:25:41.487400 4830 scope.go:117] "RemoveContainer" containerID="4c6decc22e93c41d7227bdefca13848b3fc35f5adc6c7d1553fa05be847967dc" Jan 31 09:25:41 crc kubenswrapper[4830]: I0131 09:25:41.582212 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-59478c766f-tgwgd" event={"ID":"e7f604a2-4cc7-4619-846c-51cb5cddffda","Type":"ContainerStarted","Data":"61551977f6b459885b384ceb5621b7aab419bb9bb31168466556d91d4f9e9cc9"} Jan 31 09:25:41 crc kubenswrapper[4830]: I0131 09:25:41.583006 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/74254e68-cbf8-446e-a2d8-768185ec778f-config\") pod \"74254e68-cbf8-446e-a2d8-768185ec778f\" (UID: \"74254e68-cbf8-446e-a2d8-768185ec778f\") " Jan 31 09:25:41 crc kubenswrapper[4830]: I0131 09:25:41.583241 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrmgs\" (UniqueName: \"kubernetes.io/projected/74254e68-cbf8-446e-a2d8-768185ec778f-kube-api-access-qrmgs\") pod \"74254e68-cbf8-446e-a2d8-768185ec778f\" (UID: \"74254e68-cbf8-446e-a2d8-768185ec778f\") " Jan 31 09:25:41 crc kubenswrapper[4830]: I0131 09:25:41.583262 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/74254e68-cbf8-446e-a2d8-768185ec778f-httpd-config\") pod \"74254e68-cbf8-446e-a2d8-768185ec778f\" (UID: \"74254e68-cbf8-446e-a2d8-768185ec778f\") " Jan 31 09:25:41 crc kubenswrapper[4830]: I0131 09:25:41.583305 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74254e68-cbf8-446e-a2d8-768185ec778f-combined-ca-bundle\") pod \"74254e68-cbf8-446e-a2d8-768185ec778f\" (UID: \"74254e68-cbf8-446e-a2d8-768185ec778f\") " Jan 31 09:25:41 crc kubenswrapper[4830]: I0131 09:25:41.583397 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/74254e68-cbf8-446e-a2d8-768185ec778f-ovndb-tls-certs\") pod \"74254e68-cbf8-446e-a2d8-768185ec778f\" (UID: \"74254e68-cbf8-446e-a2d8-768185ec778f\") " Jan 31 09:25:41 crc kubenswrapper[4830]: I0131 09:25:41.623086 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74254e68-cbf8-446e-a2d8-768185ec778f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "74254e68-cbf8-446e-a2d8-768185ec778f" (UID: "74254e68-cbf8-446e-a2d8-768185ec778f"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:41 crc kubenswrapper[4830]: I0131 09:25:41.628135 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74254e68-cbf8-446e-a2d8-768185ec778f-kube-api-access-qrmgs" (OuterVolumeSpecName: "kube-api-access-qrmgs") pod "74254e68-cbf8-446e-a2d8-768185ec778f" (UID: "74254e68-cbf8-446e-a2d8-768185ec778f"). InnerVolumeSpecName "kube-api-access-qrmgs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:25:41 crc kubenswrapper[4830]: I0131 09:25:41.636100 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-666cdcb7b8-d25gt" event={"ID":"07e6233b-8dfa-42db-8e5f-62dbe5372610","Type":"ContainerStarted","Data":"517129cbc72083038bc2ac937e20f39da5212e9a723992ee10bfc9669b2e90ac"} Jan 31 09:25:41 crc kubenswrapper[4830]: I0131 09:25:41.663804 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" event={"ID":"653ad6ae-7808-49a1-8f07-484c37dfeb66","Type":"ContainerStarted","Data":"0cac50a0805fe024c0c817cc9846648968338493666183a89803e21af108886e"} Jan 31 09:25:41 crc kubenswrapper[4830]: I0131 09:25:41.686836 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrmgs\" (UniqueName: \"kubernetes.io/projected/74254e68-cbf8-446e-a2d8-768185ec778f-kube-api-access-qrmgs\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:41 crc kubenswrapper[4830]: I0131 09:25:41.686893 4830 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/74254e68-cbf8-446e-a2d8-768185ec778f-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:41 crc kubenswrapper[4830]: I0131 09:25:41.761464 4830 scope.go:117] "RemoveContainer" containerID="9c7ed3187c5fd1fce5ed05e9c48a484b4b9883935cce4cff33ab889828b9bc46" Jan 31 09:25:41 crc kubenswrapper[4830]: I0131 09:25:41.916743 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74254e68-cbf8-446e-a2d8-768185ec778f-config" (OuterVolumeSpecName: "config") pod "74254e68-cbf8-446e-a2d8-768185ec778f" (UID: "74254e68-cbf8-446e-a2d8-768185ec778f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:41 crc kubenswrapper[4830]: I0131 09:25:41.942202 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/74254e68-cbf8-446e-a2d8-768185ec778f-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:42 crc kubenswrapper[4830]: I0131 09:25:42.146868 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 09:25:42 crc kubenswrapper[4830]: W0131 09:25:42.202693 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc45f6608_4c27_4322_b60a_3362294e1ab8.slice/crio-96118f3ea31665a9f6b1a2478a536157e7d021b7df03fe81d6a91b7533438476 WatchSource:0}: Error finding container 96118f3ea31665a9f6b1a2478a536157e7d021b7df03fe81d6a91b7533438476: Status 404 returned error can't find the container with id 96118f3ea31665a9f6b1a2478a536157e7d021b7df03fe81d6a91b7533438476 Jan 31 09:25:42 crc kubenswrapper[4830]: I0131 09:25:42.322175 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86af5a4c-fe49-4f01-a832-71260d0ad1e4" path="/var/lib/kubelet/pods/86af5a4c-fe49-4f01-a832-71260d0ad1e4/volumes" Jan 31 09:25:42 crc kubenswrapper[4830]: I0131 09:25:42.482389 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74254e68-cbf8-446e-a2d8-768185ec778f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74254e68-cbf8-446e-a2d8-768185ec778f" (UID: "74254e68-cbf8-446e-a2d8-768185ec778f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:42 crc kubenswrapper[4830]: I0131 09:25:42.487744 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74254e68-cbf8-446e-a2d8-768185ec778f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:42 crc kubenswrapper[4830]: I0131 09:25:42.491459 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74254e68-cbf8-446e-a2d8-768185ec778f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "74254e68-cbf8-446e-a2d8-768185ec778f" (UID: "74254e68-cbf8-446e-a2d8-768185ec778f"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:25:42 crc kubenswrapper[4830]: I0131 09:25:42.598088 4830 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/74254e68-cbf8-446e-a2d8-768185ec778f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:42 crc kubenswrapper[4830]: I0131 09:25:42.748548 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-666cdcb7b8-d25gt" event={"ID":"07e6233b-8dfa-42db-8e5f-62dbe5372610","Type":"ContainerStarted","Data":"e522becee83fe1b6467a13a68505e16f4514e10b42775e0bcb9984296834555d"} Jan 31 09:25:42 crc kubenswrapper[4830]: I0131 09:25:42.750109 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-666cdcb7b8-d25gt" Jan 31 09:25:42 crc kubenswrapper[4830]: I0131 09:25:42.858526 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-666cdcb7b8-d25gt" podStartSLOduration=4.858495499 podStartE2EDuration="4.858495499s" podCreationTimestamp="2026-01-31 09:25:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:25:42.834926929 +0000 UTC m=+1487.328289391" watchObservedRunningTime="2026-01-31 09:25:42.858495499 +0000 UTC m=+1487.351857961" Jan 31 09:25:42 crc kubenswrapper[4830]: I0131 09:25:42.865285 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" event={"ID":"653ad6ae-7808-49a1-8f07-484c37dfeb66","Type":"ContainerStarted","Data":"3623580d42f6ceb7e958776487d7e6fd090435cc50f969e862a9e9df4b46a30c"} Jan 31 09:25:42 crc kubenswrapper[4830]: I0131 09:25:42.887534 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c45f6608-4c27-4322-b60a-3362294e1ab8","Type":"ContainerStarted","Data":"96118f3ea31665a9f6b1a2478a536157e7d021b7df03fe81d6a91b7533438476"} Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.007818 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6cd8b566d4-4q75x"] Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.030362 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6cd8b566d4-4q75x"] Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.362663 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-f44b7d679-6khcx"] Jan 31 09:25:43 crc kubenswrapper[4830]: E0131 09:25:43.364565 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74254e68-cbf8-446e-a2d8-768185ec778f" containerName="neutron-httpd" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.364594 4830 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="74254e68-cbf8-446e-a2d8-768185ec778f" containerName="neutron-httpd" Jan 31 09:25:43 crc kubenswrapper[4830]: E0131 09:25:43.364633 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74254e68-cbf8-446e-a2d8-768185ec778f" containerName="neutron-api" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.364642 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="74254e68-cbf8-446e-a2d8-768185ec778f" containerName="neutron-api" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.364946 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="74254e68-cbf8-446e-a2d8-768185ec778f" containerName="neutron-api" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.364972 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="74254e68-cbf8-446e-a2d8-768185ec778f" containerName="neutron-httpd" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.366808 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.371468 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.371754 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.376914 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.391067 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-f44b7d679-6khcx"] Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.415211 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f99258ad-5714-491f-bdad-d7196ed9833a-run-httpd\") pod \"swift-proxy-f44b7d679-6khcx\" (UID: \"f99258ad-5714-491f-bdad-d7196ed9833a\") " pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.415468 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f99258ad-5714-491f-bdad-d7196ed9833a-config-data\") pod \"swift-proxy-f44b7d679-6khcx\" (UID: \"f99258ad-5714-491f-bdad-d7196ed9833a\") " pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.415654 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f99258ad-5714-491f-bdad-d7196ed9833a-log-httpd\") pod \"swift-proxy-f44b7d679-6khcx\" (UID: \"f99258ad-5714-491f-bdad-d7196ed9833a\") " pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.415815 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f99258ad-5714-491f-bdad-d7196ed9833a-internal-tls-certs\") pod \"swift-proxy-f44b7d679-6khcx\" (UID: \"f99258ad-5714-491f-bdad-d7196ed9833a\") " pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.416267 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/f99258ad-5714-491f-bdad-d7196ed9833a-etc-swift\") pod \"swift-proxy-f44b7d679-6khcx\" (UID: \"f99258ad-5714-491f-bdad-d7196ed9833a\") " pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.416560 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfvgt\" (UniqueName: \"kubernetes.io/projected/f99258ad-5714-491f-bdad-d7196ed9833a-kube-api-access-vfvgt\") pod \"swift-proxy-f44b7d679-6khcx\" (UID: \"f99258ad-5714-491f-bdad-d7196ed9833a\") " pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.416637 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f99258ad-5714-491f-bdad-d7196ed9833a-combined-ca-bundle\") pod \"swift-proxy-f44b7d679-6khcx\" (UID: \"f99258ad-5714-491f-bdad-d7196ed9833a\") " pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.416741 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f99258ad-5714-491f-bdad-d7196ed9833a-public-tls-certs\") pod \"swift-proxy-f44b7d679-6khcx\" (UID: \"f99258ad-5714-491f-bdad-d7196ed9833a\") " pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.522831 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfvgt\" (UniqueName: \"kubernetes.io/projected/f99258ad-5714-491f-bdad-d7196ed9833a-kube-api-access-vfvgt\") pod \"swift-proxy-f44b7d679-6khcx\" (UID: \"f99258ad-5714-491f-bdad-d7196ed9833a\") " pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.522896 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f99258ad-5714-491f-bdad-d7196ed9833a-combined-ca-bundle\") pod \"swift-proxy-f44b7d679-6khcx\" (UID: \"f99258ad-5714-491f-bdad-d7196ed9833a\") " pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.522932 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f99258ad-5714-491f-bdad-d7196ed9833a-public-tls-certs\") pod \"swift-proxy-f44b7d679-6khcx\" (UID: \"f99258ad-5714-491f-bdad-d7196ed9833a\") " pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.523030 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f99258ad-5714-491f-bdad-d7196ed9833a-run-httpd\") pod \"swift-proxy-f44b7d679-6khcx\" (UID: \"f99258ad-5714-491f-bdad-d7196ed9833a\") " pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.523064 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f99258ad-5714-491f-bdad-d7196ed9833a-log-httpd\") pod \"swift-proxy-f44b7d679-6khcx\" (UID: \"f99258ad-5714-491f-bdad-d7196ed9833a\") " pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.523084 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/f99258ad-5714-491f-bdad-d7196ed9833a-internal-tls-certs\") pod \"swift-proxy-f44b7d679-6khcx\" (UID: \"f99258ad-5714-491f-bdad-d7196ed9833a\") " pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.523101 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f99258ad-5714-491f-bdad-d7196ed9833a-config-data\") pod \"swift-proxy-f44b7d679-6khcx\" (UID: \"f99258ad-5714-491f-bdad-d7196ed9833a\") " pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.523171 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f99258ad-5714-491f-bdad-d7196ed9833a-etc-swift\") pod \"swift-proxy-f44b7d679-6khcx\" (UID: \"f99258ad-5714-491f-bdad-d7196ed9833a\") " pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.524038 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f99258ad-5714-491f-bdad-d7196ed9833a-log-httpd\") pod \"swift-proxy-f44b7d679-6khcx\" (UID: \"f99258ad-5714-491f-bdad-d7196ed9833a\") " pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.533187 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f99258ad-5714-491f-bdad-d7196ed9833a-public-tls-certs\") pod \"swift-proxy-f44b7d679-6khcx\" (UID: \"f99258ad-5714-491f-bdad-d7196ed9833a\") " pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.535160 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f99258ad-5714-491f-bdad-d7196ed9833a-internal-tls-certs\") pod \"swift-proxy-f44b7d679-6khcx\" (UID: \"f99258ad-5714-491f-bdad-d7196ed9833a\") " pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.536306 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f99258ad-5714-491f-bdad-d7196ed9833a-run-httpd\") pod \"swift-proxy-f44b7d679-6khcx\" (UID: \"f99258ad-5714-491f-bdad-d7196ed9833a\") " pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.545219 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f99258ad-5714-491f-bdad-d7196ed9833a-etc-swift\") pod \"swift-proxy-f44b7d679-6khcx\" (UID: \"f99258ad-5714-491f-bdad-d7196ed9833a\") " pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.550051 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f99258ad-5714-491f-bdad-d7196ed9833a-config-data\") pod \"swift-proxy-f44b7d679-6khcx\" (UID: \"f99258ad-5714-491f-bdad-d7196ed9833a\") " pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.552375 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfvgt\" (UniqueName: \"kubernetes.io/projected/f99258ad-5714-491f-bdad-d7196ed9833a-kube-api-access-vfvgt\") pod \"swift-proxy-f44b7d679-6khcx\" (UID: 
\"f99258ad-5714-491f-bdad-d7196ed9833a\") " pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.553836 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f99258ad-5714-491f-bdad-d7196ed9833a-combined-ca-bundle\") pod \"swift-proxy-f44b7d679-6khcx\" (UID: \"f99258ad-5714-491f-bdad-d7196ed9833a\") " pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.733509 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.872655 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="945c030b-2a43-431b-b898-d3a28b4e3821" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.209:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.974116 4830 generic.go:334] "Generic (PLEG): container finished" podID="653ad6ae-7808-49a1-8f07-484c37dfeb66" containerID="3623580d42f6ceb7e958776487d7e6fd090435cc50f969e862a9e9df4b46a30c" exitCode=0 Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.976523 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" event={"ID":"653ad6ae-7808-49a1-8f07-484c37dfeb66","Type":"ContainerDied","Data":"3623580d42f6ceb7e958776487d7e6fd090435cc50f969e862a9e9df4b46a30c"} Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.976874 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" Jan 31 09:25:43 crc kubenswrapper[4830]: I0131 09:25:43.976920 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" event={"ID":"653ad6ae-7808-49a1-8f07-484c37dfeb66","Type":"ContainerStarted","Data":"e175ac6da10a42a62ab3d7bc4e420a07f3178def8398c209a369acf2010f25b4"} Jan 31 09:25:44 crc kubenswrapper[4830]: I0131 09:25:44.034279 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" podStartSLOduration=6.034250515 podStartE2EDuration="6.034250515s" podCreationTimestamp="2026-01-31 09:25:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:25:44.009115041 +0000 UTC m=+1488.502477483" watchObservedRunningTime="2026-01-31 09:25:44.034250515 +0000 UTC m=+1488.527612957" Jan 31 09:25:44 crc kubenswrapper[4830]: I0131 09:25:44.289220 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74254e68-cbf8-446e-a2d8-768185ec778f" path="/var/lib/kubelet/pods/74254e68-cbf8-446e-a2d8-768185ec778f/volumes" Jan 31 09:25:44 crc kubenswrapper[4830]: I0131 09:25:44.353932 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:25:44 crc kubenswrapper[4830]: I0131 09:25:44.354025 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:25:44 crc kubenswrapper[4830]: I0131 09:25:44.354106 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 09:25:44 crc kubenswrapper[4830]: I0131 09:25:44.355327 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0"} pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 09:25:44 crc kubenswrapper[4830]: I0131 09:25:44.355401 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" containerID="cri-o://a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0" gracePeriod=600 Jan 31 09:25:44 crc kubenswrapper[4830]: I0131 09:25:44.833067 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="945c030b-2a43-431b-b898-d3a28b4e3821" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.209:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 09:25:44 crc kubenswrapper[4830]: I0131 09:25:44.866509 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-f44b7d679-6khcx"] Jan 31 09:25:45 crc kubenswrapper[4830]: I0131 09:25:45.041625 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c45f6608-4c27-4322-b60a-3362294e1ab8","Type":"ContainerStarted","Data":"51ac00061815f4f76b1bcd8da30d2e12d08c49a1d9468728407654b6e4ca4049"} Jan 31 09:25:45 crc kubenswrapper[4830]: I0131 09:25:45.049508 4830 generic.go:334] "Generic (PLEG): container finished" podID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0" exitCode=0 Jan 31 09:25:45 crc kubenswrapper[4830]: I0131 09:25:45.049595 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerDied","Data":"a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0"} Jan 31 09:25:45 crc kubenswrapper[4830]: I0131 09:25:45.049644 4830 scope.go:117] "RemoveContainer" containerID="67bf188d9d9b9ad6793313549c12d77b38caf6229dc0633ec340b752f089c942" Jan 31 09:25:45 crc kubenswrapper[4830]: I0131 09:25:45.065701 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f44b7d679-6khcx" event={"ID":"f99258ad-5714-491f-bdad-d7196ed9833a","Type":"ContainerStarted","Data":"49b02bce0b691db9f68a4c4e7bdf95a3e27f43997cf5c7049a069a7d509ee859"} Jan 31 09:25:45 crc kubenswrapper[4830]: E0131 09:25:45.218841 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" 
Jan 31 09:25:46 crc kubenswrapper[4830]: I0131 09:25:46.087647 4830 scope.go:117] "RemoveContainer" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0" Jan 31 09:25:46 crc kubenswrapper[4830]: E0131 09:25:46.088953 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:25:47 crc kubenswrapper[4830]: I0131 09:25:47.128890 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c45f6608-4c27-4322-b60a-3362294e1ab8","Type":"ContainerStarted","Data":"55dfad9e36ae880fcbc4a26b56d8f31d59aae9224ddb69aa9f6766ed9b694b9a"} Jan 31 09:25:47 crc kubenswrapper[4830]: I0131 09:25:47.164793 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.164766516 podStartE2EDuration="7.164766516s" podCreationTimestamp="2026-01-31 09:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:25:47.161927735 +0000 UTC m=+1491.655290187" watchObservedRunningTime="2026-01-31 09:25:47.164766516 +0000 UTC m=+1491.658128958" Jan 31 09:25:48 crc kubenswrapper[4830]: I0131 09:25:48.915384 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="945c030b-2a43-431b-b898-d3a28b4e3821" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.209:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 09:25:49 crc kubenswrapper[4830]: I0131 09:25:49.027952 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" Jan 31 09:25:49 crc kubenswrapper[4830]: I0131 09:25:49.160974 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-tqtdt"] Jan 31 09:25:49 crc kubenswrapper[4830]: I0131 09:25:49.161673 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" podUID="e51aef7d-4b7d-44da-8d0f-b0e2b86d2842" containerName="dnsmasq-dns" containerID="cri-o://1575f6c9bde9aa49bcd066c47e0fe165efc4fb44b0896b31be8c3f3ba23ffcc4" gracePeriod=10 Jan 31 09:25:49 crc kubenswrapper[4830]: I0131 09:25:49.838049 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="945c030b-2a43-431b-b898-d3a28b4e3821" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.209:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 09:25:49 crc kubenswrapper[4830]: I0131 09:25:49.977299 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-99b5d6b8d-v6s9l" Jan 31 09:25:50 crc kubenswrapper[4830]: I0131 09:25:50.107609 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-99b5d6b8d-v6s9l" Jan 31 09:25:50 crc kubenswrapper[4830]: I0131 09:25:50.200501 4830 generic.go:334] "Generic (PLEG): container finished" podID="e51aef7d-4b7d-44da-8d0f-b0e2b86d2842" 
containerID="1575f6c9bde9aa49bcd066c47e0fe165efc4fb44b0896b31be8c3f3ba23ffcc4" exitCode=0 Jan 31 09:25:50 crc kubenswrapper[4830]: I0131 09:25:50.201005 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" event={"ID":"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842","Type":"ContainerDied","Data":"1575f6c9bde9aa49bcd066c47e0fe165efc4fb44b0896b31be8c3f3ba23ffcc4"} Jan 31 09:25:50 crc kubenswrapper[4830]: I0131 09:25:50.213076 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7995f9f9fb-6r8k4"] Jan 31 09:25:50 crc kubenswrapper[4830]: I0131 09:25:50.213379 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7995f9f9fb-6r8k4" podUID="43cbd586-1683-440f-992a-113173028a37" containerName="placement-log" containerID="cri-o://eabcd1235c056e6d23ed658dd38e5f2e72bac2c473103f6f3e4acd1aa0dacec8" gracePeriod=30 Jan 31 09:25:50 crc kubenswrapper[4830]: I0131 09:25:50.214762 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7995f9f9fb-6r8k4" podUID="43cbd586-1683-440f-992a-113173028a37" containerName="placement-api" containerID="cri-o://7c6922b39c4dd9c7624db328248b385cabff90417731eff072afdbd30b6ab102" gracePeriod=30 Jan 31 09:25:50 crc kubenswrapper[4830]: I0131 09:25:50.334376 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 31 09:25:50 crc kubenswrapper[4830]: I0131 09:25:50.372958 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" podUID="e51aef7d-4b7d-44da-8d0f-b0e2b86d2842" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.206:5353: connect: connection refused" Jan 31 09:25:50 crc kubenswrapper[4830]: I0131 09:25:50.849767 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.180146 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.234054 4830 generic.go:334] "Generic (PLEG): container finished" podID="43cbd586-1683-440f-992a-113173028a37" containerID="eabcd1235c056e6d23ed658dd38e5f2e72bac2c473103f6f3e4acd1aa0dacec8" exitCode=143 Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.235750 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7995f9f9fb-6r8k4" event={"ID":"43cbd586-1683-440f-992a-113173028a37","Type":"ContainerDied","Data":"eabcd1235c056e6d23ed658dd38e5f2e72bac2c473103f6f3e4acd1aa0dacec8"} Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.286760 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-587fd67997-pvqls"] Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.288562 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-587fd67997-pvqls" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.328244 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-587fd67997-pvqls"] Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.357981 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6948bd58db-k47sz"] Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.360236 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6948bd58db-k47sz" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.388325 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5627b4b-982b-41c1-8ff9-8ca07513680d-combined-ca-bundle\") pod \"heat-engine-587fd67997-pvqls\" (UID: \"e5627b4b-982b-41c1-8ff9-8ca07513680d\") " pod="openstack/heat-engine-587fd67997-pvqls" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.388468 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5627b4b-982b-41c1-8ff9-8ca07513680d-config-data\") pod \"heat-engine-587fd67997-pvqls\" (UID: \"e5627b4b-982b-41c1-8ff9-8ca07513680d\") " pod="openstack/heat-engine-587fd67997-pvqls" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.388562 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmhw7\" (UniqueName: \"kubernetes.io/projected/8f3e08be-d310-4474-99fb-d9226ab6eedb-kube-api-access-pmhw7\") pod \"heat-api-6948bd58db-k47sz\" (UID: \"8f3e08be-d310-4474-99fb-d9226ab6eedb\") " pod="openstack/heat-api-6948bd58db-k47sz" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.388624 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f3e08be-d310-4474-99fb-d9226ab6eedb-config-data-custom\") pod \"heat-api-6948bd58db-k47sz\" (UID: \"8f3e08be-d310-4474-99fb-d9226ab6eedb\") " pod="openstack/heat-api-6948bd58db-k47sz" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.388658 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f3e08be-d310-4474-99fb-d9226ab6eedb-config-data\") pod \"heat-api-6948bd58db-k47sz\" (UID: \"8f3e08be-d310-4474-99fb-d9226ab6eedb\") " pod="openstack/heat-api-6948bd58db-k47sz" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.388682 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsmkl\" (UniqueName: \"kubernetes.io/projected/e5627b4b-982b-41c1-8ff9-8ca07513680d-kube-api-access-qsmkl\") pod \"heat-engine-587fd67997-pvqls\" (UID: \"e5627b4b-982b-41c1-8ff9-8ca07513680d\") " pod="openstack/heat-engine-587fd67997-pvqls" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.388819 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3e08be-d310-4474-99fb-d9226ab6eedb-combined-ca-bundle\") pod \"heat-api-6948bd58db-k47sz\" (UID: \"8f3e08be-d310-4474-99fb-d9226ab6eedb\") " pod="openstack/heat-api-6948bd58db-k47sz" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.388880 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5627b4b-982b-41c1-8ff9-8ca07513680d-config-data-custom\") pod \"heat-engine-587fd67997-pvqls\" (UID: \"e5627b4b-982b-41c1-8ff9-8ca07513680d\") " pod="openstack/heat-engine-587fd67997-pvqls" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.518196 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmhw7\" (UniqueName: 
\"kubernetes.io/projected/8f3e08be-d310-4474-99fb-d9226ab6eedb-kube-api-access-pmhw7\") pod \"heat-api-6948bd58db-k47sz\" (UID: \"8f3e08be-d310-4474-99fb-d9226ab6eedb\") " pod="openstack/heat-api-6948bd58db-k47sz" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.518405 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f3e08be-d310-4474-99fb-d9226ab6eedb-config-data-custom\") pod \"heat-api-6948bd58db-k47sz\" (UID: \"8f3e08be-d310-4474-99fb-d9226ab6eedb\") " pod="openstack/heat-api-6948bd58db-k47sz" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.518491 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f3e08be-d310-4474-99fb-d9226ab6eedb-config-data\") pod \"heat-api-6948bd58db-k47sz\" (UID: \"8f3e08be-d310-4474-99fb-d9226ab6eedb\") " pod="openstack/heat-api-6948bd58db-k47sz" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.518522 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsmkl\" (UniqueName: \"kubernetes.io/projected/e5627b4b-982b-41c1-8ff9-8ca07513680d-kube-api-access-qsmkl\") pod \"heat-engine-587fd67997-pvqls\" (UID: \"e5627b4b-982b-41c1-8ff9-8ca07513680d\") " pod="openstack/heat-engine-587fd67997-pvqls" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.518529 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-f57b45989-7xfmm"] Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.518641 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3e08be-d310-4474-99fb-d9226ab6eedb-combined-ca-bundle\") pod \"heat-api-6948bd58db-k47sz\" (UID: \"8f3e08be-d310-4474-99fb-d9226ab6eedb\") " pod="openstack/heat-api-6948bd58db-k47sz" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.518770 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5627b4b-982b-41c1-8ff9-8ca07513680d-config-data-custom\") pod \"heat-engine-587fd67997-pvqls\" (UID: \"e5627b4b-982b-41c1-8ff9-8ca07513680d\") " pod="openstack/heat-engine-587fd67997-pvqls" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.518915 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5627b4b-982b-41c1-8ff9-8ca07513680d-combined-ca-bundle\") pod \"heat-engine-587fd67997-pvqls\" (UID: \"e5627b4b-982b-41c1-8ff9-8ca07513680d\") " pod="openstack/heat-engine-587fd67997-pvqls" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.519181 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5627b4b-982b-41c1-8ff9-8ca07513680d-config-data\") pod \"heat-engine-587fd67997-pvqls\" (UID: \"e5627b4b-982b-41c1-8ff9-8ca07513680d\") " pod="openstack/heat-engine-587fd67997-pvqls" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.528627 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3e08be-d310-4474-99fb-d9226ab6eedb-combined-ca-bundle\") pod \"heat-api-6948bd58db-k47sz\" (UID: \"8f3e08be-d310-4474-99fb-d9226ab6eedb\") " pod="openstack/heat-api-6948bd58db-k47sz" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.529284 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5627b4b-982b-41c1-8ff9-8ca07513680d-config-data-custom\") pod \"heat-engine-587fd67997-pvqls\" (UID: \"e5627b4b-982b-41c1-8ff9-8ca07513680d\") " pod="openstack/heat-engine-587fd67997-pvqls" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.531046 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f3e08be-d310-4474-99fb-d9226ab6eedb-config-data-custom\") pod \"heat-api-6948bd58db-k47sz\" (UID: \"8f3e08be-d310-4474-99fb-d9226ab6eedb\") " pod="openstack/heat-api-6948bd58db-k47sz" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.533022 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-f57b45989-7xfmm" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.538892 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f3e08be-d310-4474-99fb-d9226ab6eedb-config-data\") pod \"heat-api-6948bd58db-k47sz\" (UID: \"8f3e08be-d310-4474-99fb-d9226ab6eedb\") " pod="openstack/heat-api-6948bd58db-k47sz" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.543829 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5627b4b-982b-41c1-8ff9-8ca07513680d-combined-ca-bundle\") pod \"heat-engine-587fd67997-pvqls\" (UID: \"e5627b4b-982b-41c1-8ff9-8ca07513680d\") " pod="openstack/heat-engine-587fd67997-pvqls" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.549774 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmhw7\" (UniqueName: \"kubernetes.io/projected/8f3e08be-d310-4474-99fb-d9226ab6eedb-kube-api-access-pmhw7\") pod \"heat-api-6948bd58db-k47sz\" (UID: \"8f3e08be-d310-4474-99fb-d9226ab6eedb\") " pod="openstack/heat-api-6948bd58db-k47sz" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.551950 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsmkl\" (UniqueName: \"kubernetes.io/projected/e5627b4b-982b-41c1-8ff9-8ca07513680d-kube-api-access-qsmkl\") pod \"heat-engine-587fd67997-pvqls\" (UID: \"e5627b4b-982b-41c1-8ff9-8ca07513680d\") " pod="openstack/heat-engine-587fd67997-pvqls" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.578258 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5627b4b-982b-41c1-8ff9-8ca07513680d-config-data\") pod \"heat-engine-587fd67997-pvqls\" (UID: \"e5627b4b-982b-41c1-8ff9-8ca07513680d\") " pod="openstack/heat-engine-587fd67997-pvqls" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.583095 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6948bd58db-k47sz"] Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.606587 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-f57b45989-7xfmm"] Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.624251 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdcbs\" (UniqueName: \"kubernetes.io/projected/de11d1f2-fd91-48c7-9dc3-79748064e53d-kube-api-access-tdcbs\") pod \"heat-cfnapi-f57b45989-7xfmm\" (UID: \"de11d1f2-fd91-48c7-9dc3-79748064e53d\") " pod="openstack/heat-cfnapi-f57b45989-7xfmm" Jan 31 09:25:51 crc 
kubenswrapper[4830]: I0131 09:25:51.624783 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de11d1f2-fd91-48c7-9dc3-79748064e53d-combined-ca-bundle\") pod \"heat-cfnapi-f57b45989-7xfmm\" (UID: \"de11d1f2-fd91-48c7-9dc3-79748064e53d\") " pod="openstack/heat-cfnapi-f57b45989-7xfmm" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.624854 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de11d1f2-fd91-48c7-9dc3-79748064e53d-config-data-custom\") pod \"heat-cfnapi-f57b45989-7xfmm\" (UID: \"de11d1f2-fd91-48c7-9dc3-79748064e53d\") " pod="openstack/heat-cfnapi-f57b45989-7xfmm" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.625312 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de11d1f2-fd91-48c7-9dc3-79748064e53d-config-data\") pod \"heat-cfnapi-f57b45989-7xfmm\" (UID: \"de11d1f2-fd91-48c7-9dc3-79748064e53d\") " pod="openstack/heat-cfnapi-f57b45989-7xfmm" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.729079 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de11d1f2-fd91-48c7-9dc3-79748064e53d-combined-ca-bundle\") pod \"heat-cfnapi-f57b45989-7xfmm\" (UID: \"de11d1f2-fd91-48c7-9dc3-79748064e53d\") " pod="openstack/heat-cfnapi-f57b45989-7xfmm" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.729208 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de11d1f2-fd91-48c7-9dc3-79748064e53d-config-data-custom\") pod \"heat-cfnapi-f57b45989-7xfmm\" (UID: \"de11d1f2-fd91-48c7-9dc3-79748064e53d\") " pod="openstack/heat-cfnapi-f57b45989-7xfmm" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.729359 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de11d1f2-fd91-48c7-9dc3-79748064e53d-config-data\") pod \"heat-cfnapi-f57b45989-7xfmm\" (UID: \"de11d1f2-fd91-48c7-9dc3-79748064e53d\") " pod="openstack/heat-cfnapi-f57b45989-7xfmm" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.729440 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdcbs\" (UniqueName: \"kubernetes.io/projected/de11d1f2-fd91-48c7-9dc3-79748064e53d-kube-api-access-tdcbs\") pod \"heat-cfnapi-f57b45989-7xfmm\" (UID: \"de11d1f2-fd91-48c7-9dc3-79748064e53d\") " pod="openstack/heat-cfnapi-f57b45989-7xfmm" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.736857 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de11d1f2-fd91-48c7-9dc3-79748064e53d-config-data-custom\") pod \"heat-cfnapi-f57b45989-7xfmm\" (UID: \"de11d1f2-fd91-48c7-9dc3-79748064e53d\") " pod="openstack/heat-cfnapi-f57b45989-7xfmm" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.745278 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de11d1f2-fd91-48c7-9dc3-79748064e53d-combined-ca-bundle\") pod \"heat-cfnapi-f57b45989-7xfmm\" (UID: \"de11d1f2-fd91-48c7-9dc3-79748064e53d\") " pod="openstack/heat-cfnapi-f57b45989-7xfmm" Jan 31 09:25:51 crc 
kubenswrapper[4830]: I0131 09:25:51.745419 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de11d1f2-fd91-48c7-9dc3-79748064e53d-config-data\") pod \"heat-cfnapi-f57b45989-7xfmm\" (UID: \"de11d1f2-fd91-48c7-9dc3-79748064e53d\") " pod="openstack/heat-cfnapi-f57b45989-7xfmm" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.746252 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6948bd58db-k47sz" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.752263 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdcbs\" (UniqueName: \"kubernetes.io/projected/de11d1f2-fd91-48c7-9dc3-79748064e53d-kube-api-access-tdcbs\") pod \"heat-cfnapi-f57b45989-7xfmm\" (UID: \"de11d1f2-fd91-48c7-9dc3-79748064e53d\") " pod="openstack/heat-cfnapi-f57b45989-7xfmm" Jan 31 09:25:51 crc kubenswrapper[4830]: I0131 09:25:51.756829 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-587fd67997-pvqls" Jan 31 09:25:52 crc kubenswrapper[4830]: I0131 09:25:52.000169 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-f57b45989-7xfmm" Jan 31 09:25:53 crc kubenswrapper[4830]: I0131 09:25:53.725512 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-59478c766f-tgwgd"] Jan 31 09:25:53 crc kubenswrapper[4830]: I0131 09:25:53.783896 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5db4bc48b8-mphcw"] Jan 31 09:25:53 crc kubenswrapper[4830]: I0131 09:25:53.832837 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-bcd57748c-bwxdf"] Jan 31 09:25:53 crc kubenswrapper[4830]: I0131 09:25:53.854296 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-bcd57748c-bwxdf" Jan 31 09:25:53 crc kubenswrapper[4830]: I0131 09:25:53.862660 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-bcd57748c-bwxdf"] Jan 31 09:25:53 crc kubenswrapper[4830]: I0131 09:25:53.864937 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 31 09:25:53 crc kubenswrapper[4830]: I0131 09:25:53.865174 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 31 09:25:53 crc kubenswrapper[4830]: I0131 09:25:53.912864 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-internal-tls-certs\") pod \"heat-api-bcd57748c-bwxdf\" (UID: \"de2e8918-df90-4e54-8365-e7148dbdbcd1\") " pod="openstack/heat-api-bcd57748c-bwxdf" Jan 31 09:25:53 crc kubenswrapper[4830]: I0131 09:25:53.913497 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-combined-ca-bundle\") pod \"heat-api-bcd57748c-bwxdf\" (UID: \"de2e8918-df90-4e54-8365-e7148dbdbcd1\") " pod="openstack/heat-api-bcd57748c-bwxdf" Jan 31 09:25:53 crc kubenswrapper[4830]: I0131 09:25:53.913598 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-config-data\") pod \"heat-api-bcd57748c-bwxdf\" (UID: \"de2e8918-df90-4e54-8365-e7148dbdbcd1\") " pod="openstack/heat-api-bcd57748c-bwxdf" Jan 31 09:25:53 crc kubenswrapper[4830]: I0131 09:25:53.913646 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6q7v\" (UniqueName: \"kubernetes.io/projected/de2e8918-df90-4e54-8365-e7148dbdbcd1-kube-api-access-r6q7v\") pod \"heat-api-bcd57748c-bwxdf\" (UID: \"de2e8918-df90-4e54-8365-e7148dbdbcd1\") " pod="openstack/heat-api-bcd57748c-bwxdf" Jan 31 09:25:53 crc kubenswrapper[4830]: I0131 09:25:53.913872 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-public-tls-certs\") pod \"heat-api-bcd57748c-bwxdf\" (UID: \"de2e8918-df90-4e54-8365-e7148dbdbcd1\") " pod="openstack/heat-api-bcd57748c-bwxdf" Jan 31 09:25:53 crc kubenswrapper[4830]: I0131 09:25:53.913970 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-config-data-custom\") pod \"heat-api-bcd57748c-bwxdf\" (UID: \"de2e8918-df90-4e54-8365-e7148dbdbcd1\") " pod="openstack/heat-api-bcd57748c-bwxdf" Jan 31 09:25:53 crc kubenswrapper[4830]: I0131 09:25:53.912936 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-9f575bfb8-72ll7"] Jan 31 09:25:53 crc kubenswrapper[4830]: I0131 09:25:53.930338 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-9f575bfb8-72ll7" Jan 31 09:25:53 crc kubenswrapper[4830]: I0131 09:25:53.947100 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 31 09:25:53 crc kubenswrapper[4830]: I0131 09:25:53.947407 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 31 09:25:53 crc kubenswrapper[4830]: I0131 09:25:53.973544 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-9f575bfb8-72ll7"] Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.021940 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtvcl\" (UniqueName: \"kubernetes.io/projected/71e048bc-59e9-496e-8883-5374a863a094-kube-api-access-jtvcl\") pod \"heat-cfnapi-9f575bfb8-72ll7\" (UID: \"71e048bc-59e9-496e-8883-5374a863a094\") " pod="openstack/heat-cfnapi-9f575bfb8-72ll7" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.022004 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-internal-tls-certs\") pod \"heat-cfnapi-9f575bfb8-72ll7\" (UID: \"71e048bc-59e9-496e-8883-5374a863a094\") " pod="openstack/heat-cfnapi-9f575bfb8-72ll7" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.022050 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-config-data\") pod \"heat-api-bcd57748c-bwxdf\" (UID: \"de2e8918-df90-4e54-8365-e7148dbdbcd1\") " pod="openstack/heat-api-bcd57748c-bwxdf" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.022100 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6q7v\" (UniqueName: \"kubernetes.io/projected/de2e8918-df90-4e54-8365-e7148dbdbcd1-kube-api-access-r6q7v\") pod \"heat-api-bcd57748c-bwxdf\" (UID: \"de2e8918-df90-4e54-8365-e7148dbdbcd1\") " pod="openstack/heat-api-bcd57748c-bwxdf" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.022209 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-public-tls-certs\") pod \"heat-api-bcd57748c-bwxdf\" (UID: \"de2e8918-df90-4e54-8365-e7148dbdbcd1\") " pod="openstack/heat-api-bcd57748c-bwxdf" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.022265 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-config-data-custom\") pod \"heat-api-bcd57748c-bwxdf\" (UID: \"de2e8918-df90-4e54-8365-e7148dbdbcd1\") " pod="openstack/heat-api-bcd57748c-bwxdf" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.022287 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-config-data-custom\") pod \"heat-cfnapi-9f575bfb8-72ll7\" (UID: \"71e048bc-59e9-496e-8883-5374a863a094\") " pod="openstack/heat-cfnapi-9f575bfb8-72ll7" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.022310 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-internal-tls-certs\") pod \"heat-api-bcd57748c-bwxdf\" (UID: \"de2e8918-df90-4e54-8365-e7148dbdbcd1\") " pod="openstack/heat-api-bcd57748c-bwxdf" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.022348 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-combined-ca-bundle\") pod \"heat-cfnapi-9f575bfb8-72ll7\" (UID: \"71e048bc-59e9-496e-8883-5374a863a094\") " pod="openstack/heat-cfnapi-9f575bfb8-72ll7" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.022399 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-public-tls-certs\") pod \"heat-cfnapi-9f575bfb8-72ll7\" (UID: \"71e048bc-59e9-496e-8883-5374a863a094\") " pod="openstack/heat-cfnapi-9f575bfb8-72ll7" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.032052 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-combined-ca-bundle\") pod \"heat-api-bcd57748c-bwxdf\" (UID: \"de2e8918-df90-4e54-8365-e7148dbdbcd1\") " pod="openstack/heat-api-bcd57748c-bwxdf" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.032137 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-config-data\") pod \"heat-cfnapi-9f575bfb8-72ll7\" (UID: \"71e048bc-59e9-496e-8883-5374a863a094\") " pod="openstack/heat-cfnapi-9f575bfb8-72ll7" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.062338 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-combined-ca-bundle\") pod \"heat-api-bcd57748c-bwxdf\" (UID: \"de2e8918-df90-4e54-8365-e7148dbdbcd1\") " pod="openstack/heat-api-bcd57748c-bwxdf" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.065032 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-config-data\") pod \"heat-api-bcd57748c-bwxdf\" (UID: \"de2e8918-df90-4e54-8365-e7148dbdbcd1\") " pod="openstack/heat-api-bcd57748c-bwxdf" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.066856 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-config-data-custom\") pod \"heat-api-bcd57748c-bwxdf\" (UID: \"de2e8918-df90-4e54-8365-e7148dbdbcd1\") " pod="openstack/heat-api-bcd57748c-bwxdf" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.089586 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6q7v\" (UniqueName: \"kubernetes.io/projected/de2e8918-df90-4e54-8365-e7148dbdbcd1-kube-api-access-r6q7v\") pod \"heat-api-bcd57748c-bwxdf\" (UID: \"de2e8918-df90-4e54-8365-e7148dbdbcd1\") " pod="openstack/heat-api-bcd57748c-bwxdf" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.090526 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-public-tls-certs\") pod \"heat-api-bcd57748c-bwxdf\" (UID: \"de2e8918-df90-4e54-8365-e7148dbdbcd1\") " pod="openstack/heat-api-bcd57748c-bwxdf" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.105360 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-internal-tls-certs\") pod \"heat-api-bcd57748c-bwxdf\" (UID: \"de2e8918-df90-4e54-8365-e7148dbdbcd1\") " pod="openstack/heat-api-bcd57748c-bwxdf" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.144460 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-config-data\") pod \"heat-cfnapi-9f575bfb8-72ll7\" (UID: \"71e048bc-59e9-496e-8883-5374a863a094\") " pod="openstack/heat-cfnapi-9f575bfb8-72ll7" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.144549 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtvcl\" (UniqueName: \"kubernetes.io/projected/71e048bc-59e9-496e-8883-5374a863a094-kube-api-access-jtvcl\") pod \"heat-cfnapi-9f575bfb8-72ll7\" (UID: \"71e048bc-59e9-496e-8883-5374a863a094\") " pod="openstack/heat-cfnapi-9f575bfb8-72ll7" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.144608 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-internal-tls-certs\") pod \"heat-cfnapi-9f575bfb8-72ll7\" (UID: \"71e048bc-59e9-496e-8883-5374a863a094\") " pod="openstack/heat-cfnapi-9f575bfb8-72ll7" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.145070 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-config-data-custom\") pod \"heat-cfnapi-9f575bfb8-72ll7\" (UID: \"71e048bc-59e9-496e-8883-5374a863a094\") " pod="openstack/heat-cfnapi-9f575bfb8-72ll7" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.145179 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-combined-ca-bundle\") pod \"heat-cfnapi-9f575bfb8-72ll7\" (UID: \"71e048bc-59e9-496e-8883-5374a863a094\") " pod="openstack/heat-cfnapi-9f575bfb8-72ll7" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.156454 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-public-tls-certs\") pod \"heat-cfnapi-9f575bfb8-72ll7\" (UID: \"71e048bc-59e9-496e-8883-5374a863a094\") " pod="openstack/heat-cfnapi-9f575bfb8-72ll7" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.170967 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-config-data-custom\") pod \"heat-cfnapi-9f575bfb8-72ll7\" (UID: \"71e048bc-59e9-496e-8883-5374a863a094\") " pod="openstack/heat-cfnapi-9f575bfb8-72ll7" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.171375 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-internal-tls-certs\") pod \"heat-cfnapi-9f575bfb8-72ll7\" (UID: \"71e048bc-59e9-496e-8883-5374a863a094\") " pod="openstack/heat-cfnapi-9f575bfb8-72ll7" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.174199 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-config-data\") pod \"heat-cfnapi-9f575bfb8-72ll7\" (UID: \"71e048bc-59e9-496e-8883-5374a863a094\") " pod="openstack/heat-cfnapi-9f575bfb8-72ll7" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.174426 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-combined-ca-bundle\") pod \"heat-cfnapi-9f575bfb8-72ll7\" (UID: \"71e048bc-59e9-496e-8883-5374a863a094\") " pod="openstack/heat-cfnapi-9f575bfb8-72ll7" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.176672 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-public-tls-certs\") pod \"heat-cfnapi-9f575bfb8-72ll7\" (UID: \"71e048bc-59e9-496e-8883-5374a863a094\") " pod="openstack/heat-cfnapi-9f575bfb8-72ll7" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.186682 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-bcd57748c-bwxdf" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.220909 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtvcl\" (UniqueName: \"kubernetes.io/projected/71e048bc-59e9-496e-8883-5374a863a094-kube-api-access-jtvcl\") pod \"heat-cfnapi-9f575bfb8-72ll7\" (UID: \"71e048bc-59e9-496e-8883-5374a863a094\") " pod="openstack/heat-cfnapi-9f575bfb8-72ll7" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.285948 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-9f575bfb8-72ll7" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.357749 4830 generic.go:334] "Generic (PLEG): container finished" podID="43cbd586-1683-440f-992a-113173028a37" containerID="7c6922b39c4dd9c7624db328248b385cabff90417731eff072afdbd30b6ab102" exitCode=0 Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.357838 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7995f9f9fb-6r8k4" event={"ID":"43cbd586-1683-440f-992a-113173028a37","Type":"ContainerDied","Data":"7c6922b39c4dd9c7624db328248b385cabff90417731eff072afdbd30b6ab102"} Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.465886 4830 util.go:48] "No ready sandbox for pod can be found. 
Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.465886 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-tqtdt"
Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.612021 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-ovsdbserver-nb\") pod \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\" (UID: \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\") "
Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.614530 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-ovsdbserver-sb\") pod \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\" (UID: \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\") "
Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.614642 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-dns-swift-storage-0\") pod \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\" (UID: \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\") "
Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.614795 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-dns-svc\") pod \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\" (UID: \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\") "
Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.615063 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-config\") pod \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\" (UID: \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\") "
Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.615188 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbxzp\" (UniqueName: \"kubernetes.io/projected/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-kube-api-access-kbxzp\") pod \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\" (UID: \"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842\") "
Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.629997 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-kube-api-access-kbxzp" (OuterVolumeSpecName: "kube-api-access-kbxzp") pod "e51aef7d-4b7d-44da-8d0f-b0e2b86d2842" (UID: "e51aef7d-4b7d-44da-8d0f-b0e2b86d2842"). InnerVolumeSpecName "kube-api-access-kbxzp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.713645 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e51aef7d-4b7d-44da-8d0f-b0e2b86d2842" (UID: "e51aef7d-4b7d-44da-8d0f-b0e2b86d2842"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.715671 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e51aef7d-4b7d-44da-8d0f-b0e2b86d2842" (UID: "e51aef7d-4b7d-44da-8d0f-b0e2b86d2842"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.720416 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.720457 4830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.720472 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbxzp\" (UniqueName: \"kubernetes.io/projected/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-kube-api-access-kbxzp\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.743270 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e51aef7d-4b7d-44da-8d0f-b0e2b86d2842" (UID: "e51aef7d-4b7d-44da-8d0f-b0e2b86d2842"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.756570 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-config" (OuterVolumeSpecName: "config") pod "e51aef7d-4b7d-44da-8d0f-b0e2b86d2842" (UID: "e51aef7d-4b7d-44da-8d0f-b0e2b86d2842"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.771849 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e51aef7d-4b7d-44da-8d0f-b0e2b86d2842" (UID: "e51aef7d-4b7d-44da-8d0f-b0e2b86d2842"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.822750 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.822795 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:54 crc kubenswrapper[4830]: I0131 09:25:54.822804 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:55 crc kubenswrapper[4830]: I0131 09:25:55.505005 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-tqtdt" event={"ID":"e51aef7d-4b7d-44da-8d0f-b0e2b86d2842","Type":"ContainerDied","Data":"3a2459c0d0919385bb4e4d427ba3d92d0c3a9d7416f91e67527916d5b90e051f"} Jan 31 09:25:55 crc kubenswrapper[4830]: I0131 09:25:55.505589 4830 scope.go:117] "RemoveContainer" containerID="1575f6c9bde9aa49bcd066c47e0fe165efc4fb44b0896b31be8c3f3ba23ffcc4" Jan 31 09:25:55 crc kubenswrapper[4830]: I0131 09:25:55.505123 4830 util.go:48] "No ready sandbox for pod can be found. 
Jan 31 09:25:55 crc kubenswrapper[4830]: I0131 09:25:55.505123 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-tqtdt"
Jan 31 09:25:55 crc kubenswrapper[4830]: I0131 09:25:55.573640 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-tqtdt"]
Jan 31 09:25:55 crc kubenswrapper[4830]: I0131 09:25:55.585135 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-tqtdt"]
Jan 31 09:25:56 crc kubenswrapper[4830]: I0131 09:25:56.233264 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 09:25:56 crc kubenswrapper[4830]: I0131 09:25:56.234146 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd3d398c-5a9f-4835-9b6a-6700097e85ed" containerName="ceilometer-central-agent" containerID="cri-o://cb4a05ac9302c7356f4830d38a00f8f941d688e43deecb9bdbf3ea14257b5c5e" gracePeriod=30
Jan 31 09:25:56 crc kubenswrapper[4830]: I0131 09:25:56.234686 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd3d398c-5a9f-4835-9b6a-6700097e85ed" containerName="proxy-httpd" containerID="cri-o://432786d33d3771aab5e5d32e3cafd9b8a281299a22963e8a340e9dc5bdc1494a" gracePeriod=30
Jan 31 09:25:56 crc kubenswrapper[4830]: I0131 09:25:56.234763 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd3d398c-5a9f-4835-9b6a-6700097e85ed" containerName="sg-core" containerID="cri-o://00d9abf46523e252c342902e9571685e6008daa60278da5c785f45f9d550fc4b" gracePeriod=30
Jan 31 09:25:56 crc kubenswrapper[4830]: I0131 09:25:56.234805 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd3d398c-5a9f-4835-9b6a-6700097e85ed" containerName="ceilometer-notification-agent" containerID="cri-o://410d3fa387ba52fc900df14a4ccefea9f4c22babba4e0a3efb0d6b88d925adb6" gracePeriod=30
Jan 31 09:25:56 crc kubenswrapper[4830]: I0131 09:25:56.261197 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="fd3d398c-5a9f-4835-9b6a-6700097e85ed" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.210:3000/\": EOF"
Jan 31 09:25:56 crc kubenswrapper[4830]: I0131 09:25:56.288689 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e51aef7d-4b7d-44da-8d0f-b0e2b86d2842" path="/var/lib/kubelet/pods/e51aef7d-4b7d-44da-8d0f-b0e2b86d2842/volumes"
Jan 31 09:25:56 crc kubenswrapper[4830]: I0131 09:25:56.565078 4830 generic.go:334] "Generic (PLEG): container finished" podID="fd3d398c-5a9f-4835-9b6a-6700097e85ed" containerID="432786d33d3771aab5e5d32e3cafd9b8a281299a22963e8a340e9dc5bdc1494a" exitCode=0
Jan 31 09:25:56 crc kubenswrapper[4830]: I0131 09:25:56.565116 4830 generic.go:334] "Generic (PLEG): container finished" podID="fd3d398c-5a9f-4835-9b6a-6700097e85ed" containerID="00d9abf46523e252c342902e9571685e6008daa60278da5c785f45f9d550fc4b" exitCode=2
Jan 31 09:25:56 crc kubenswrapper[4830]: I0131 09:25:56.565138 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd3d398c-5a9f-4835-9b6a-6700097e85ed","Type":"ContainerDied","Data":"432786d33d3771aab5e5d32e3cafd9b8a281299a22963e8a340e9dc5bdc1494a"}
Jan 31 09:25:56 crc kubenswrapper[4830]: I0131 09:25:56.565167 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"fd3d398c-5a9f-4835-9b6a-6700097e85ed","Type":"ContainerDied","Data":"00d9abf46523e252c342902e9571685e6008daa60278da5c785f45f9d550fc4b"} Jan 31 09:25:56 crc kubenswrapper[4830]: E0131 09:25:56.609406 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd3d398c_5a9f_4835_9b6a_6700097e85ed.slice/crio-conmon-00d9abf46523e252c342902e9571685e6008daa60278da5c785f45f9d550fc4b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd3d398c_5a9f_4835_9b6a_6700097e85ed.slice/crio-00d9abf46523e252c342902e9571685e6008daa60278da5c785f45f9d550fc4b.scope\": RecentStats: unable to find data in memory cache]" Jan 31 09:25:57 crc kubenswrapper[4830]: I0131 09:25:57.596886 4830 generic.go:334] "Generic (PLEG): container finished" podID="fd3d398c-5a9f-4835-9b6a-6700097e85ed" containerID="410d3fa387ba52fc900df14a4ccefea9f4c22babba4e0a3efb0d6b88d925adb6" exitCode=0 Jan 31 09:25:57 crc kubenswrapper[4830]: I0131 09:25:57.597426 4830 generic.go:334] "Generic (PLEG): container finished" podID="fd3d398c-5a9f-4835-9b6a-6700097e85ed" containerID="cb4a05ac9302c7356f4830d38a00f8f941d688e43deecb9bdbf3ea14257b5c5e" exitCode=0 Jan 31 09:25:57 crc kubenswrapper[4830]: I0131 09:25:57.596985 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd3d398c-5a9f-4835-9b6a-6700097e85ed","Type":"ContainerDied","Data":"410d3fa387ba52fc900df14a4ccefea9f4c22babba4e0a3efb0d6b88d925adb6"} Jan 31 09:25:57 crc kubenswrapper[4830]: I0131 09:25:57.597480 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd3d398c-5a9f-4835-9b6a-6700097e85ed","Type":"ContainerDied","Data":"cb4a05ac9302c7356f4830d38a00f8f941d688e43deecb9bdbf3ea14257b5c5e"} Jan 31 09:25:58 crc kubenswrapper[4830]: I0131 09:25:58.936582 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-666cdcb7b8-d25gt" Jan 31 09:25:59 crc kubenswrapper[4830]: I0131 09:25:59.917028 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="fd3d398c-5a9f-4835-9b6a-6700097e85ed" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.210:3000/\": dial tcp 10.217.0.210:3000: connect: connection refused" Jan 31 09:26:00 crc kubenswrapper[4830]: I0131 09:26:00.254588 4830 scope.go:117] "RemoveContainer" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0" Jan 31 09:26:00 crc kubenswrapper[4830]: E0131 09:26:00.255496 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:26:03 crc kubenswrapper[4830]: E0131 09:26:03.725183 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Jan 31 09:26:03 crc kubenswrapper[4830]: E0131 09:26:03.726663 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8bh64h597hb4h5fbh58h98hcch594h545h79h5cbh65h56ch55ch656h5c4h5b8hcfh5fh5cfh89h689hb9h89h84h5c9h5dbh98h56dh66ch8fq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zc22h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(4ed170d0-8e88-40c3-a2b4-9908fc87a3db): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:26:03 crc kubenswrapper[4830]: E0131 09:26:03.728078 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="4ed170d0-8e88-40c3-a2b4-9908fc87a3db" Jan 31 09:26:03 crc kubenswrapper[4830]: E0131 09:26:03.891008 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="4ed170d0-8e88-40c3-a2b4-9908fc87a3db" Jan 31 09:26:04 crc kubenswrapper[4830]: I0131 09:26:04.186006 4830 scope.go:117] "RemoveContainer" containerID="d7826742a535cf8bc43a9329d109fa52d874d91f2c42c16b755f433515a7a9c0" Jan 31 09:26:04 crc kubenswrapper[4830]: I0131 09:26:04.329895 4830 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/neutron-cc7d8b455-4zmj7" Jan 31 09:26:04 crc kubenswrapper[4830]: I0131 09:26:04.469571 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59d6cd4869-w2rrr"] Jan 31 09:26:04 crc kubenswrapper[4830]: I0131 09:26:04.470036 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59d6cd4869-w2rrr" podUID="9404af59-7e12-483b-90d0-9ebdc4140cc2" containerName="neutron-api" containerID="cri-o://1cec5eaefe29b55b53814da42acd0c523600e78af1749ea2cf9bbaa730773373" gracePeriod=30 Jan 31 09:26:04 crc kubenswrapper[4830]: I0131 09:26:04.470940 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59d6cd4869-w2rrr" podUID="9404af59-7e12-483b-90d0-9ebdc4140cc2" containerName="neutron-httpd" containerID="cri-o://ebbfc0576c942e0e24080af4a45767ccb924675876b9993065a3eeec34f93cb2" gracePeriod=30 Jan 31 09:26:04 crc kubenswrapper[4830]: I0131 09:26:04.513139 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 09:26:04 crc kubenswrapper[4830]: I0131 09:26:04.513551 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ff4e5fbc-7e45-42b7-8af6-ff34b36bb594" containerName="glance-log" containerID="cri-o://7cac1cf1ee9f45c0bb4a831735025c733474ea6d0e388d5961a9e10c557f87a2" gracePeriod=30 Jan 31 09:26:04 crc kubenswrapper[4830]: I0131 09:26:04.514365 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ff4e5fbc-7e45-42b7-8af6-ff34b36bb594" containerName="glance-httpd" containerID="cri-o://c0f6af8c9ac8c455376ada0690cecdd3516ef4b1a4487609897ae6527c19432d" gracePeriod=30 Jan 31 09:26:04 crc kubenswrapper[4830]: I0131 09:26:04.889820 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd3d398c-5a9f-4835-9b6a-6700097e85ed","Type":"ContainerDied","Data":"43f7094c271b65d5fb5dba1876b7a0276d1dd694dcfe6df091a4418ca3073ed0"} Jan 31 09:26:04 crc kubenswrapper[4830]: I0131 09:26:04.890790 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43f7094c271b65d5fb5dba1876b7a0276d1dd694dcfe6df091a4418ca3073ed0" Jan 31 09:26:04 crc kubenswrapper[4830]: I0131 09:26:04.895293 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7995f9f9fb-6r8k4" event={"ID":"43cbd586-1683-440f-992a-113173028a37","Type":"ContainerDied","Data":"8687d9d0250fd0b21fcf11768f66ecf3bc54a50a638ec7b0dfdb35d0934c31c2"} Jan 31 09:26:04 crc kubenswrapper[4830]: I0131 09:26:04.895336 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8687d9d0250fd0b21fcf11768f66ecf3bc54a50a638ec7b0dfdb35d0934c31c2" Jan 31 09:26:04 crc kubenswrapper[4830]: I0131 09:26:04.898175 4830 generic.go:334] "Generic (PLEG): container finished" podID="ff4e5fbc-7e45-42b7-8af6-ff34b36bb594" containerID="7cac1cf1ee9f45c0bb4a831735025c733474ea6d0e388d5961a9e10c557f87a2" exitCode=143 Jan 31 09:26:04 crc kubenswrapper[4830]: I0131 09:26:04.898211 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594","Type":"ContainerDied","Data":"7cac1cf1ee9f45c0bb4a831735025c733474ea6d0e388d5961a9e10c557f87a2"} Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.408029 4830 util.go:48] "No ready sandbox for pod can 
Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.408029 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7995f9f9fb-6r8k4"
Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.469932 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.552173 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-587fd67997-pvqls"]
Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.564096 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-f57b45989-7xfmm"]
Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.584301 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd3d398c-5a9f-4835-9b6a-6700097e85ed-config-data\") pod \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") "
Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.585952 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-public-tls-certs\") pod \"43cbd586-1683-440f-992a-113173028a37\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") "
Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.586118 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43cbd586-1683-440f-992a-113173028a37-logs\") pod \"43cbd586-1683-440f-992a-113173028a37\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") "
Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.586265 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd3d398c-5a9f-4835-9b6a-6700097e85ed-scripts\") pod \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") "
Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.586378 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd3d398c-5a9f-4835-9b6a-6700097e85ed-run-httpd\") pod \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") "
Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.586519 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-internal-tls-certs\") pod \"43cbd586-1683-440f-992a-113173028a37\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") "
Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.586670 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9wwc\" (UniqueName: \"kubernetes.io/projected/43cbd586-1683-440f-992a-113173028a37-kube-api-access-b9wwc\") pod \"43cbd586-1683-440f-992a-113173028a37\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") "
Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.586814 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-scripts\") pod \"43cbd586-1683-440f-992a-113173028a37\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") "
Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.586938 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fd3d398c-5a9f-4835-9b6a-6700097e85ed-combined-ca-bundle\") pod \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") " Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.587048 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-combined-ca-bundle\") pod \"43cbd586-1683-440f-992a-113173028a37\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") " Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.587028 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43cbd586-1683-440f-992a-113173028a37-logs" (OuterVolumeSpecName: "logs") pod "43cbd586-1683-440f-992a-113173028a37" (UID: "43cbd586-1683-440f-992a-113173028a37"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.587634 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxsw8\" (UniqueName: \"kubernetes.io/projected/fd3d398c-5a9f-4835-9b6a-6700097e85ed-kube-api-access-sxsw8\") pod \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") " Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.587802 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd3d398c-5a9f-4835-9b6a-6700097e85ed-sg-core-conf-yaml\") pod \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") " Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.587988 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-config-data\") pod \"43cbd586-1683-440f-992a-113173028a37\" (UID: \"43cbd586-1683-440f-992a-113173028a37\") " Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.588262 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd3d398c-5a9f-4835-9b6a-6700097e85ed-log-httpd\") pod \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\" (UID: \"fd3d398c-5a9f-4835-9b6a-6700097e85ed\") " Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.591850 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43cbd586-1683-440f-992a-113173028a37-logs\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.600015 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd3d398c-5a9f-4835-9b6a-6700097e85ed-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fd3d398c-5a9f-4835-9b6a-6700097e85ed" (UID: "fd3d398c-5a9f-4835-9b6a-6700097e85ed"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.605864 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd3d398c-5a9f-4835-9b6a-6700097e85ed-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fd3d398c-5a9f-4835-9b6a-6700097e85ed" (UID: "fd3d398c-5a9f-4835-9b6a-6700097e85ed"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.642395 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-scripts" (OuterVolumeSpecName: "scripts") pod "43cbd586-1683-440f-992a-113173028a37" (UID: "43cbd586-1683-440f-992a-113173028a37"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.655179 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43cbd586-1683-440f-992a-113173028a37-kube-api-access-b9wwc" (OuterVolumeSpecName: "kube-api-access-b9wwc") pod "43cbd586-1683-440f-992a-113173028a37" (UID: "43cbd586-1683-440f-992a-113173028a37"). InnerVolumeSpecName "kube-api-access-b9wwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.655310 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd3d398c-5a9f-4835-9b6a-6700097e85ed-scripts" (OuterVolumeSpecName: "scripts") pod "fd3d398c-5a9f-4835-9b6a-6700097e85ed" (UID: "fd3d398c-5a9f-4835-9b6a-6700097e85ed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.711610 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd3d398c-5a9f-4835-9b6a-6700097e85ed-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.720426 4830 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd3d398c-5a9f-4835-9b6a-6700097e85ed-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.720476 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9wwc\" (UniqueName: \"kubernetes.io/projected/43cbd586-1683-440f-992a-113173028a37-kube-api-access-b9wwc\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.720488 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.720498 4830 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd3d398c-5a9f-4835-9b6a-6700097e85ed-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.745542 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd3d398c-5a9f-4835-9b6a-6700097e85ed-kube-api-access-sxsw8" (OuterVolumeSpecName: "kube-api-access-sxsw8") pod "fd3d398c-5a9f-4835-9b6a-6700097e85ed" (UID: "fd3d398c-5a9f-4835-9b6a-6700097e85ed"). InnerVolumeSpecName "kube-api-access-sxsw8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.920472 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxsw8\" (UniqueName: \"kubernetes.io/projected/fd3d398c-5a9f-4835-9b6a-6700097e85ed-kube-api-access-sxsw8\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:05 crc kubenswrapper[4830]: I0131 09:26:05.925841 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-9f575bfb8-72ll7"] Jan 31 09:26:06 crc kubenswrapper[4830]: I0131 09:26:06.014515 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-f57b45989-7xfmm" event={"ID":"de11d1f2-fd91-48c7-9dc3-79748064e53d","Type":"ContainerStarted","Data":"37d0816b932e2a2c94808352db0421b8050f8e545b6ddcdee9e81e61a6f60f44"} Jan 31 09:26:06 crc kubenswrapper[4830]: I0131 09:26:06.016769 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6948bd58db-k47sz"] Jan 31 09:26:06 crc kubenswrapper[4830]: I0131 09:26:06.032936 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-587fd67997-pvqls" event={"ID":"e5627b4b-982b-41c1-8ff9-8ca07513680d","Type":"ContainerStarted","Data":"a77be11b8b426e5fcbbca9cdf9bc97dfdb9a46bfefa1001c18b5f9ee3d60dda7"} Jan 31 09:26:06 crc kubenswrapper[4830]: I0131 09:26:06.037555 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-bcd57748c-bwxdf"] Jan 31 09:26:06 crc kubenswrapper[4830]: I0131 09:26:06.048972 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f44b7d679-6khcx" event={"ID":"f99258ad-5714-491f-bdad-d7196ed9833a","Type":"ContainerStarted","Data":"b9eb61a048f3769f2f13bfd3718dc822266759ed006dc5cda3ace07ebb3e46a4"} Jan 31 09:26:06 crc kubenswrapper[4830]: I0131 09:26:06.062687 4830 generic.go:334] "Generic (PLEG): container finished" podID="9404af59-7e12-483b-90d0-9ebdc4140cc2" containerID="ebbfc0576c942e0e24080af4a45767ccb924675876b9993065a3eeec34f93cb2" exitCode=0 Jan 31 09:26:06 crc kubenswrapper[4830]: I0131 09:26:06.062806 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7995f9f9fb-6r8k4" Jan 31 09:26:06 crc kubenswrapper[4830]: I0131 09:26:06.064112 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d6cd4869-w2rrr" event={"ID":"9404af59-7e12-483b-90d0-9ebdc4140cc2","Type":"ContainerDied","Data":"ebbfc0576c942e0e24080af4a45767ccb924675876b9993065a3eeec34f93cb2"} Jan 31 09:26:06 crc kubenswrapper[4830]: I0131 09:26:06.064193 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 09:26:06 crc kubenswrapper[4830]: W0131 09:26:06.262321 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde2e8918_df90_4e54_8365_e7148dbdbcd1.slice/crio-cca59e1b5e667529610d1c883344c1f8e23707b7c5a3a317953d3a7236b4997c WatchSource:0}: Error finding container cca59e1b5e667529610d1c883344c1f8e23707b7c5a3a317953d3a7236b4997c: Status 404 returned error can't find the container with id cca59e1b5e667529610d1c883344c1f8e23707b7c5a3a317953d3a7236b4997c Jan 31 09:26:06 crc kubenswrapper[4830]: I0131 09:26:06.613255 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-config-data" (OuterVolumeSpecName: "config-data") pod "43cbd586-1683-440f-992a-113173028a37" (UID: "43cbd586-1683-440f-992a-113173028a37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:06 crc kubenswrapper[4830]: I0131 09:26:06.652357 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:06 crc kubenswrapper[4830]: I0131 09:26:06.658312 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd3d398c-5a9f-4835-9b6a-6700097e85ed-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fd3d398c-5a9f-4835-9b6a-6700097e85ed" (UID: "fd3d398c-5a9f-4835-9b6a-6700097e85ed"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:06 crc kubenswrapper[4830]: I0131 09:26:06.749959 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd3d398c-5a9f-4835-9b6a-6700097e85ed-config-data" (OuterVolumeSpecName: "config-data") pod "fd3d398c-5a9f-4835-9b6a-6700097e85ed" (UID: "fd3d398c-5a9f-4835-9b6a-6700097e85ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:06 crc kubenswrapper[4830]: I0131 09:26:06.754887 4830 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd3d398c-5a9f-4835-9b6a-6700097e85ed-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:06 crc kubenswrapper[4830]: I0131 09:26:06.754929 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd3d398c-5a9f-4835-9b6a-6700097e85ed-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:06 crc kubenswrapper[4830]: I0131 09:26:06.773150 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43cbd586-1683-440f-992a-113173028a37" (UID: "43cbd586-1683-440f-992a-113173028a37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:06 crc kubenswrapper[4830]: I0131 09:26:06.787329 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd3d398c-5a9f-4835-9b6a-6700097e85ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd3d398c-5a9f-4835-9b6a-6700097e85ed" (UID: "fd3d398c-5a9f-4835-9b6a-6700097e85ed"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:06 crc kubenswrapper[4830]: I0131 09:26:06.790028 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "43cbd586-1683-440f-992a-113173028a37" (UID: "43cbd586-1683-440f-992a-113173028a37"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:06 crc kubenswrapper[4830]: I0131 09:26:06.822947 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "43cbd586-1683-440f-992a-113173028a37" (UID: "43cbd586-1683-440f-992a-113173028a37"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:06 crc kubenswrapper[4830]: I0131 09:26:06.858154 4830 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:06 crc kubenswrapper[4830]: I0131 09:26:06.858580 4830 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:06 crc kubenswrapper[4830]: I0131 09:26:06.858594 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3d398c-5a9f-4835-9b6a-6700097e85ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:06 crc kubenswrapper[4830]: I0131 09:26:06.858605 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43cbd586-1683-440f-992a-113173028a37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.052793 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7995f9f9fb-6r8k4"] Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.088049 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5db4bc48b8-mphcw" event={"ID":"507f4c57-9369-4487-a575-370014e22eeb","Type":"ContainerStarted","Data":"048b3cace074d7927293bcbc8ec9fae217cdc957e95d9197018a0d1958db2c4b"} Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.088236 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-5db4bc48b8-mphcw" podUID="507f4c57-9369-4487-a575-370014e22eeb" containerName="heat-cfnapi" containerID="cri-o://048b3cace074d7927293bcbc8ec9fae217cdc957e95d9197018a0d1958db2c4b" gracePeriod=60 Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.088411 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5db4bc48b8-mphcw" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.089873 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7995f9f9fb-6r8k4"] Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.096568 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-59478c766f-tgwgd" event={"ID":"e7f604a2-4cc7-4619-846c-51cb5cddffda","Type":"ContainerStarted","Data":"e7fd95d0f71fabbfa5af4f6ed53adf185141ac5ded059ca14ce7a31dcb26ccd9"} Jan 31 09:26:07 crc 
kubenswrapper[4830]: I0131 09:26:07.096713 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-59478c766f-tgwgd" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.096706 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-59478c766f-tgwgd" podUID="e7f604a2-4cc7-4619-846c-51cb5cddffda" containerName="heat-api" containerID="cri-o://e7fd95d0f71fabbfa5af4f6ed53adf185141ac5ded059ca14ce7a31dcb26ccd9" gracePeriod=60 Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.102253 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-9f575bfb8-72ll7" event={"ID":"71e048bc-59e9-496e-8883-5374a863a094","Type":"ContainerStarted","Data":"247ebe34066d018c324f9f68d5c96290ef51e1e05192402ba36cf1b74cf8e146"} Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.114593 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.124380 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6948bd58db-k47sz" event={"ID":"8f3e08be-d310-4474-99fb-d9226ab6eedb","Type":"ContainerStarted","Data":"664eeafbb6c39f134ac2709d86d41cc57703e400362fa62a65b9420e8728ae3d"} Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.134561 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-bcd57748c-bwxdf" event={"ID":"de2e8918-df90-4e54-8365-e7148dbdbcd1","Type":"ContainerStarted","Data":"cca59e1b5e667529610d1c883344c1f8e23707b7c5a3a317953d3a7236b4997c"} Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.180826 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.244777 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:26:07 crc kubenswrapper[4830]: E0131 09:26:07.254337 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e51aef7d-4b7d-44da-8d0f-b0e2b86d2842" containerName="init" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.254380 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e51aef7d-4b7d-44da-8d0f-b0e2b86d2842" containerName="init" Jan 31 09:26:07 crc kubenswrapper[4830]: E0131 09:26:07.254423 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd3d398c-5a9f-4835-9b6a-6700097e85ed" containerName="proxy-httpd" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.254434 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd3d398c-5a9f-4835-9b6a-6700097e85ed" containerName="proxy-httpd" Jan 31 09:26:07 crc kubenswrapper[4830]: E0131 09:26:07.254484 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43cbd586-1683-440f-992a-113173028a37" containerName="placement-api" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.254498 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="43cbd586-1683-440f-992a-113173028a37" containerName="placement-api" Jan 31 09:26:07 crc kubenswrapper[4830]: E0131 09:26:07.254510 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd3d398c-5a9f-4835-9b6a-6700097e85ed" containerName="sg-core" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.254518 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd3d398c-5a9f-4835-9b6a-6700097e85ed" containerName="sg-core" Jan 31 09:26:07 crc kubenswrapper[4830]: E0131 09:26:07.254549 4830 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fd3d398c-5a9f-4835-9b6a-6700097e85ed" containerName="ceilometer-notification-agent" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.254558 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd3d398c-5a9f-4835-9b6a-6700097e85ed" containerName="ceilometer-notification-agent" Jan 31 09:26:07 crc kubenswrapper[4830]: E0131 09:26:07.254590 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd3d398c-5a9f-4835-9b6a-6700097e85ed" containerName="ceilometer-central-agent" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.254605 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd3d398c-5a9f-4835-9b6a-6700097e85ed" containerName="ceilometer-central-agent" Jan 31 09:26:07 crc kubenswrapper[4830]: E0131 09:26:07.254648 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43cbd586-1683-440f-992a-113173028a37" containerName="placement-log" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.254657 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="43cbd586-1683-440f-992a-113173028a37" containerName="placement-log" Jan 31 09:26:07 crc kubenswrapper[4830]: E0131 09:26:07.254687 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e51aef7d-4b7d-44da-8d0f-b0e2b86d2842" containerName="dnsmasq-dns" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.254702 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e51aef7d-4b7d-44da-8d0f-b0e2b86d2842" containerName="dnsmasq-dns" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.255347 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="43cbd586-1683-440f-992a-113173028a37" containerName="placement-log" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.255380 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd3d398c-5a9f-4835-9b6a-6700097e85ed" containerName="sg-core" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.255418 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="e51aef7d-4b7d-44da-8d0f-b0e2b86d2842" containerName="dnsmasq-dns" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.256909 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd3d398c-5a9f-4835-9b6a-6700097e85ed" containerName="ceilometer-central-agent" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.256977 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="43cbd586-1683-440f-992a-113173028a37" containerName="placement-api" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.257013 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd3d398c-5a9f-4835-9b6a-6700097e85ed" containerName="proxy-httpd" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.257040 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd3d398c-5a9f-4835-9b6a-6700097e85ed" containerName="ceilometer-notification-agent" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.297554 4830 util.go:30] "No sandbox for pod can be found. 
Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.297554 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.337067 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.337531 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.342570 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.360271 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5db4bc48b8-mphcw" podStartSLOduration=6.514896501 podStartE2EDuration="29.360244518s" podCreationTimestamp="2026-01-31 09:25:38 +0000 UTC" firstStartedPulling="2026-01-31 09:25:41.099716987 +0000 UTC m=+1485.593079429" lastFinishedPulling="2026-01-31 09:26:03.945065004 +0000 UTC m=+1508.438427446" observedRunningTime="2026-01-31 09:26:07.125131874 +0000 UTC m=+1511.618494326" watchObservedRunningTime="2026-01-31 09:26:07.360244518 +0000 UTC m=+1511.853606960"
Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.374974 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-59478c766f-tgwgd" podStartSLOduration=6.766934906 podStartE2EDuration="29.374942146s" podCreationTimestamp="2026-01-31 09:25:38 +0000 UTC" firstStartedPulling="2026-01-31 09:25:41.358874344 +0000 UTC m=+1485.852236776" lastFinishedPulling="2026-01-31 09:26:03.966881574 +0000 UTC m=+1508.460244016" observedRunningTime="2026-01-31 09:26:07.154105688 +0000 UTC m=+1511.647468130" watchObservedRunningTime="2026-01-31 09:26:07.374942146 +0000 UTC m=+1511.868304588"
Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.387026 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") " pod="openstack/ceilometer-0"
Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.387400 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") " pod="openstack/ceilometer-0"
Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.388247 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-run-httpd\") pod \"ceilometer-0\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") " pod="openstack/ceilometer-0"
Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.388313 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfh7w\" (UniqueName: \"kubernetes.io/projected/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-kube-api-access-qfh7w\") pod \"ceilometer-0\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") " pod="openstack/ceilometer-0"
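The podStartSLOduration figures in the latency-tracker entries above are derivable from the monotonic readings (the m=+... suffixes): for heat-cfnapi-5db4bc48b8-mphcw the pull window is m=+1508.438427446 minus m=+1485.593079429, or 22.845348017s, and subtracting that from the 29.360244518s end-to-end duration gives exactly the reported 6.514896501s, i.e. startup time with image pulling excluded. The same arithmetic reproduces the heat-api-59478c766f-tgwgd figure. Checked in Go:

    package main

    import "fmt"

    // Reproduces the podStartSLOduration arithmetic for
    // heat-cfnapi-5db4bc48b8-mphcw from the entries above, using the
    // monotonic clock readings kubelet logs alongside each wall time.
    func main() {
        const (
            e2e                 = 29.360244518   // podStartE2EDuration, seconds
            firstStartedPulling = 1485.593079429 // m=+ reading
            lastFinishedPulling = 1508.438427446 // m=+ reading
        )
        pull := lastFinishedPulling - firstStartedPulling
        slo := e2e - pull
        fmt.Printf("pull window:  %.9fs\n", pull) // 22.845348017s
        fmt.Printf("SLO duration: %.9fs\n", slo)  // 6.514896501s, as logged
    }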
\"kubernetes.io/secret/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-scripts\") pod \"ceilometer-0\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") " pod="openstack/ceilometer-0" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.388404 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-log-httpd\") pod \"ceilometer-0\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") " pod="openstack/ceilometer-0" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.388452 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-config-data\") pod \"ceilometer-0\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") " pod="openstack/ceilometer-0" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.491276 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-run-httpd\") pod \"ceilometer-0\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") " pod="openstack/ceilometer-0" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.491363 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfh7w\" (UniqueName: \"kubernetes.io/projected/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-kube-api-access-qfh7w\") pod \"ceilometer-0\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") " pod="openstack/ceilometer-0" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.491398 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-scripts\") pod \"ceilometer-0\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") " pod="openstack/ceilometer-0" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.491439 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-log-httpd\") pod \"ceilometer-0\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") " pod="openstack/ceilometer-0" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.491483 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-config-data\") pod \"ceilometer-0\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") " pod="openstack/ceilometer-0" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.491521 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") " pod="openstack/ceilometer-0" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.491688 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") " pod="openstack/ceilometer-0" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.491958 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-log-httpd\") pod \"ceilometer-0\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") " pod="openstack/ceilometer-0" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.492373 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-run-httpd\") pod \"ceilometer-0\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") " pod="openstack/ceilometer-0" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.512487 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") " pod="openstack/ceilometer-0" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.513609 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") " pod="openstack/ceilometer-0" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.514080 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-config-data\") pod \"ceilometer-0\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") " pod="openstack/ceilometer-0" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.514399 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-scripts\") pod \"ceilometer-0\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") " pod="openstack/ceilometer-0" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.524843 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfh7w\" (UniqueName: \"kubernetes.io/projected/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-kube-api-access-qfh7w\") pod \"ceilometer-0\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") " pod="openstack/ceilometer-0" Jan 31 09:26:07 crc kubenswrapper[4830]: I0131 09:26:07.683388 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.175216 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="ff4e5fbc-7e45-42b7-8af6-ff34b36bb594" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.189:9292/healthcheck\": read tcp 10.217.0.2:38190->10.217.0.189:9292: read: connection reset by peer" Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.176215 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="ff4e5fbc-7e45-42b7-8af6-ff34b36bb594" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.189:9292/healthcheck\": read tcp 10.217.0.2:38174->10.217.0.189:9292: read: connection reset by peer" Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.188033 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-f57b45989-7xfmm" event={"ID":"de11d1f2-fd91-48c7-9dc3-79748064e53d","Type":"ContainerStarted","Data":"bffa2c548becd7f946fbb4976e3d447b020e9ffd9c9dcc3e92ea7384d16a0f88"} Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.188113 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-f57b45989-7xfmm" Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.233237 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-587fd67997-pvqls" event={"ID":"e5627b4b-982b-41c1-8ff9-8ca07513680d","Type":"ContainerStarted","Data":"de595e3300571102b1c4fb25a65643a6193ff774aeaf83f24ec9a38461e8a8e6"} Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.233897 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-587fd67997-pvqls" Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.241962 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-f57b45989-7xfmm" podStartSLOduration=17.241931874 podStartE2EDuration="17.241931874s" podCreationTimestamp="2026-01-31 09:25:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:26:08.229250703 +0000 UTC m=+1512.722613145" watchObservedRunningTime="2026-01-31 09:26:08.241931874 +0000 UTC m=+1512.735294316" Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.305185 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43cbd586-1683-440f-992a-113173028a37" path="/var/lib/kubelet/pods/43cbd586-1683-440f-992a-113173028a37/volumes" Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.318337 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd3d398c-5a9f-4835-9b6a-6700097e85ed" path="/var/lib/kubelet/pods/fd3d398c-5a9f-4835-9b6a-6700097e85ed/volumes" Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.322382 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-587fd67997-pvqls" podStartSLOduration=17.32233357 podStartE2EDuration="17.32233357s" podCreationTimestamp="2026-01-31 09:25:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:26:08.267431809 +0000 UTC m=+1512.760794271" watchObservedRunningTime="2026-01-31 09:26:08.32233357 +0000 UTC m=+1512.815696012" Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.335935 4830 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-9f575bfb8-72ll7" event={"ID":"71e048bc-59e9-496e-8883-5374a863a094","Type":"ContainerStarted","Data":"c1a3d28b9afea684a4488518cad58a562bdee227ba35c305368e849581cd3782"} Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.336033 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-9f575bfb8-72ll7" Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.336093 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6948bd58db-k47sz" event={"ID":"8f3e08be-d310-4474-99fb-d9226ab6eedb","Type":"ContainerStarted","Data":"14b8d2b014edebbf2a4fb63beaa402cdcf021e136b2b034bf350b5e7b634d6a7"} Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.336115 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6948bd58db-k47sz" Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.336152 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.336166 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f44b7d679-6khcx" event={"ID":"f99258ad-5714-491f-bdad-d7196ed9833a","Type":"ContainerStarted","Data":"81350577eb03d15b4d5361dda8875d491271fd5fffedb4dbed2e9afc706ba9ee"} Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.336185 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-f44b7d679-6khcx" Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.336197 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-bcd57748c-bwxdf" event={"ID":"de2e8918-df90-4e54-8365-e7148dbdbcd1","Type":"ContainerStarted","Data":"aca1b9c776a715fad2800fffa5c89f0af8ff618c62ffa18884f1656e6031454e"} Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.336259 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-bcd57748c-bwxdf" Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.339525 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-9f575bfb8-72ll7" podStartSLOduration=15.339481177 podStartE2EDuration="15.339481177s" podCreationTimestamp="2026-01-31 09:25:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:26:08.311159222 +0000 UTC m=+1512.804521684" watchObservedRunningTime="2026-01-31 09:26:08.339481177 +0000 UTC m=+1512.832843619" Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.381552 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6948bd58db-k47sz" podStartSLOduration=17.381521913 podStartE2EDuration="17.381521913s" podCreationTimestamp="2026-01-31 09:25:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:26:08.339183629 +0000 UTC m=+1512.832546071" watchObservedRunningTime="2026-01-31 09:26:08.381521913 +0000 UTC m=+1512.874884355" Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.419398 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-f44b7d679-6khcx" podStartSLOduration=25.419362698 podStartE2EDuration="25.419362698s" podCreationTimestamp="2026-01-31 09:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:26:08.371158798 +0000 UTC m=+1512.864521370" watchObservedRunningTime="2026-01-31 09:26:08.419362698 +0000 UTC m=+1512.912725140" Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.432562 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-bcd57748c-bwxdf" podStartSLOduration=15.432532943 podStartE2EDuration="15.432532943s" podCreationTimestamp="2026-01-31 09:25:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:26:08.399261247 +0000 UTC m=+1512.892623689" watchObservedRunningTime="2026-01-31 09:26:08.432532943 +0000 UTC m=+1512.925895385" Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.457846 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:26:08 crc kubenswrapper[4830]: E0131 09:26:08.592360 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f3e08be_d310_4474_99fb_d9226ab6eedb.slice/crio-14b8d2b014edebbf2a4fb63beaa402cdcf021e136b2b034bf350b5e7b634d6a7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f3e08be_d310_4474_99fb_d9226ab6eedb.slice/crio-conmon-14b8d2b014edebbf2a4fb63beaa402cdcf021e136b2b034bf350b5e7b634d6a7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff4e5fbc_7e45_42b7_8af6_ff34b36bb594.slice/crio-conmon-c0f6af8c9ac8c455376ada0690cecdd3516ef4b1a4487609897ae6527c19432d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9404af59_7e12_483b_90d0_9ebdc4140cc2.slice/crio-conmon-1cec5eaefe29b55b53814da42acd0c523600e78af1749ea2cf9bbaa730773373.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff4e5fbc_7e45_42b7_8af6_ff34b36bb594.slice/crio-c0f6af8c9ac8c455376ada0690cecdd3516ef4b1a4487609897ae6527c19432d.scope\": RecentStats: unable to find data in memory cache]" Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.856412 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.857840 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0ec35101-03e3-421d-8799-a7a0b1864b9b" containerName="glance-log" containerID="cri-o://e8d65d3b6016cd3404d2d4f61f24bafc19156c82cbfa0497b23e98f7f4e9893e" gracePeriod=30 Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.859580 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0ec35101-03e3-421d-8799-a7a0b1864b9b" containerName="glance-httpd" containerID="cri-o://be1a139ca62a9ee5841b9e7d34ba9750b8cdc0d9b26aa9a3ed0ba027497b53ec" gracePeriod=30 Jan 31 09:26:08 crc kubenswrapper[4830]: I0131 09:26:08.973071 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.041017 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-59d6cd4869-w2rrr" 
podUID="9404af59-7e12-483b-90d0-9ebdc4140cc2" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.194:9696/\": dial tcp 10.217.0.194:9696: connect: connection refused" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.206204 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.301571 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-combined-ca-bundle\") pod \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.301660 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-httpd-run\") pod \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.301765 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-scripts\") pod \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.302040 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-534498b2-d616-470f-a82d-6fd5620e2438\") pod \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.302194 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xpj\" (UniqueName: \"kubernetes.io/projected/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-kube-api-access-w4xpj\") pod \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.302245 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-config-data\") pod \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.302348 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-logs\") pod \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.302448 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-public-tls-certs\") pod \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\" (UID: \"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594\") " Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.319936 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ff4e5fbc-7e45-42b7-8af6-ff34b36bb594" (UID: "ff4e5fbc-7e45-42b7-8af6-ff34b36bb594"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.320280 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-logs" (OuterVolumeSpecName: "logs") pod "ff4e5fbc-7e45-42b7-8af6-ff34b36bb594" (UID: "ff4e5fbc-7e45-42b7-8af6-ff34b36bb594"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.418640 4830 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.418682 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-logs\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.433337 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-scripts" (OuterVolumeSpecName: "scripts") pod "ff4e5fbc-7e45-42b7-8af6-ff34b36bb594" (UID: "ff4e5fbc-7e45-42b7-8af6-ff34b36bb594"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.460024 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-kube-api-access-w4xpj" (OuterVolumeSpecName: "kube-api-access-w4xpj") pod "ff4e5fbc-7e45-42b7-8af6-ff34b36bb594" (UID: "ff4e5fbc-7e45-42b7-8af6-ff34b36bb594"). InnerVolumeSpecName "kube-api-access-w4xpj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.486238 4830 generic.go:334] "Generic (PLEG): container finished" podID="8f3e08be-d310-4474-99fb-d9226ab6eedb" containerID="14b8d2b014edebbf2a4fb63beaa402cdcf021e136b2b034bf350b5e7b634d6a7" exitCode=1 Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.486354 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6948bd58db-k47sz" event={"ID":"8f3e08be-d310-4474-99fb-d9226ab6eedb","Type":"ContainerDied","Data":"14b8d2b014edebbf2a4fb63beaa402cdcf021e136b2b034bf350b5e7b634d6a7"} Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.514807 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-sl874"] Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.516382 4830 scope.go:117] "RemoveContainer" containerID="14b8d2b014edebbf2a4fb63beaa402cdcf021e136b2b034bf350b5e7b634d6a7" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.565569 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xpj\" (UniqueName: \"kubernetes.io/projected/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-kube-api-access-w4xpj\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.565605 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:09 crc kubenswrapper[4830]: E0131 09:26:09.602128 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff4e5fbc-7e45-42b7-8af6-ff34b36bb594" containerName="glance-httpd" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.602183 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff4e5fbc-7e45-42b7-8af6-ff34b36bb594" containerName="glance-httpd" Jan 31 09:26:09 crc kubenswrapper[4830]: E0131 09:26:09.602207 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff4e5fbc-7e45-42b7-8af6-ff34b36bb594" containerName="glance-log" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.602215 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff4e5fbc-7e45-42b7-8af6-ff34b36bb594" containerName="glance-log" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.602776 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff4e5fbc-7e45-42b7-8af6-ff34b36bb594" containerName="glance-log" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.602824 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff4e5fbc-7e45-42b7-8af6-ff34b36bb594" containerName="glance-httpd" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.603988 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-sl874" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.697371 4830 generic.go:334] "Generic (PLEG): container finished" podID="ff4e5fbc-7e45-42b7-8af6-ff34b36bb594" containerID="c0f6af8c9ac8c455376ada0690cecdd3516ef4b1a4487609897ae6527c19432d" exitCode=0 Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.697503 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594","Type":"ContainerDied","Data":"c0f6af8c9ac8c455376ada0690cecdd3516ef4b1a4487609897ae6527c19432d"} Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.697542 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff4e5fbc-7e45-42b7-8af6-ff34b36bb594","Type":"ContainerDied","Data":"84929e956f010095fa7add1e8963285c7075dd230723bdf7490ef2a4f736eb55"} Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.697561 4830 scope.go:117] "RemoveContainer" containerID="c0f6af8c9ac8c455376ada0690cecdd3516ef4b1a4487609897ae6527c19432d" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.697792 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.709912 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-sl874"] Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.771749 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246","Type":"ContainerStarted","Data":"bdbd802b8245c47749f2f516fa912b95812c54407e3c12685691aed1de7ac3b4"} Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.825114 4830 generic.go:334] "Generic (PLEG): container finished" podID="0ec35101-03e3-421d-8799-a7a0b1864b9b" containerID="e8d65d3b6016cd3404d2d4f61f24bafc19156c82cbfa0497b23e98f7f4e9893e" exitCode=143 Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.825239 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0ec35101-03e3-421d-8799-a7a0b1864b9b","Type":"ContainerDied","Data":"e8d65d3b6016cd3404d2d4f61f24bafc19156c82cbfa0497b23e98f7f4e9893e"} Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.829491 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjn7r\" (UniqueName: \"kubernetes.io/projected/631e221b-b504-4f59-8848-c9427f67c0df-kube-api-access-kjn7r\") pod \"nova-api-db-create-sl874\" (UID: \"631e221b-b504-4f59-8848-c9427f67c0df\") " pod="openstack/nova-api-db-create-sl874" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.829604 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/631e221b-b504-4f59-8848-c9427f67c0df-operator-scripts\") pod \"nova-api-db-create-sl874\" (UID: \"631e221b-b504-4f59-8848-c9427f67c0df\") " pod="openstack/nova-api-db-create-sl874" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.834269 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-config-data" (OuterVolumeSpecName: "config-data") pod "ff4e5fbc-7e45-42b7-8af6-ff34b36bb594" (UID: "ff4e5fbc-7e45-42b7-8af6-ff34b36bb594"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.836789 4830 generic.go:334] "Generic (PLEG): container finished" podID="9404af59-7e12-483b-90d0-9ebdc4140cc2" containerID="1cec5eaefe29b55b53814da42acd0c523600e78af1749ea2cf9bbaa730773373" exitCode=0 Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.836880 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d6cd4869-w2rrr" event={"ID":"9404af59-7e12-483b-90d0-9ebdc4140cc2","Type":"ContainerDied","Data":"1cec5eaefe29b55b53814da42acd0c523600e78af1749ea2cf9bbaa730773373"} Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.845016 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff4e5fbc-7e45-42b7-8af6-ff34b36bb594" (UID: "ff4e5fbc-7e45-42b7-8af6-ff34b36bb594"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.879207 4830 generic.go:334] "Generic (PLEG): container finished" podID="de11d1f2-fd91-48c7-9dc3-79748064e53d" containerID="bffa2c548becd7f946fbb4976e3d447b020e9ffd9c9dcc3e92ea7384d16a0f88" exitCode=1 Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.881295 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-f57b45989-7xfmm" event={"ID":"de11d1f2-fd91-48c7-9dc3-79748064e53d","Type":"ContainerDied","Data":"bffa2c548becd7f946fbb4976e3d447b020e9ffd9c9dcc3e92ea7384d16a0f88"} Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.881783 4830 scope.go:117] "RemoveContainer" containerID="bffa2c548becd7f946fbb4976e3d447b020e9ffd9c9dcc3e92ea7384d16a0f88" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.924836 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-v2w79"] Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.927140 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-v2w79" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.933157 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjn7r\" (UniqueName: \"kubernetes.io/projected/631e221b-b504-4f59-8848-c9427f67c0df-kube-api-access-kjn7r\") pod \"nova-api-db-create-sl874\" (UID: \"631e221b-b504-4f59-8848-c9427f67c0df\") " pod="openstack/nova-api-db-create-sl874" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.933289 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/631e221b-b504-4f59-8848-c9427f67c0df-operator-scripts\") pod \"nova-api-db-create-sl874\" (UID: \"631e221b-b504-4f59-8848-c9427f67c0df\") " pod="openstack/nova-api-db-create-sl874" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.933378 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.933654 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.934705 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/631e221b-b504-4f59-8848-c9427f67c0df-operator-scripts\") pod \"nova-api-db-create-sl874\" (UID: \"631e221b-b504-4f59-8848-c9427f67c0df\") " pod="openstack/nova-api-db-create-sl874" Jan 31 09:26:09 crc kubenswrapper[4830]: I0131 09:26:09.975644 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ff4e5fbc-7e45-42b7-8af6-ff34b36bb594" (UID: "ff4e5fbc-7e45-42b7-8af6-ff34b36bb594"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.004787 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-v2w79"] Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.006558 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjn7r\" (UniqueName: \"kubernetes.io/projected/631e221b-b504-4f59-8848-c9427f67c0df-kube-api-access-kjn7r\") pod \"nova-api-db-create-sl874\" (UID: \"631e221b-b504-4f59-8848-c9427f67c0df\") " pod="openstack/nova-api-db-create-sl874" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.036535 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj57x\" (UniqueName: \"kubernetes.io/projected/ced551a8-224d-488b-aa58-c424e387ccca-kube-api-access-qj57x\") pod \"nova-cell0-db-create-v2w79\" (UID: \"ced551a8-224d-488b-aa58-c424e387ccca\") " pod="openstack/nova-cell0-db-create-v2w79" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.036820 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ced551a8-224d-488b-aa58-c424e387ccca-operator-scripts\") pod \"nova-cell0-db-create-v2w79\" (UID: \"ced551a8-224d-488b-aa58-c424e387ccca\") " pod="openstack/nova-cell0-db-create-v2w79" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.036955 4830 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.060880 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-ttdxz"] Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.063109 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-ttdxz" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.082507 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-ttdxz"] Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.101256 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-184b-account-create-update-gwvpg"] Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.110136 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-184b-account-create-update-gwvpg" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.115417 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.121927 4830 scope.go:117] "RemoveContainer" containerID="7cac1cf1ee9f45c0bb4a831735025c733474ea6d0e388d5961a9e10c557f87a2" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.137749 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-184b-account-create-update-gwvpg"] Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.145247 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj57x\" (UniqueName: \"kubernetes.io/projected/ced551a8-224d-488b-aa58-c424e387ccca-kube-api-access-qj57x\") pod \"nova-cell0-db-create-v2w79\" (UID: \"ced551a8-224d-488b-aa58-c424e387ccca\") " pod="openstack/nova-cell0-db-create-v2w79" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.145554 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ced551a8-224d-488b-aa58-c424e387ccca-operator-scripts\") pod \"nova-cell0-db-create-v2w79\" (UID: \"ced551a8-224d-488b-aa58-c424e387ccca\") " pod="openstack/nova-cell0-db-create-v2w79" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.146671 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ced551a8-224d-488b-aa58-c424e387ccca-operator-scripts\") pod \"nova-cell0-db-create-v2w79\" (UID: \"ced551a8-224d-488b-aa58-c424e387ccca\") " pod="openstack/nova-cell0-db-create-v2w79" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.151566 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-sl874" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.180595 4830 scope.go:117] "RemoveContainer" containerID="c0f6af8c9ac8c455376ada0690cecdd3516ef4b1a4487609897ae6527c19432d" Jan 31 09:26:10 crc kubenswrapper[4830]: E0131 09:26:10.183145 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0f6af8c9ac8c455376ada0690cecdd3516ef4b1a4487609897ae6527c19432d\": container with ID starting with c0f6af8c9ac8c455376ada0690cecdd3516ef4b1a4487609897ae6527c19432d not found: ID does not exist" containerID="c0f6af8c9ac8c455376ada0690cecdd3516ef4b1a4487609897ae6527c19432d" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.183187 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0f6af8c9ac8c455376ada0690cecdd3516ef4b1a4487609897ae6527c19432d"} err="failed to get container status \"c0f6af8c9ac8c455376ada0690cecdd3516ef4b1a4487609897ae6527c19432d\": rpc error: code = NotFound desc = could not find container \"c0f6af8c9ac8c455376ada0690cecdd3516ef4b1a4487609897ae6527c19432d\": container with ID starting with c0f6af8c9ac8c455376ada0690cecdd3516ef4b1a4487609897ae6527c19432d not found: ID does not exist" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.183215 4830 scope.go:117] "RemoveContainer" containerID="7cac1cf1ee9f45c0bb4a831735025c733474ea6d0e388d5961a9e10c557f87a2" Jan 31 09:26:10 crc kubenswrapper[4830]: E0131 09:26:10.183658 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cac1cf1ee9f45c0bb4a831735025c733474ea6d0e388d5961a9e10c557f87a2\": container with ID starting with 7cac1cf1ee9f45c0bb4a831735025c733474ea6d0e388d5961a9e10c557f87a2 not found: ID does not exist" containerID="7cac1cf1ee9f45c0bb4a831735025c733474ea6d0e388d5961a9e10c557f87a2" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.183689 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cac1cf1ee9f45c0bb4a831735025c733474ea6d0e388d5961a9e10c557f87a2"} err="failed to get container status \"7cac1cf1ee9f45c0bb4a831735025c733474ea6d0e388d5961a9e10c557f87a2\": rpc error: code = NotFound desc = could not find container \"7cac1cf1ee9f45c0bb4a831735025c733474ea6d0e388d5961a9e10c557f87a2\": container with ID starting with 7cac1cf1ee9f45c0bb4a831735025c733474ea6d0e388d5961a9e10c557f87a2 not found: ID does not exist" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.188518 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-534498b2-d616-470f-a82d-6fd5620e2438" (OuterVolumeSpecName: "glance") pod "ff4e5fbc-7e45-42b7-8af6-ff34b36bb594" (UID: "ff4e5fbc-7e45-42b7-8af6-ff34b36bb594"). InnerVolumeSpecName "pvc-534498b2-d616-470f-a82d-6fd5620e2438". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.209860 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj57x\" (UniqueName: \"kubernetes.io/projected/ced551a8-224d-488b-aa58-c424e387ccca-kube-api-access-qj57x\") pod \"nova-cell0-db-create-v2w79\" (UID: \"ced551a8-224d-488b-aa58-c424e387ccca\") " pod="openstack/nova-cell0-db-create-v2w79" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.212564 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59d6cd4869-w2rrr" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.231581 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-61ea-account-create-update-8wzt7"] Jan 31 09:26:10 crc kubenswrapper[4830]: E0131 09:26:10.232213 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9404af59-7e12-483b-90d0-9ebdc4140cc2" containerName="neutron-api" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.232233 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="9404af59-7e12-483b-90d0-9ebdc4140cc2" containerName="neutron-api" Jan 31 09:26:10 crc kubenswrapper[4830]: E0131 09:26:10.232250 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9404af59-7e12-483b-90d0-9ebdc4140cc2" containerName="neutron-httpd" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.232258 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="9404af59-7e12-483b-90d0-9ebdc4140cc2" containerName="neutron-httpd" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.232512 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="9404af59-7e12-483b-90d0-9ebdc4140cc2" containerName="neutron-api" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.232546 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="9404af59-7e12-483b-90d0-9ebdc4140cc2" containerName="neutron-httpd" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.233455 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-61ea-account-create-update-8wzt7" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.236481 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.267036 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9378cf4e-8ab3-4e97-8955-158a9b0c4c26-operator-scripts\") pod \"nova-api-184b-account-create-update-gwvpg\" (UID: \"9378cf4e-8ab3-4e97-8955-158a9b0c4c26\") " pod="openstack/nova-api-184b-account-create-update-gwvpg" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.267161 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89ec8e55-13ac-45e1-b5a6-b38ee34a1702-operator-scripts\") pod \"nova-cell1-db-create-ttdxz\" (UID: \"89ec8e55-13ac-45e1-b5a6-b38ee34a1702\") " pod="openstack/nova-cell1-db-create-ttdxz" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.267493 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckkc2\" (UniqueName: \"kubernetes.io/projected/89ec8e55-13ac-45e1-b5a6-b38ee34a1702-kube-api-access-ckkc2\") pod \"nova-cell1-db-create-ttdxz\" (UID: \"89ec8e55-13ac-45e1-b5a6-b38ee34a1702\") " pod="openstack/nova-cell1-db-create-ttdxz" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.267658 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnfkz\" (UniqueName: \"kubernetes.io/projected/9378cf4e-8ab3-4e97-8955-158a9b0c4c26-kube-api-access-pnfkz\") pod \"nova-api-184b-account-create-update-gwvpg\" (UID: \"9378cf4e-8ab3-4e97-8955-158a9b0c4c26\") " pod="openstack/nova-api-184b-account-create-update-gwvpg" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 
09:26:10.267821 4830 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-534498b2-d616-470f-a82d-6fd5620e2438\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-534498b2-d616-470f-a82d-6fd5620e2438\") on node \"crc\" " Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.305255 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-v2w79" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.370242 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-internal-tls-certs\") pod \"9404af59-7e12-483b-90d0-9ebdc4140cc2\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.370305 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-combined-ca-bundle\") pod \"9404af59-7e12-483b-90d0-9ebdc4140cc2\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.370383 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-ovndb-tls-certs\") pod \"9404af59-7e12-483b-90d0-9ebdc4140cc2\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.370430 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-httpd-config\") pod \"9404af59-7e12-483b-90d0-9ebdc4140cc2\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.370501 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-public-tls-certs\") pod \"9404af59-7e12-483b-90d0-9ebdc4140cc2\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.370545 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nxhr\" (UniqueName: \"kubernetes.io/projected/9404af59-7e12-483b-90d0-9ebdc4140cc2-kube-api-access-8nxhr\") pod \"9404af59-7e12-483b-90d0-9ebdc4140cc2\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.370594 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-config\") pod \"9404af59-7e12-483b-90d0-9ebdc4140cc2\" (UID: \"9404af59-7e12-483b-90d0-9ebdc4140cc2\") " Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.371005 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckkc2\" (UniqueName: \"kubernetes.io/projected/89ec8e55-13ac-45e1-b5a6-b38ee34a1702-kube-api-access-ckkc2\") pod \"nova-cell1-db-create-ttdxz\" (UID: \"89ec8e55-13ac-45e1-b5a6-b38ee34a1702\") " pod="openstack/nova-cell1-db-create-ttdxz" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.371236 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnfkz\" (UniqueName: 
\"kubernetes.io/projected/9378cf4e-8ab3-4e97-8955-158a9b0c4c26-kube-api-access-pnfkz\") pod \"nova-api-184b-account-create-update-gwvpg\" (UID: \"9378cf4e-8ab3-4e97-8955-158a9b0c4c26\") " pod="openstack/nova-api-184b-account-create-update-gwvpg" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.371392 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdhsj\" (UniqueName: \"kubernetes.io/projected/fb75ec26-4fae-4778-b520-828660b869cb-kube-api-access-zdhsj\") pod \"nova-cell0-61ea-account-create-update-8wzt7\" (UID: \"fb75ec26-4fae-4778-b520-828660b869cb\") " pod="openstack/nova-cell0-61ea-account-create-update-8wzt7" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.371451 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9378cf4e-8ab3-4e97-8955-158a9b0c4c26-operator-scripts\") pod \"nova-api-184b-account-create-update-gwvpg\" (UID: \"9378cf4e-8ab3-4e97-8955-158a9b0c4c26\") " pod="openstack/nova-api-184b-account-create-update-gwvpg" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.371500 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89ec8e55-13ac-45e1-b5a6-b38ee34a1702-operator-scripts\") pod \"nova-cell1-db-create-ttdxz\" (UID: \"89ec8e55-13ac-45e1-b5a6-b38ee34a1702\") " pod="openstack/nova-cell1-db-create-ttdxz" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.371528 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb75ec26-4fae-4778-b520-828660b869cb-operator-scripts\") pod \"nova-cell0-61ea-account-create-update-8wzt7\" (UID: \"fb75ec26-4fae-4778-b520-828660b869cb\") " pod="openstack/nova-cell0-61ea-account-create-update-8wzt7" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.383070 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9378cf4e-8ab3-4e97-8955-158a9b0c4c26-operator-scripts\") pod \"nova-api-184b-account-create-update-gwvpg\" (UID: \"9378cf4e-8ab3-4e97-8955-158a9b0c4c26\") " pod="openstack/nova-api-184b-account-create-update-gwvpg" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.388634 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89ec8e55-13ac-45e1-b5a6-b38ee34a1702-operator-scripts\") pod \"nova-cell1-db-create-ttdxz\" (UID: \"89ec8e55-13ac-45e1-b5a6-b38ee34a1702\") " pod="openstack/nova-cell1-db-create-ttdxz" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.447234 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9404af59-7e12-483b-90d0-9ebdc4140cc2-kube-api-access-8nxhr" (OuterVolumeSpecName: "kube-api-access-8nxhr") pod "9404af59-7e12-483b-90d0-9ebdc4140cc2" (UID: "9404af59-7e12-483b-90d0-9ebdc4140cc2"). InnerVolumeSpecName "kube-api-access-8nxhr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.461867 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnfkz\" (UniqueName: \"kubernetes.io/projected/9378cf4e-8ab3-4e97-8955-158a9b0c4c26-kube-api-access-pnfkz\") pod \"nova-api-184b-account-create-update-gwvpg\" (UID: \"9378cf4e-8ab3-4e97-8955-158a9b0c4c26\") " pod="openstack/nova-api-184b-account-create-update-gwvpg" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.496541 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "9404af59-7e12-483b-90d0-9ebdc4140cc2" (UID: "9404af59-7e12-483b-90d0-9ebdc4140cc2"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.525275 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-184b-account-create-update-gwvpg" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.551189 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdhsj\" (UniqueName: \"kubernetes.io/projected/fb75ec26-4fae-4778-b520-828660b869cb-kube-api-access-zdhsj\") pod \"nova-cell0-61ea-account-create-update-8wzt7\" (UID: \"fb75ec26-4fae-4778-b520-828660b869cb\") " pod="openstack/nova-cell0-61ea-account-create-update-8wzt7" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.554601 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb75ec26-4fae-4778-b520-828660b869cb-operator-scripts\") pod \"nova-cell0-61ea-account-create-update-8wzt7\" (UID: \"fb75ec26-4fae-4778-b520-828660b869cb\") " pod="openstack/nova-cell0-61ea-account-create-update-8wzt7" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.557701 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckkc2\" (UniqueName: \"kubernetes.io/projected/89ec8e55-13ac-45e1-b5a6-b38ee34a1702-kube-api-access-ckkc2\") pod \"nova-cell1-db-create-ttdxz\" (UID: \"89ec8e55-13ac-45e1-b5a6-b38ee34a1702\") " pod="openstack/nova-cell1-db-create-ttdxz" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.566517 4830 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.572207 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nxhr\" (UniqueName: \"kubernetes.io/projected/9404af59-7e12-483b-90d0-9ebdc4140cc2-kube-api-access-8nxhr\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.581028 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb75ec26-4fae-4778-b520-828660b869cb-operator-scripts\") pod \"nova-cell0-61ea-account-create-update-8wzt7\" (UID: \"fb75ec26-4fae-4778-b520-828660b869cb\") " pod="openstack/nova-cell0-61ea-account-create-update-8wzt7" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.618854 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdhsj\" (UniqueName: 
\"kubernetes.io/projected/fb75ec26-4fae-4778-b520-828660b869cb-kube-api-access-zdhsj\") pod \"nova-cell0-61ea-account-create-update-8wzt7\" (UID: \"fb75ec26-4fae-4778-b520-828660b869cb\") " pod="openstack/nova-cell0-61ea-account-create-update-8wzt7" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.621130 4830 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.621446 4830 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-534498b2-d616-470f-a82d-6fd5620e2438" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-534498b2-d616-470f-a82d-6fd5620e2438") on node "crc" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.652024 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-61ea-account-create-update-8wzt7" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.680763 4830 reconciler_common.go:293] "Volume detached for volume \"pvc-534498b2-d616-470f-a82d-6fd5620e2438\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-534498b2-d616-470f-a82d-6fd5620e2438\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.716556 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-ttdxz" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.797704 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-f44b7d679-6khcx" podUID="f99258ad-5714-491f-bdad-d7196ed9833a" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.895957 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-config" (OuterVolumeSpecName: "config") pod "9404af59-7e12-483b-90d0-9ebdc4140cc2" (UID: "9404af59-7e12-483b-90d0-9ebdc4140cc2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.930173 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9404af59-7e12-483b-90d0-9ebdc4140cc2" (UID: "9404af59-7e12-483b-90d0-9ebdc4140cc2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.971973 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9404af59-7e12-483b-90d0-9ebdc4140cc2" (UID: "9404af59-7e12-483b-90d0-9ebdc4140cc2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.982297 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9404af59-7e12-483b-90d0-9ebdc4140cc2" (UID: "9404af59-7e12-483b-90d0-9ebdc4140cc2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.985098 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59d6cd4869-w2rrr" Jan 31 09:26:10 crc kubenswrapper[4830]: I0131 09:26:10.985380 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "9404af59-7e12-483b-90d0-9ebdc4140cc2" (UID: "9404af59-7e12-483b-90d0-9ebdc4140cc2"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.005568 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.005632 4830 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.005650 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.005664 4830 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.005703 4830 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9404af59-7e12-483b-90d0-9ebdc4140cc2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.092913 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-61ea-account-create-update-8wzt7"] Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.092978 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d6cd4869-w2rrr" event={"ID":"9404af59-7e12-483b-90d0-9ebdc4140cc2","Type":"ContainerDied","Data":"bd674d6826529c7c8a216cc6649c22beca723f2881ec0549ff5e3b4f031f896a"} Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.093014 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6948bd58db-k47sz" event={"ID":"8f3e08be-d310-4474-99fb-d9226ab6eedb","Type":"ContainerStarted","Data":"12da4461a7423985fb15fa6a8127510011961b290ffafb67a07ee27348a2d699"} Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.093046 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-6858-account-create-update-mxmt7"] Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.095206 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6858-account-create-update-mxmt7"] Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.095320 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6858-account-create-update-mxmt7" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.096659 4830 scope.go:117] "RemoveContainer" containerID="ebbfc0576c942e0e24080af4a45767ccb924675876b9993065a3eeec34f93cb2" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.104804 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.142368 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-sl874"] Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.198001 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.199991 4830 scope.go:117] "RemoveContainer" containerID="1cec5eaefe29b55b53814da42acd0c523600e78af1749ea2cf9bbaa730773373" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.236347 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.239206 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6090d149-6116-4ccf-981f-67ad48e42a1f-operator-scripts\") pod \"nova-cell1-6858-account-create-update-mxmt7\" (UID: \"6090d149-6116-4ccf-981f-67ad48e42a1f\") " pod="openstack/nova-cell1-6858-account-create-update-mxmt7" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.239419 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9x5k\" (UniqueName: \"kubernetes.io/projected/6090d149-6116-4ccf-981f-67ad48e42a1f-kube-api-access-h9x5k\") pod \"nova-cell1-6858-account-create-update-mxmt7\" (UID: \"6090d149-6116-4ccf-981f-67ad48e42a1f\") " pod="openstack/nova-cell1-6858-account-create-update-mxmt7" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.317464 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.342657 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9x5k\" (UniqueName: \"kubernetes.io/projected/6090d149-6116-4ccf-981f-67ad48e42a1f-kube-api-access-h9x5k\") pod \"nova-cell1-6858-account-create-update-mxmt7\" (UID: \"6090d149-6116-4ccf-981f-67ad48e42a1f\") " pod="openstack/nova-cell1-6858-account-create-update-mxmt7" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.342861 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6090d149-6116-4ccf-981f-67ad48e42a1f-operator-scripts\") pod \"nova-cell1-6858-account-create-update-mxmt7\" (UID: \"6090d149-6116-4ccf-981f-67ad48e42a1f\") " pod="openstack/nova-cell1-6858-account-create-update-mxmt7" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.347831 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6090d149-6116-4ccf-981f-67ad48e42a1f-operator-scripts\") pod \"nova-cell1-6858-account-create-update-mxmt7\" (UID: \"6090d149-6116-4ccf-981f-67ad48e42a1f\") " pod="openstack/nova-cell1-6858-account-create-update-mxmt7" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.350975 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.352515 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.363553 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.365070 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.405313 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-v2w79"] Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.427268 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9x5k\" (UniqueName: \"kubernetes.io/projected/6090d149-6116-4ccf-981f-67ad48e42a1f-kube-api-access-h9x5k\") pod \"nova-cell1-6858-account-create-update-mxmt7\" (UID: \"6090d149-6116-4ccf-981f-67ad48e42a1f\") " pod="openstack/nova-cell1-6858-account-create-update-mxmt7" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.467623 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59d6cd4869-w2rrr"] Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.481345 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6858-account-create-update-mxmt7" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.485658 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-59d6cd4869-w2rrr"] Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.549698 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0e63470a-95b6-4653-b917-ed1f8ff66466-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0e63470a-95b6-4653-b917-ed1f8ff66466\") " pod="openstack/glance-default-external-api-0" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.549777 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e63470a-95b6-4653-b917-ed1f8ff66466-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0e63470a-95b6-4653-b917-ed1f8ff66466\") " pod="openstack/glance-default-external-api-0" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.549864 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-534498b2-d616-470f-a82d-6fd5620e2438\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-534498b2-d616-470f-a82d-6fd5620e2438\") pod \"glance-default-external-api-0\" (UID: \"0e63470a-95b6-4653-b917-ed1f8ff66466\") " pod="openstack/glance-default-external-api-0" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.549907 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e63470a-95b6-4653-b917-ed1f8ff66466-scripts\") pod \"glance-default-external-api-0\" (UID: \"0e63470a-95b6-4653-b917-ed1f8ff66466\") " pod="openstack/glance-default-external-api-0" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.549963 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0e63470a-95b6-4653-b917-ed1f8ff66466-config-data\") pod \"glance-default-external-api-0\" (UID: \"0e63470a-95b6-4653-b917-ed1f8ff66466\") " pod="openstack/glance-default-external-api-0" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.550004 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e63470a-95b6-4653-b917-ed1f8ff66466-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0e63470a-95b6-4653-b917-ed1f8ff66466\") " pod="openstack/glance-default-external-api-0" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.550059 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e63470a-95b6-4653-b917-ed1f8ff66466-logs\") pod \"glance-default-external-api-0\" (UID: \"0e63470a-95b6-4653-b917-ed1f8ff66466\") " pod="openstack/glance-default-external-api-0" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.550120 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnvcx\" (UniqueName: \"kubernetes.io/projected/0e63470a-95b6-4653-b917-ed1f8ff66466-kube-api-access-cnvcx\") pod \"glance-default-external-api-0\" (UID: \"0e63470a-95b6-4653-b917-ed1f8ff66466\") " pod="openstack/glance-default-external-api-0" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.657169 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e63470a-95b6-4653-b917-ed1f8ff66466-scripts\") pod \"glance-default-external-api-0\" (UID: \"0e63470a-95b6-4653-b917-ed1f8ff66466\") " pod="openstack/glance-default-external-api-0" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.657427 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e63470a-95b6-4653-b917-ed1f8ff66466-config-data\") pod \"glance-default-external-api-0\" (UID: \"0e63470a-95b6-4653-b917-ed1f8ff66466\") " pod="openstack/glance-default-external-api-0" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.657576 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e63470a-95b6-4653-b917-ed1f8ff66466-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0e63470a-95b6-4653-b917-ed1f8ff66466\") " pod="openstack/glance-default-external-api-0" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.657656 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e63470a-95b6-4653-b917-ed1f8ff66466-logs\") pod \"glance-default-external-api-0\" (UID: \"0e63470a-95b6-4653-b917-ed1f8ff66466\") " pod="openstack/glance-default-external-api-0" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.657875 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnvcx\" (UniqueName: \"kubernetes.io/projected/0e63470a-95b6-4653-b917-ed1f8ff66466-kube-api-access-cnvcx\") pod \"glance-default-external-api-0\" (UID: \"0e63470a-95b6-4653-b917-ed1f8ff66466\") " pod="openstack/glance-default-external-api-0" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.658036 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/0e63470a-95b6-4653-b917-ed1f8ff66466-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0e63470a-95b6-4653-b917-ed1f8ff66466\") " pod="openstack/glance-default-external-api-0" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.658091 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e63470a-95b6-4653-b917-ed1f8ff66466-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0e63470a-95b6-4653-b917-ed1f8ff66466\") " pod="openstack/glance-default-external-api-0" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.661434 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-534498b2-d616-470f-a82d-6fd5620e2438\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-534498b2-d616-470f-a82d-6fd5620e2438\") pod \"glance-default-external-api-0\" (UID: \"0e63470a-95b6-4653-b917-ed1f8ff66466\") " pod="openstack/glance-default-external-api-0" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.663111 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e63470a-95b6-4653-b917-ed1f8ff66466-logs\") pod \"glance-default-external-api-0\" (UID: \"0e63470a-95b6-4653-b917-ed1f8ff66466\") " pod="openstack/glance-default-external-api-0" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.663373 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0e63470a-95b6-4653-b917-ed1f8ff66466-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0e63470a-95b6-4653-b917-ed1f8ff66466\") " pod="openstack/glance-default-external-api-0" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.678535 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e63470a-95b6-4653-b917-ed1f8ff66466-scripts\") pod \"glance-default-external-api-0\" (UID: \"0e63470a-95b6-4653-b917-ed1f8ff66466\") " pod="openstack/glance-default-external-api-0" Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.679461 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.679598 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-534498b2-d616-470f-a82d-6fd5620e2438\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-534498b2-d616-470f-a82d-6fd5620e2438\") pod \"glance-default-external-api-0\" (UID: \"0e63470a-95b6-4653-b917-ed1f8ff66466\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b2b5f65c3ddebbe1a693d29972c24b4f4a39793430c5c2cc47acd10e0b700ef0/globalmount\"" pod="openstack/glance-default-external-api-0"
Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.687660 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e63470a-95b6-4653-b917-ed1f8ff66466-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0e63470a-95b6-4653-b917-ed1f8ff66466\") " pod="openstack/glance-default-external-api-0"
Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.697769 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnvcx\" (UniqueName: \"kubernetes.io/projected/0e63470a-95b6-4653-b917-ed1f8ff66466-kube-api-access-cnvcx\") pod \"glance-default-external-api-0\" (UID: \"0e63470a-95b6-4653-b917-ed1f8ff66466\") " pod="openstack/glance-default-external-api-0"
Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.715606 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e63470a-95b6-4653-b917-ed1f8ff66466-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0e63470a-95b6-4653-b917-ed1f8ff66466\") " pod="openstack/glance-default-external-api-0"
Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.717008 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e63470a-95b6-4653-b917-ed1f8ff66466-config-data\") pod \"glance-default-external-api-0\" (UID: \"0e63470a-95b6-4653-b917-ed1f8ff66466\") " pod="openstack/glance-default-external-api-0"
Jan 31 09:26:11 crc kubenswrapper[4830]: I0131 09:26:11.751702 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-6948bd58db-k47sz"
Jan 31 09:26:12 crc kubenswrapper[4830]: I0131 09:26:12.001336 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-f57b45989-7xfmm"
Jan 31 09:26:12 crc kubenswrapper[4830]: I0131 09:26:12.013009 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-184b-account-create-update-gwvpg"]
Jan 31 09:26:12 crc kubenswrapper[4830]: I0131 09:26:12.044367 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-61ea-account-create-update-8wzt7"]
Jan 31 09:26:12 crc kubenswrapper[4830]: I0131 09:26:12.084402 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-ttdxz"]
Jan 31 09:26:12 crc kubenswrapper[4830]: I0131 09:26:12.095707 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-f57b45989-7xfmm" event={"ID":"de11d1f2-fd91-48c7-9dc3-79748064e53d","Type":"ContainerStarted","Data":"c9597527182897f5730379a910416cce1be500224567c627cf9a0061702ca197"}
Jan 31 09:26:12 crc kubenswrapper[4830]: I0131 09:26:12.097237 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-f57b45989-7xfmm"
Jan 31 09:26:12 crc kubenswrapper[4830]: I0131 09:26:12.113504 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6948bd58db-k47sz" event={"ID":"8f3e08be-d310-4474-99fb-d9226ab6eedb","Type":"ContainerDied","Data":"12da4461a7423985fb15fa6a8127510011961b290ffafb67a07ee27348a2d699"}
Jan 31 09:26:12 crc kubenswrapper[4830]: I0131 09:26:12.113772 4830 scope.go:117] "RemoveContainer" containerID="14b8d2b014edebbf2a4fb63beaa402cdcf021e136b2b034bf350b5e7b634d6a7"
Jan 31 09:26:12 crc kubenswrapper[4830]: I0131 09:26:12.113269 4830 generic.go:334] "Generic (PLEG): container finished" podID="8f3e08be-d310-4474-99fb-d9226ab6eedb" containerID="12da4461a7423985fb15fa6a8127510011961b290ffafb67a07ee27348a2d699" exitCode=1
Jan 31 09:26:12 crc kubenswrapper[4830]: I0131 09:26:12.123540 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-v2w79" event={"ID":"ced551a8-224d-488b-aa58-c424e387ccca","Type":"ContainerStarted","Data":"1d5bf74abd597c5c4babf5eba0fbdac53566f697059f84189e2bc7027f6c1097"}
Jan 31 09:26:12 crc kubenswrapper[4830]: I0131 09:26:12.132559 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sl874" event={"ID":"631e221b-b504-4f59-8848-c9427f67c0df","Type":"ContainerStarted","Data":"6164c5de3de4a41e1514ec899d7bd7a15c63cd4138aed837f0dea01b250a6089"}
Jan 31 09:26:12 crc kubenswrapper[4830]: I0131 09:26:12.133700 4830 scope.go:117] "RemoveContainer" containerID="12da4461a7423985fb15fa6a8127510011961b290ffafb67a07ee27348a2d699"
Jan 31 09:26:12 crc kubenswrapper[4830]: I0131 09:26:12.143321 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-534498b2-d616-470f-a82d-6fd5620e2438\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-534498b2-d616-470f-a82d-6fd5620e2438\") pod \"glance-default-external-api-0\" (UID: \"0e63470a-95b6-4653-b917-ed1f8ff66466\") " pod="openstack/glance-default-external-api-0"
Jan 31 09:26:12 crc kubenswrapper[4830]: E0131 09:26:12.151275 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6948bd58db-k47sz_openstack(8f3e08be-d310-4474-99fb-d9226ab6eedb)\"" pod="openstack/heat-api-6948bd58db-k47sz" podUID="8f3e08be-d310-4474-99fb-d9226ab6eedb"
Jan 31 09:26:12 crc kubenswrapper[4830]: I0131 09:26:12.181401 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246","Type":"ContainerStarted","Data":"b50060fcb658e52a780891a55ae6a27f256e9873b517b788c78ba792c3c4be75"}
Jan 31 09:26:12 crc kubenswrapper[4830]: I0131 09:26:12.263493 4830 scope.go:117] "RemoveContainer" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0"
Jan 31 09:26:12 crc kubenswrapper[4830]: E0131 09:26:12.264428 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:26:12 crc kubenswrapper[4830]: I0131 09:26:12.308542 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9404af59-7e12-483b-90d0-9ebdc4140cc2" path="/var/lib/kubelet/pods/9404af59-7e12-483b-90d0-9ebdc4140cc2/volumes"
Jan 31 09:26:12 crc kubenswrapper[4830]: I0131 09:26:12.311960 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff4e5fbc-7e45-42b7-8af6-ff34b36bb594" path="/var/lib/kubelet/pods/ff4e5fbc-7e45-42b7-8af6-ff34b36bb594/volumes"
Jan 31 09:26:12 crc kubenswrapper[4830]: I0131 09:26:12.429678 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 31 09:26:12 crc kubenswrapper[4830]: I0131 09:26:12.580482 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6858-account-create-update-mxmt7"]
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.291872 4830 generic.go:334] "Generic (PLEG): container finished" podID="0ec35101-03e3-421d-8799-a7a0b1864b9b" containerID="be1a139ca62a9ee5841b9e7d34ba9750b8cdc0d9b26aa9a3ed0ba027497b53ec" exitCode=0
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.293488 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0ec35101-03e3-421d-8799-a7a0b1864b9b","Type":"ContainerDied","Data":"be1a139ca62a9ee5841b9e7d34ba9750b8cdc0d9b26aa9a3ed0ba027497b53ec"}
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.361681 4830 generic.go:334] "Generic (PLEG): container finished" podID="ced551a8-224d-488b-aa58-c424e387ccca" containerID="33ce36b1ce77eaeff55c653e8e8346ca8a6889b2299dbcca7791a0a92d4139ed" exitCode=0
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.362658 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-v2w79" event={"ID":"ced551a8-224d-488b-aa58-c424e387ccca","Type":"ContainerDied","Data":"33ce36b1ce77eaeff55c653e8e8346ca8a6889b2299dbcca7791a0a92d4139ed"}
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.412570 4830 generic.go:334] "Generic (PLEG): container finished" podID="631e221b-b504-4f59-8848-c9427f67c0df" containerID="4516c190e9bc838a21d47cc181276a28f31ec0ad1385177a11bce69c5cdfacfa" exitCode=0
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.413196 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sl874" event={"ID":"631e221b-b504-4f59-8848-c9427f67c0df","Type":"ContainerDied","Data":"4516c190e9bc838a21d47cc181276a28f31ec0ad1385177a11bce69c5cdfacfa"}
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.420261 4830 generic.go:334] "Generic (PLEG): container finished" podID="de11d1f2-fd91-48c7-9dc3-79748064e53d" containerID="c9597527182897f5730379a910416cce1be500224567c627cf9a0061702ca197" exitCode=1
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.420404 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-f57b45989-7xfmm" event={"ID":"de11d1f2-fd91-48c7-9dc3-79748064e53d","Type":"ContainerDied","Data":"c9597527182897f5730379a910416cce1be500224567c627cf9a0061702ca197"}
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.420451 4830 scope.go:117] "RemoveContainer" containerID="bffa2c548becd7f946fbb4976e3d447b020e9ffd9c9dcc3e92ea7384d16a0f88"
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.422631 4830 scope.go:117] "RemoveContainer" containerID="c9597527182897f5730379a910416cce1be500224567c627cf9a0061702ca197"
Jan 31 09:26:13 crc kubenswrapper[4830]: E0131 09:26:13.423165 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-f57b45989-7xfmm_openstack(de11d1f2-fd91-48c7-9dc3-79748064e53d)\"" pod="openstack/heat-cfnapi-f57b45989-7xfmm" podUID="de11d1f2-fd91-48c7-9dc3-79748064e53d"
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.441142 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-184b-account-create-update-gwvpg" event={"ID":"9378cf4e-8ab3-4e97-8955-158a9b0c4c26","Type":"ContainerStarted","Data":"4e5dac9ef0de94793774bc9bc33247f6d25cd612833c4396e3c00a632dc89fab"}
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.462407 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-61ea-account-create-update-8wzt7" event={"ID":"fb75ec26-4fae-4778-b520-828660b869cb","Type":"ContainerStarted","Data":"b6fa43351ce00a45bd1a5ccc12600259e56fc6376a3bd940ea407570c11f730b"}
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.495126 4830 scope.go:117] "RemoveContainer" containerID="12da4461a7423985fb15fa6a8127510011961b290ffafb67a07ee27348a2d699"
Jan 31 09:26:13 crc kubenswrapper[4830]: E0131 09:26:13.495402 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6948bd58db-k47sz_openstack(8f3e08be-d310-4474-99fb-d9226ab6eedb)\"" pod="openstack/heat-api-6948bd58db-k47sz" podUID="8f3e08be-d310-4474-99fb-d9226ab6eedb"
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.503404 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6858-account-create-update-mxmt7" event={"ID":"6090d149-6116-4ccf-981f-67ad48e42a1f","Type":"ContainerStarted","Data":"26fe5854bd9951228d696c7cad2497a4dd0299fdd4bb543a8a4b4411aae708a8"}
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.518550 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ttdxz" event={"ID":"89ec8e55-13ac-45e1-b5a6-b38ee34a1702","Type":"ContainerStarted","Data":"33e9246aab8d9009071735890e4a4e0d3d6ac097623378334c317c3ff4076293"}
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.518634 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ttdxz" event={"ID":"89ec8e55-13ac-45e1-b5a6-b38ee34a1702","Type":"ContainerStarted","Data":"bcb751bce1af428ec2fe98cffd37113e24dddc85e638c11fad64679b219c7031"}
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.548370 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.588554 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.611048 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-184b-account-create-update-gwvpg" podStartSLOduration=4.610998268 podStartE2EDuration="4.610998268s" podCreationTimestamp="2026-01-31 09:26:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:26:13.495790222 +0000 UTC m=+1517.989152664" watchObservedRunningTime="2026-01-31 09:26:13.610998268 +0000 UTC m=+1518.104360710"
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.629173 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ec35101-03e3-421d-8799-a7a0b1864b9b-internal-tls-certs\") pod \"0ec35101-03e3-421d-8799-a7a0b1864b9b\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") "
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.629331 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ec35101-03e3-421d-8799-a7a0b1864b9b-scripts\") pod \"0ec35101-03e3-421d-8799-a7a0b1864b9b\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") "
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.629494 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\") pod \"0ec35101-03e3-421d-8799-a7a0b1864b9b\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") "
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.629553 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ec35101-03e3-421d-8799-a7a0b1864b9b-logs\") pod \"0ec35101-03e3-421d-8799-a7a0b1864b9b\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") "
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.629647 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ec35101-03e3-421d-8799-a7a0b1864b9b-httpd-run\") pod \"0ec35101-03e3-421d-8799-a7a0b1864b9b\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") "
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.629683 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ec35101-03e3-421d-8799-a7a0b1864b9b-config-data\") pod \"0ec35101-03e3-421d-8799-a7a0b1864b9b\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") "
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.629709 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srpvg\" (UniqueName: \"kubernetes.io/projected/0ec35101-03e3-421d-8799-a7a0b1864b9b-kube-api-access-srpvg\") pod \"0ec35101-03e3-421d-8799-a7a0b1864b9b\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") "
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.631871 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ec35101-03e3-421d-8799-a7a0b1864b9b-combined-ca-bundle\") pod \"0ec35101-03e3-421d-8799-a7a0b1864b9b\" (UID: \"0ec35101-03e3-421d-8799-a7a0b1864b9b\") "
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.636192 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ec35101-03e3-421d-8799-a7a0b1864b9b-logs" (OuterVolumeSpecName: "logs") pod "0ec35101-03e3-421d-8799-a7a0b1864b9b" (UID: "0ec35101-03e3-421d-8799-a7a0b1864b9b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.654883 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ec35101-03e3-421d-8799-a7a0b1864b9b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0ec35101-03e3-421d-8799-a7a0b1864b9b" (UID: "0ec35101-03e3-421d-8799-a7a0b1864b9b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.654939 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ec35101-03e3-421d-8799-a7a0b1864b9b-scripts" (OuterVolumeSpecName: "scripts") pod "0ec35101-03e3-421d-8799-a7a0b1864b9b" (UID: "0ec35101-03e3-421d-8799-a7a0b1864b9b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.664040 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ec35101-03e3-421d-8799-a7a0b1864b9b-kube-api-access-srpvg" (OuterVolumeSpecName: "kube-api-access-srpvg") pod "0ec35101-03e3-421d-8799-a7a0b1864b9b" (UID: "0ec35101-03e3-421d-8799-a7a0b1864b9b"). InnerVolumeSpecName "kube-api-access-srpvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.706344 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-61ea-account-create-update-8wzt7" podStartSLOduration=3.706313478 podStartE2EDuration="3.706313478s" podCreationTimestamp="2026-01-31 09:26:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:26:13.540085742 +0000 UTC m=+1518.033448184" watchObservedRunningTime="2026-01-31 09:26:13.706313478 +0000 UTC m=+1518.199675930"
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.731017 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-ttdxz" podStartSLOduration=4.730988909 podStartE2EDuration="4.730988909s" podCreationTimestamp="2026-01-31 09:26:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:26:13.557136247 +0000 UTC m=+1518.050498689" watchObservedRunningTime="2026-01-31 09:26:13.730988909 +0000 UTC m=+1518.224351371"
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.732265 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2" (OuterVolumeSpecName: "glance") pod "0ec35101-03e3-421d-8799-a7a0b1864b9b" (UID: "0ec35101-03e3-421d-8799-a7a0b1864b9b"). InnerVolumeSpecName "pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.738861 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ec35101-03e3-421d-8799-a7a0b1864b9b-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.741348 4830 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\") on node \"crc\" "
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.741443 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ec35101-03e3-421d-8799-a7a0b1864b9b-logs\") on node \"crc\" DevicePath \"\""
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.741509 4830 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ec35101-03e3-421d-8799-a7a0b1864b9b-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.741567 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srpvg\" (UniqueName: \"kubernetes.io/projected/0ec35101-03e3-421d-8799-a7a0b1864b9b-kube-api-access-srpvg\") on node \"crc\" DevicePath \"\""
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.772967 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ec35101-03e3-421d-8799-a7a0b1864b9b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ec35101-03e3-421d-8799-a7a0b1864b9b" (UID: "0ec35101-03e3-421d-8799-a7a0b1864b9b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.870108 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-6858-account-create-update-mxmt7" podStartSLOduration=3.870070684 podStartE2EDuration="3.870070684s" podCreationTimestamp="2026-01-31 09:26:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:26:13.578546205 +0000 UTC m=+1518.071908647" watchObservedRunningTime="2026-01-31 09:26:13.870070684 +0000 UTC m=+1518.363433126"
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.875224 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ec35101-03e3-421d-8799-a7a0b1864b9b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.916635 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ec35101-03e3-421d-8799-a7a0b1864b9b-config-data" (OuterVolumeSpecName: "config-data") pod "0ec35101-03e3-421d-8799-a7a0b1864b9b" (UID: "0ec35101-03e3-421d-8799-a7a0b1864b9b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.916946 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ec35101-03e3-421d-8799-a7a0b1864b9b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0ec35101-03e3-421d-8799-a7a0b1864b9b" (UID: "0ec35101-03e3-421d-8799-a7a0b1864b9b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.919639 4830 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.933066 4830 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2") on node "crc"
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.943376 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-f44b7d679-6khcx"
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.946350 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-f44b7d679-6khcx"
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.986970 4830 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ec35101-03e3-421d-8799-a7a0b1864b9b-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.987019 4830 reconciler_common.go:293] "Volume detached for volume \"pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\") on node \"crc\" DevicePath \"\""
Jan 31 09:26:13 crc kubenswrapper[4830]: I0131 09:26:13.987034 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ec35101-03e3-421d-8799-a7a0b1864b9b-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 09:26:14 crc kubenswrapper[4830]: I0131 09:26:14.575115 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0ec35101-03e3-421d-8799-a7a0b1864b9b","Type":"ContainerDied","Data":"9cbfffa7f21453dce27e8b4d361906ec1abf2d13f356ffc985e453344268c914"}
Jan 31 09:26:14 crc kubenswrapper[4830]: I0131 09:26:14.575571 4830 scope.go:117] "RemoveContainer" containerID="be1a139ca62a9ee5841b9e7d34ba9750b8cdc0d9b26aa9a3ed0ba027497b53ec"
Jan 31 09:26:14 crc kubenswrapper[4830]: I0131 09:26:14.575275 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:14 crc kubenswrapper[4830]: I0131 09:26:14.587372 4830 scope.go:117] "RemoveContainer" containerID="c9597527182897f5730379a910416cce1be500224567c627cf9a0061702ca197"
Jan 31 09:26:14 crc kubenswrapper[4830]: E0131 09:26:14.587800 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-f57b45989-7xfmm_openstack(de11d1f2-fd91-48c7-9dc3-79748064e53d)\"" pod="openstack/heat-cfnapi-f57b45989-7xfmm" podUID="de11d1f2-fd91-48c7-9dc3-79748064e53d"
Jan 31 09:26:14 crc kubenswrapper[4830]: I0131 09:26:14.606377 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0e63470a-95b6-4653-b917-ed1f8ff66466","Type":"ContainerStarted","Data":"4e13736e63dad07c376b5bb063ff12e67be2b518eb1cbace8eb0e4a078e3aded"}
Jan 31 09:26:14 crc kubenswrapper[4830]: I0131 09:26:14.664254 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-184b-account-create-update-gwvpg" event={"ID":"9378cf4e-8ab3-4e97-8955-158a9b0c4c26","Type":"ContainerStarted","Data":"d6d4cf01fce114709d28b91bcb628afcf2a464d89cd0dada7c17fb07e07a31f8"}
Jan 31 09:26:14 crc kubenswrapper[4830]: I0131 09:26:14.705654 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-61ea-account-create-update-8wzt7" event={"ID":"fb75ec26-4fae-4778-b520-828660b869cb","Type":"ContainerStarted","Data":"f4affda6e7f7c44cfc96fba7823dea3f1025d50b218f98323e79ade3447ba6f9"}
Jan 31 09:26:14 crc kubenswrapper[4830]: I0131 09:26:14.747943 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6858-account-create-update-mxmt7" event={"ID":"6090d149-6116-4ccf-981f-67ad48e42a1f","Type":"ContainerStarted","Data":"8d18b4c06c2bf2ed5e89d4d11232dd02eb2d60da09b3d362bf9e195fa7b1ee30"}
Jan 31 09:26:14 crc kubenswrapper[4830]: I0131 09:26:14.761208 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246","Type":"ContainerStarted","Data":"6803ced896ddcc76c9368cf99d58ed56cdafe21dfee22e80d9ed8b7d9ac413a3"}
Jan 31 09:26:14 crc kubenswrapper[4830]: I0131 09:26:14.769396 4830 generic.go:334] "Generic (PLEG): container finished" podID="89ec8e55-13ac-45e1-b5a6-b38ee34a1702" containerID="33e9246aab8d9009071735890e4a4e0d3d6ac097623378334c317c3ff4076293" exitCode=0
Jan 31 09:26:14 crc kubenswrapper[4830]: I0131 09:26:14.779298 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ttdxz" event={"ID":"89ec8e55-13ac-45e1-b5a6-b38ee34a1702","Type":"ContainerDied","Data":"33e9246aab8d9009071735890e4a4e0d3d6ac097623378334c317c3ff4076293"}
Jan 31 09:26:14 crc kubenswrapper[4830]: I0131 09:26:14.877457 4830 scope.go:117] "RemoveContainer" containerID="e8d65d3b6016cd3404d2d4f61f24bafc19156c82cbfa0497b23e98f7f4e9893e"
Jan 31 09:26:14 crc kubenswrapper[4830]: I0131 09:26:14.921403 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 31 09:26:14 crc kubenswrapper[4830]: I0131 09:26:14.939363 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 31 09:26:14 crc kubenswrapper[4830]: I0131 09:26:14.979093 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 31 09:26:14 crc kubenswrapper[4830]: E0131 09:26:14.979924 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec35101-03e3-421d-8799-a7a0b1864b9b" containerName="glance-httpd"
Jan 31 09:26:14 crc kubenswrapper[4830]: I0131 09:26:14.979943 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec35101-03e3-421d-8799-a7a0b1864b9b" containerName="glance-httpd"
Jan 31 09:26:14 crc kubenswrapper[4830]: E0131 09:26:14.979968 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec35101-03e3-421d-8799-a7a0b1864b9b" containerName="glance-log"
Jan 31 09:26:14 crc kubenswrapper[4830]: I0131 09:26:14.979974 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec35101-03e3-421d-8799-a7a0b1864b9b" containerName="glance-log"
Jan 31 09:26:14 crc kubenswrapper[4830]: I0131 09:26:14.980251 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec35101-03e3-421d-8799-a7a0b1864b9b" containerName="glance-log"
Jan 31 09:26:14 crc kubenswrapper[4830]: I0131 09:26:14.980276 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec35101-03e3-421d-8799-a7a0b1864b9b" containerName="glance-httpd"
Jan 31 09:26:14 crc kubenswrapper[4830]: I0131 09:26:14.983335 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:14 crc kubenswrapper[4830]: I0131 09:26:14.994504 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 31 09:26:14 crc kubenswrapper[4830]: I0131 09:26:14.994858 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 31 09:26:14 crc kubenswrapper[4830]: I0131 09:26:14.996209 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.140993 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65303534-fa3e-4008-9ea1-95cd77e752c9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"65303534-fa3e-4008-9ea1-95cd77e752c9\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.141125 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65303534-fa3e-4008-9ea1-95cd77e752c9-logs\") pod \"glance-default-internal-api-0\" (UID: \"65303534-fa3e-4008-9ea1-95cd77e752c9\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.141249 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szkkn\" (UniqueName: \"kubernetes.io/projected/65303534-fa3e-4008-9ea1-95cd77e752c9-kube-api-access-szkkn\") pod \"glance-default-internal-api-0\" (UID: \"65303534-fa3e-4008-9ea1-95cd77e752c9\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.141343 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/65303534-fa3e-4008-9ea1-95cd77e752c9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"65303534-fa3e-4008-9ea1-95cd77e752c9\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.141472 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\") pod \"glance-default-internal-api-0\" (UID: \"65303534-fa3e-4008-9ea1-95cd77e752c9\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.141539 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65303534-fa3e-4008-9ea1-95cd77e752c9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"65303534-fa3e-4008-9ea1-95cd77e752c9\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.141819 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65303534-fa3e-4008-9ea1-95cd77e752c9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"65303534-fa3e-4008-9ea1-95cd77e752c9\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.141876 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65303534-fa3e-4008-9ea1-95cd77e752c9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"65303534-fa3e-4008-9ea1-95cd77e752c9\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.253667 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szkkn\" (UniqueName: \"kubernetes.io/projected/65303534-fa3e-4008-9ea1-95cd77e752c9-kube-api-access-szkkn\") pod \"glance-default-internal-api-0\" (UID: \"65303534-fa3e-4008-9ea1-95cd77e752c9\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.253902 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/65303534-fa3e-4008-9ea1-95cd77e752c9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"65303534-fa3e-4008-9ea1-95cd77e752c9\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.253967 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\") pod \"glance-default-internal-api-0\" (UID: \"65303534-fa3e-4008-9ea1-95cd77e752c9\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.254003 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65303534-fa3e-4008-9ea1-95cd77e752c9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"65303534-fa3e-4008-9ea1-95cd77e752c9\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.254098 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65303534-fa3e-4008-9ea1-95cd77e752c9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"65303534-fa3e-4008-9ea1-95cd77e752c9\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.254125 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65303534-fa3e-4008-9ea1-95cd77e752c9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"65303534-fa3e-4008-9ea1-95cd77e752c9\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.254157 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65303534-fa3e-4008-9ea1-95cd77e752c9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"65303534-fa3e-4008-9ea1-95cd77e752c9\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.254181 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65303534-fa3e-4008-9ea1-95cd77e752c9-logs\") pod \"glance-default-internal-api-0\" (UID: \"65303534-fa3e-4008-9ea1-95cd77e752c9\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.254840 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65303534-fa3e-4008-9ea1-95cd77e752c9-logs\") pod \"glance-default-internal-api-0\" (UID: \"65303534-fa3e-4008-9ea1-95cd77e752c9\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.255490 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/65303534-fa3e-4008-9ea1-95cd77e752c9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"65303534-fa3e-4008-9ea1-95cd77e752c9\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.274992 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.275041 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\") pod \"glance-default-internal-api-0\" (UID: \"65303534-fa3e-4008-9ea1-95cd77e752c9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c32ecbe26e9667ad28a7d3f49252f55c097486de0a04fe8536b2f1b0061aa335/globalmount\"" pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.278116 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65303534-fa3e-4008-9ea1-95cd77e752c9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"65303534-fa3e-4008-9ea1-95cd77e752c9\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.281488 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65303534-fa3e-4008-9ea1-95cd77e752c9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"65303534-fa3e-4008-9ea1-95cd77e752c9\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.291584 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65303534-fa3e-4008-9ea1-95cd77e752c9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"65303534-fa3e-4008-9ea1-95cd77e752c9\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.327656 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65303534-fa3e-4008-9ea1-95cd77e752c9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"65303534-fa3e-4008-9ea1-95cd77e752c9\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.383100 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szkkn\" (UniqueName: \"kubernetes.io/projected/65303534-fa3e-4008-9ea1-95cd77e752c9-kube-api-access-szkkn\") pod \"glance-default-internal-api-0\" (UID: \"65303534-fa3e-4008-9ea1-95cd77e752c9\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.724625 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a2130b1-175e-4117-ab6c-14e39b8138c2\") pod \"glance-default-internal-api-0\" (UID: \"65303534-fa3e-4008-9ea1-95cd77e752c9\") " pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.784819 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sl874"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.796939 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-v2w79"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.797958 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0e63470a-95b6-4653-b917-ed1f8ff66466","Type":"ContainerStarted","Data":"52c6d6f046d28a08603c58e2ac8ee9d8323493de94a8d2fcc63361e9ab592201"}
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.799117 4830 generic.go:334] "Generic (PLEG): container finished" podID="9378cf4e-8ab3-4e97-8955-158a9b0c4c26" containerID="d6d4cf01fce114709d28b91bcb628afcf2a464d89cd0dada7c17fb07e07a31f8" exitCode=0
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.799161 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-184b-account-create-update-gwvpg" event={"ID":"9378cf4e-8ab3-4e97-8955-158a9b0c4c26","Type":"ContainerDied","Data":"d6d4cf01fce114709d28b91bcb628afcf2a464d89cd0dada7c17fb07e07a31f8"}
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.808111 4830 generic.go:334] "Generic (PLEG): container finished" podID="fb75ec26-4fae-4778-b520-828660b869cb" containerID="f4affda6e7f7c44cfc96fba7823dea3f1025d50b218f98323e79ade3447ba6f9" exitCode=0
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.808202 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-61ea-account-create-update-8wzt7" event={"ID":"fb75ec26-4fae-4778-b520-828660b869cb","Type":"ContainerDied","Data":"f4affda6e7f7c44cfc96fba7823dea3f1025d50b218f98323e79ade3447ba6f9"}
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.815483 4830 generic.go:334] "Generic (PLEG): container finished" podID="6090d149-6116-4ccf-981f-67ad48e42a1f" containerID="8d18b4c06c2bf2ed5e89d4d11232dd02eb2d60da09b3d362bf9e195fa7b1ee30" exitCode=0
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.815959 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6858-account-create-update-mxmt7" event={"ID":"6090d149-6116-4ccf-981f-67ad48e42a1f","Type":"ContainerDied","Data":"8d18b4c06c2bf2ed5e89d4d11232dd02eb2d60da09b3d362bf9e195fa7b1ee30"}
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.865174 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-v2w79" event={"ID":"ced551a8-224d-488b-aa58-c424e387ccca","Type":"ContainerDied","Data":"1d5bf74abd597c5c4babf5eba0fbdac53566f697059f84189e2bc7027f6c1097"}
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.865239 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d5bf74abd597c5c4babf5eba0fbdac53566f697059f84189e2bc7027f6c1097"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.865355 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-v2w79"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.886324 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sl874" event={"ID":"631e221b-b504-4f59-8848-c9427f67c0df","Type":"ContainerDied","Data":"6164c5de3de4a41e1514ec899d7bd7a15c63cd4138aed837f0dea01b250a6089"}
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.886377 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6164c5de3de4a41e1514ec899d7bd7a15c63cd4138aed837f0dea01b250a6089"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.886456 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sl874"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.895879 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246","Type":"ContainerStarted","Data":"ffaa284dfee60c7213ff245ce3df30a7180c9e387dbae4964764089f0908c322"}
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.919056 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.951879 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ced551a8-224d-488b-aa58-c424e387ccca-operator-scripts\") pod \"ced551a8-224d-488b-aa58-c424e387ccca\" (UID: \"ced551a8-224d-488b-aa58-c424e387ccca\") "
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.952087 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/631e221b-b504-4f59-8848-c9427f67c0df-operator-scripts\") pod \"631e221b-b504-4f59-8848-c9427f67c0df\" (UID: \"631e221b-b504-4f59-8848-c9427f67c0df\") "
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.952129 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjn7r\" (UniqueName: \"kubernetes.io/projected/631e221b-b504-4f59-8848-c9427f67c0df-kube-api-access-kjn7r\") pod \"631e221b-b504-4f59-8848-c9427f67c0df\" (UID: \"631e221b-b504-4f59-8848-c9427f67c0df\") "
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.952286 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj57x\" (UniqueName: \"kubernetes.io/projected/ced551a8-224d-488b-aa58-c424e387ccca-kube-api-access-qj57x\") pod \"ced551a8-224d-488b-aa58-c424e387ccca\" (UID: \"ced551a8-224d-488b-aa58-c424e387ccca\") "
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.956623 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ced551a8-224d-488b-aa58-c424e387ccca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ced551a8-224d-488b-aa58-c424e387ccca" (UID: "ced551a8-224d-488b-aa58-c424e387ccca"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.958346 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/631e221b-b504-4f59-8848-c9427f67c0df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "631e221b-b504-4f59-8848-c9427f67c0df" (UID: "631e221b-b504-4f59-8848-c9427f67c0df"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.976450 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/631e221b-b504-4f59-8848-c9427f67c0df-kube-api-access-kjn7r" (OuterVolumeSpecName: "kube-api-access-kjn7r") pod "631e221b-b504-4f59-8848-c9427f67c0df" (UID: "631e221b-b504-4f59-8848-c9427f67c0df"). InnerVolumeSpecName "kube-api-access-kjn7r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:26:15 crc kubenswrapper[4830]: I0131 09:26:15.995771 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ced551a8-224d-488b-aa58-c424e387ccca-kube-api-access-qj57x" (OuterVolumeSpecName: "kube-api-access-qj57x") pod "ced551a8-224d-488b-aa58-c424e387ccca" (UID: "ced551a8-224d-488b-aa58-c424e387ccca"). InnerVolumeSpecName "kube-api-access-qj57x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:26:16 crc kubenswrapper[4830]: I0131 09:26:16.058614 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/631e221b-b504-4f59-8848-c9427f67c0df-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 09:26:16 crc kubenswrapper[4830]: I0131 09:26:16.058654 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjn7r\" (UniqueName: \"kubernetes.io/projected/631e221b-b504-4f59-8848-c9427f67c0df-kube-api-access-kjn7r\") on node \"crc\" DevicePath \"\""
Jan 31 09:26:16 crc kubenswrapper[4830]: I0131 09:26:16.058666 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj57x\" (UniqueName: \"kubernetes.io/projected/ced551a8-224d-488b-aa58-c424e387ccca-kube-api-access-qj57x\") on node \"crc\" DevicePath \"\""
Jan 31 09:26:16 crc kubenswrapper[4830]: I0131 09:26:16.058676 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ced551a8-224d-488b-aa58-c424e387ccca-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:16.479435 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ec35101-03e3-421d-8799-a7a0b1864b9b" path="/var/lib/kubelet/pods/0ec35101-03e3-421d-8799-a7a0b1864b9b/volumes"
Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:16.678464 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-184b-account-create-update-gwvpg" Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:16.756531 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6948bd58db-k47sz" Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:16.757654 4830 scope.go:117] "RemoveContainer" containerID="12da4461a7423985fb15fa6a8127510011961b290ffafb67a07ee27348a2d699" Jan 31 09:26:17 crc kubenswrapper[4830]: E0131 09:26:16.758122 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6948bd58db-k47sz_openstack(8f3e08be-d310-4474-99fb-d9226ab6eedb)\"" pod="openstack/heat-api-6948bd58db-k47sz" podUID="8f3e08be-d310-4474-99fb-d9226ab6eedb" Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:16.758587 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-6948bd58db-k47sz" Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:16.794170 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9378cf4e-8ab3-4e97-8955-158a9b0c4c26-operator-scripts\") pod \"9378cf4e-8ab3-4e97-8955-158a9b0c4c26\" (UID: \"9378cf4e-8ab3-4e97-8955-158a9b0c4c26\") " Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:16.794377 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnfkz\" (UniqueName: \"kubernetes.io/projected/9378cf4e-8ab3-4e97-8955-158a9b0c4c26-kube-api-access-pnfkz\") pod \"9378cf4e-8ab3-4e97-8955-158a9b0c4c26\" (UID: \"9378cf4e-8ab3-4e97-8955-158a9b0c4c26\") " Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:16.796949 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9378cf4e-8ab3-4e97-8955-158a9b0c4c26-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9378cf4e-8ab3-4e97-8955-158a9b0c4c26" (UID: "9378cf4e-8ab3-4e97-8955-158a9b0c4c26"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:16.858501 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9378cf4e-8ab3-4e97-8955-158a9b0c4c26-kube-api-access-pnfkz" (OuterVolumeSpecName: "kube-api-access-pnfkz") pod "9378cf4e-8ab3-4e97-8955-158a9b0c4c26" (UID: "9378cf4e-8ab3-4e97-8955-158a9b0c4c26"). InnerVolumeSpecName "kube-api-access-pnfkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:16.899064 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9378cf4e-8ab3-4e97-8955-158a9b0c4c26-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:16.899099 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnfkz\" (UniqueName: \"kubernetes.io/projected/9378cf4e-8ab3-4e97-8955-158a9b0c4c26-kube-api-access-pnfkz\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:16.960369 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-184b-account-create-update-gwvpg" Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:16.962926 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-184b-account-create-update-gwvpg" event={"ID":"9378cf4e-8ab3-4e97-8955-158a9b0c4c26","Type":"ContainerDied","Data":"4e5dac9ef0de94793774bc9bc33247f6d25cd612833c4396e3c00a632dc89fab"} Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:16.963005 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e5dac9ef0de94793774bc9bc33247f6d25cd612833c4396e3c00a632dc89fab" Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:16.964212 4830 scope.go:117] "RemoveContainer" containerID="12da4461a7423985fb15fa6a8127510011961b290ffafb67a07ee27348a2d699" Jan 31 09:26:17 crc kubenswrapper[4830]: E0131 09:26:16.964558 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6948bd58db-k47sz_openstack(8f3e08be-d310-4474-99fb-d9226ab6eedb)\"" pod="openstack/heat-api-6948bd58db-k47sz" podUID="8f3e08be-d310-4474-99fb-d9226ab6eedb" Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:17.011913 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-f57b45989-7xfmm" Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:17.013290 4830 scope.go:117] "RemoveContainer" containerID="c9597527182897f5730379a910416cce1be500224567c627cf9a0061702ca197" Jan 31 09:26:17 crc kubenswrapper[4830]: E0131 09:26:17.013598 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-f57b45989-7xfmm_openstack(de11d1f2-fd91-48c7-9dc3-79748064e53d)\"" pod="openstack/heat-cfnapi-f57b45989-7xfmm" podUID="de11d1f2-fd91-48c7-9dc3-79748064e53d" Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:17.075189 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6858-account-create-update-mxmt7" Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:17.233656 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6090d149-6116-4ccf-981f-67ad48e42a1f-operator-scripts\") pod \"6090d149-6116-4ccf-981f-67ad48e42a1f\" (UID: \"6090d149-6116-4ccf-981f-67ad48e42a1f\") " Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:17.234520 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6090d149-6116-4ccf-981f-67ad48e42a1f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6090d149-6116-4ccf-981f-67ad48e42a1f" (UID: "6090d149-6116-4ccf-981f-67ad48e42a1f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:17.234820 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9x5k\" (UniqueName: \"kubernetes.io/projected/6090d149-6116-4ccf-981f-67ad48e42a1f-kube-api-access-h9x5k\") pod \"6090d149-6116-4ccf-981f-67ad48e42a1f\" (UID: \"6090d149-6116-4ccf-981f-67ad48e42a1f\") " Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:17.245482 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6090d149-6116-4ccf-981f-67ad48e42a1f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:17.271007 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6090d149-6116-4ccf-981f-67ad48e42a1f-kube-api-access-h9x5k" (OuterVolumeSpecName: "kube-api-access-h9x5k") pod "6090d149-6116-4ccf-981f-67ad48e42a1f" (UID: "6090d149-6116-4ccf-981f-67ad48e42a1f"). InnerVolumeSpecName "kube-api-access-h9x5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:17.351319 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9x5k\" (UniqueName: \"kubernetes.io/projected/6090d149-6116-4ccf-981f-67ad48e42a1f-kube-api-access-h9x5k\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:17.898261 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-59478c766f-tgwgd" Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:17.916395 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-5db4bc48b8-mphcw" Jan 31 09:26:17 crc kubenswrapper[4830]: I0131 09:26:17.958266 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-bcd57748c-bwxdf" Jan 31 09:26:18 crc kubenswrapper[4830]: I0131 09:26:18.077022 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6948bd58db-k47sz"] Jan 31 09:26:18 crc kubenswrapper[4830]: I0131 09:26:18.085437 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6858-account-create-update-mxmt7" event={"ID":"6090d149-6116-4ccf-981f-67ad48e42a1f","Type":"ContainerDied","Data":"26fe5854bd9951228d696c7cad2497a4dd0299fdd4bb543a8a4b4411aae708a8"} Jan 31 09:26:18 crc kubenswrapper[4830]: I0131 09:26:18.085517 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26fe5854bd9951228d696c7cad2497a4dd0299fdd4bb543a8a4b4411aae708a8" Jan 31 09:26:18 crc kubenswrapper[4830]: I0131 09:26:18.085863 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6858-account-create-update-mxmt7" Jan 31 09:26:18 crc kubenswrapper[4830]: I0131 09:26:18.108680 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0e63470a-95b6-4653-b917-ed1f8ff66466","Type":"ContainerStarted","Data":"fb37459cfcb4ac7b47a249bc208db487e1659a6b2d635750e7cf032f351a55f5"} Jan 31 09:26:18 crc kubenswrapper[4830]: I0131 09:26:18.245353 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.245322463 podStartE2EDuration="7.245322463s" podCreationTimestamp="2026-01-31 09:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:26:18.17241488 +0000 UTC m=+1522.665777322" watchObservedRunningTime="2026-01-31 09:26:18.245322463 +0000 UTC m=+1522.738684895" Jan 31 09:26:18 crc kubenswrapper[4830]: I0131 09:26:18.448995 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-ttdxz" Jan 31 09:26:18 crc kubenswrapper[4830]: I0131 09:26:18.522970 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89ec8e55-13ac-45e1-b5a6-b38ee34a1702-operator-scripts\") pod \"89ec8e55-13ac-45e1-b5a6-b38ee34a1702\" (UID: \"89ec8e55-13ac-45e1-b5a6-b38ee34a1702\") " Jan 31 09:26:18 crc kubenswrapper[4830]: I0131 09:26:18.523535 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckkc2\" (UniqueName: \"kubernetes.io/projected/89ec8e55-13ac-45e1-b5a6-b38ee34a1702-kube-api-access-ckkc2\") pod \"89ec8e55-13ac-45e1-b5a6-b38ee34a1702\" (UID: \"89ec8e55-13ac-45e1-b5a6-b38ee34a1702\") " Jan 31 09:26:18 crc kubenswrapper[4830]: I0131 09:26:18.524402 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89ec8e55-13ac-45e1-b5a6-b38ee34a1702-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "89ec8e55-13ac-45e1-b5a6-b38ee34a1702" (UID: "89ec8e55-13ac-45e1-b5a6-b38ee34a1702"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:26:18 crc kubenswrapper[4830]: I0131 09:26:18.524636 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89ec8e55-13ac-45e1-b5a6-b38ee34a1702-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:18 crc kubenswrapper[4830]: I0131 09:26:18.532374 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89ec8e55-13ac-45e1-b5a6-b38ee34a1702-kube-api-access-ckkc2" (OuterVolumeSpecName: "kube-api-access-ckkc2") pod "89ec8e55-13ac-45e1-b5a6-b38ee34a1702" (UID: "89ec8e55-13ac-45e1-b5a6-b38ee34a1702"). InnerVolumeSpecName "kube-api-access-ckkc2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:26:18 crc kubenswrapper[4830]: I0131 09:26:18.569775 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-9f575bfb8-72ll7" Jan 31 09:26:18 crc kubenswrapper[4830]: I0131 09:26:18.630634 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckkc2\" (UniqueName: \"kubernetes.io/projected/89ec8e55-13ac-45e1-b5a6-b38ee34a1702-kube-api-access-ckkc2\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:18 crc kubenswrapper[4830]: I0131 09:26:18.703928 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-f57b45989-7xfmm"] Jan 31 09:26:18 crc kubenswrapper[4830]: I0131 09:26:18.740558 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-61ea-account-create-update-8wzt7" Jan 31 09:26:18 crc kubenswrapper[4830]: I0131 09:26:18.839351 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb75ec26-4fae-4778-b520-828660b869cb-operator-scripts\") pod \"fb75ec26-4fae-4778-b520-828660b869cb\" (UID: \"fb75ec26-4fae-4778-b520-828660b869cb\") " Jan 31 09:26:18 crc kubenswrapper[4830]: I0131 09:26:18.840230 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdhsj\" (UniqueName: \"kubernetes.io/projected/fb75ec26-4fae-4778-b520-828660b869cb-kube-api-access-zdhsj\") pod \"fb75ec26-4fae-4778-b520-828660b869cb\" (UID: \"fb75ec26-4fae-4778-b520-828660b869cb\") " Jan 31 09:26:18 crc kubenswrapper[4830]: I0131 09:26:18.844603 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb75ec26-4fae-4778-b520-828660b869cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fb75ec26-4fae-4778-b520-828660b869cb" (UID: "fb75ec26-4fae-4778-b520-828660b869cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:26:18 crc kubenswrapper[4830]: I0131 09:26:18.854699 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb75ec26-4fae-4778-b520-828660b869cb-kube-api-access-zdhsj" (OuterVolumeSpecName: "kube-api-access-zdhsj") pod "fb75ec26-4fae-4778-b520-828660b869cb" (UID: "fb75ec26-4fae-4778-b520-828660b869cb"). InnerVolumeSpecName "kube-api-access-zdhsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:26:18 crc kubenswrapper[4830]: I0131 09:26:18.945106 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdhsj\" (UniqueName: \"kubernetes.io/projected/fb75ec26-4fae-4778-b520-828660b869cb-kube-api-access-zdhsj\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:18 crc kubenswrapper[4830]: I0131 09:26:18.945153 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb75ec26-4fae-4778-b520-828660b869cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.043993 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6948bd58db-k47sz" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.145684 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246","Type":"ContainerStarted","Data":"b4fc26d55d14582dc8ce44164579bf49025f52146d438d637b3ce471134703bc"} Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.145833 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" containerName="ceilometer-central-agent" containerID="cri-o://b50060fcb658e52a780891a55ae6a27f256e9873b517b788c78ba792c3c4be75" gracePeriod=30 Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.145906 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.145963 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" containerName="proxy-httpd" containerID="cri-o://b4fc26d55d14582dc8ce44164579bf49025f52146d438d637b3ce471134703bc" gracePeriod=30 Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.146015 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" containerName="sg-core" containerID="cri-o://ffaa284dfee60c7213ff245ce3df30a7180c9e387dbae4964764089f0908c322" gracePeriod=30 Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.146050 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" containerName="ceilometer-notification-agent" containerID="cri-o://6803ced896ddcc76c9368cf99d58ed56cdafe21dfee22e80d9ed8b7d9ac413a3" gracePeriod=30 Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.162601 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-ttdxz" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.163126 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3e08be-d310-4474-99fb-d9226ab6eedb-combined-ca-bundle\") pod \"8f3e08be-d310-4474-99fb-d9226ab6eedb\" (UID: \"8f3e08be-d310-4474-99fb-d9226ab6eedb\") " Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.163268 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f3e08be-d310-4474-99fb-d9226ab6eedb-config-data\") pod \"8f3e08be-d310-4474-99fb-d9226ab6eedb\" (UID: \"8f3e08be-d310-4474-99fb-d9226ab6eedb\") " Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.163492 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmhw7\" (UniqueName: \"kubernetes.io/projected/8f3e08be-d310-4474-99fb-d9226ab6eedb-kube-api-access-pmhw7\") pod \"8f3e08be-d310-4474-99fb-d9226ab6eedb\" (UID: \"8f3e08be-d310-4474-99fb-d9226ab6eedb\") " Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.163615 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f3e08be-d310-4474-99fb-d9226ab6eedb-config-data-custom\") pod \"8f3e08be-d310-4474-99fb-d9226ab6eedb\" (UID: \"8f3e08be-d310-4474-99fb-d9226ab6eedb\") " Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.163831 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ttdxz" event={"ID":"89ec8e55-13ac-45e1-b5a6-b38ee34a1702","Type":"ContainerDied","Data":"bcb751bce1af428ec2fe98cffd37113e24dddc85e638c11fad64679b219c7031"} Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.163892 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcb751bce1af428ec2fe98cffd37113e24dddc85e638c11fad64679b219c7031" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.179935 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4ed170d0-8e88-40c3-a2b4-9908fc87a3db","Type":"ContainerStarted","Data":"788a97f65f4a301293233e0a5009eca61cc15dc645dc8dfa003f13c4ad11b664"} Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.185224 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f3e08be-d310-4474-99fb-d9226ab6eedb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8f3e08be-d310-4474-99fb-d9226ab6eedb" (UID: "8f3e08be-d310-4474-99fb-d9226ab6eedb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.185907 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f3e08be-d310-4474-99fb-d9226ab6eedb-kube-api-access-pmhw7" (OuterVolumeSpecName: "kube-api-access-pmhw7") pod "8f3e08be-d310-4474-99fb-d9226ab6eedb" (UID: "8f3e08be-d310-4474-99fb-d9226ab6eedb"). InnerVolumeSpecName "kube-api-access-pmhw7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.200998 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-61ea-account-create-update-8wzt7" event={"ID":"fb75ec26-4fae-4778-b520-828660b869cb","Type":"ContainerDied","Data":"b6fa43351ce00a45bd1a5ccc12600259e56fc6376a3bd940ea407570c11f730b"} Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.201624 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6fa43351ce00a45bd1a5ccc12600259e56fc6376a3bd940ea407570c11f730b" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.201457 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-61ea-account-create-update-8wzt7" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.229769 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.835629447 podStartE2EDuration="49.229716669s" podCreationTimestamp="2026-01-31 09:25:30 +0000 UTC" firstStartedPulling="2026-01-31 09:25:31.417800828 +0000 UTC m=+1475.911163270" lastFinishedPulling="2026-01-31 09:26:17.81188805 +0000 UTC m=+1522.305250492" observedRunningTime="2026-01-31 09:26:19.208781633 +0000 UTC m=+1523.702144085" watchObservedRunningTime="2026-01-31 09:26:19.229716669 +0000 UTC m=+1523.723079121" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.256469 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f3e08be-d310-4474-99fb-d9226ab6eedb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f3e08be-d310-4474-99fb-d9226ab6eedb" (UID: "8f3e08be-d310-4474-99fb-d9226ab6eedb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.266661 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.26254098 podStartE2EDuration="12.266626988s" podCreationTimestamp="2026-01-31 09:26:07 +0000 UTC" firstStartedPulling="2026-01-31 09:26:08.529572352 +0000 UTC m=+1513.022934794" lastFinishedPulling="2026-01-31 09:26:17.53365836 +0000 UTC m=+1522.027020802" observedRunningTime="2026-01-31 09:26:19.181440786 +0000 UTC m=+1523.674803238" watchObservedRunningTime="2026-01-31 09:26:19.266626988 +0000 UTC m=+1523.759989430" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.276928 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6948bd58db-k47sz" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.277512 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6948bd58db-k47sz" event={"ID":"8f3e08be-d310-4474-99fb-d9226ab6eedb","Type":"ContainerDied","Data":"664eeafbb6c39f134ac2709d86d41cc57703e400362fa62a65b9420e8728ae3d"} Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.277604 4830 scope.go:117] "RemoveContainer" containerID="12da4461a7423985fb15fa6a8127510011961b290ffafb67a07ee27348a2d699" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.294936 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmhw7\" (UniqueName: \"kubernetes.io/projected/8f3e08be-d310-4474-99fb-d9226ab6eedb-kube-api-access-pmhw7\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.301242 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f3e08be-d310-4474-99fb-d9226ab6eedb-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.302365 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3e08be-d310-4474-99fb-d9226ab6eedb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.348048 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f3e08be-d310-4474-99fb-d9226ab6eedb-config-data" (OuterVolumeSpecName: "config-data") pod "8f3e08be-d310-4474-99fb-d9226ab6eedb" (UID: "8f3e08be-d310-4474-99fb-d9226ab6eedb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.406690 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f3e08be-d310-4474-99fb-d9226ab6eedb-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.427048 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-f57b45989-7xfmm" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.611267 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdcbs\" (UniqueName: \"kubernetes.io/projected/de11d1f2-fd91-48c7-9dc3-79748064e53d-kube-api-access-tdcbs\") pod \"de11d1f2-fd91-48c7-9dc3-79748064e53d\" (UID: \"de11d1f2-fd91-48c7-9dc3-79748064e53d\") " Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.611337 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de11d1f2-fd91-48c7-9dc3-79748064e53d-combined-ca-bundle\") pod \"de11d1f2-fd91-48c7-9dc3-79748064e53d\" (UID: \"de11d1f2-fd91-48c7-9dc3-79748064e53d\") " Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.611372 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de11d1f2-fd91-48c7-9dc3-79748064e53d-config-data-custom\") pod \"de11d1f2-fd91-48c7-9dc3-79748064e53d\" (UID: \"de11d1f2-fd91-48c7-9dc3-79748064e53d\") " Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.611513 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de11d1f2-fd91-48c7-9dc3-79748064e53d-config-data\") pod \"de11d1f2-fd91-48c7-9dc3-79748064e53d\" (UID: \"de11d1f2-fd91-48c7-9dc3-79748064e53d\") " Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.652208 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de11d1f2-fd91-48c7-9dc3-79748064e53d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "de11d1f2-fd91-48c7-9dc3-79748064e53d" (UID: "de11d1f2-fd91-48c7-9dc3-79748064e53d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.652221 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de11d1f2-fd91-48c7-9dc3-79748064e53d-kube-api-access-tdcbs" (OuterVolumeSpecName: "kube-api-access-tdcbs") pod "de11d1f2-fd91-48c7-9dc3-79748064e53d" (UID: "de11d1f2-fd91-48c7-9dc3-79748064e53d"). InnerVolumeSpecName "kube-api-access-tdcbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.717945 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdcbs\" (UniqueName: \"kubernetes.io/projected/de11d1f2-fd91-48c7-9dc3-79748064e53d-kube-api-access-tdcbs\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.718091 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de11d1f2-fd91-48c7-9dc3-79748064e53d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.738483 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de11d1f2-fd91-48c7-9dc3-79748064e53d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de11d1f2-fd91-48c7-9dc3-79748064e53d" (UID: "de11d1f2-fd91-48c7-9dc3-79748064e53d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.754621 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de11d1f2-fd91-48c7-9dc3-79748064e53d-config-data" (OuterVolumeSpecName: "config-data") pod "de11d1f2-fd91-48c7-9dc3-79748064e53d" (UID: "de11d1f2-fd91-48c7-9dc3-79748064e53d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.761923 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.820704 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de11d1f2-fd91-48c7-9dc3-79748064e53d-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:19 crc kubenswrapper[4830]: I0131 09:26:19.820773 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de11d1f2-fd91-48c7-9dc3-79748064e53d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.056803 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6948bd58db-k47sz"] Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.096994 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6948bd58db-k47sz"] Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.277882 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f3e08be-d310-4474-99fb-d9226ab6eedb" path="/var/lib/kubelet/pods/8f3e08be-d310-4474-99fb-d9226ab6eedb/volumes" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.296471 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-f57b45989-7xfmm" event={"ID":"de11d1f2-fd91-48c7-9dc3-79748064e53d","Type":"ContainerDied","Data":"37d0816b932e2a2c94808352db0421b8050f8e545b6ddcdee9e81e61a6f60f44"} Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.296490 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-f57b45989-7xfmm" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.296541 4830 scope.go:117] "RemoveContainer" containerID="c9597527182897f5730379a910416cce1be500224567c627cf9a0061702ca197" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.303157 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"65303534-fa3e-4008-9ea1-95cd77e752c9","Type":"ContainerStarted","Data":"71727a2bc3181c8a8d3b57d43486e45b27939e919c0fe39e47e64fa3904c5f88"} Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.332095 4830 generic.go:334] "Generic (PLEG): container finished" podID="22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" containerID="b4fc26d55d14582dc8ce44164579bf49025f52146d438d637b3ce471134703bc" exitCode=0 Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.332134 4830 generic.go:334] "Generic (PLEG): container finished" podID="22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" containerID="ffaa284dfee60c7213ff245ce3df30a7180c9e387dbae4964764089f0908c322" exitCode=2 Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.332148 4830 generic.go:334] "Generic (PLEG): container finished" podID="22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" containerID="6803ced896ddcc76c9368cf99d58ed56cdafe21dfee22e80d9ed8b7d9ac413a3" exitCode=0 Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.332178 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246","Type":"ContainerDied","Data":"b4fc26d55d14582dc8ce44164579bf49025f52146d438d637b3ce471134703bc"} Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.332214 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246","Type":"ContainerDied","Data":"ffaa284dfee60c7213ff245ce3df30a7180c9e387dbae4964764089f0908c322"} Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.332228 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246","Type":"ContainerDied","Data":"6803ced896ddcc76c9368cf99d58ed56cdafe21dfee22e80d9ed8b7d9ac413a3"} Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.350544 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-f57b45989-7xfmm"] Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.362959 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-f57b45989-7xfmm"] Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.610361 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tgdwk"] Jan 31 09:26:20 crc kubenswrapper[4830]: E0131 09:26:20.611029 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f3e08be-d310-4474-99fb-d9226ab6eedb" containerName="heat-api" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.611050 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f3e08be-d310-4474-99fb-d9226ab6eedb" containerName="heat-api" Jan 31 09:26:20 crc kubenswrapper[4830]: E0131 09:26:20.611065 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de11d1f2-fd91-48c7-9dc3-79748064e53d" containerName="heat-cfnapi" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.611072 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="de11d1f2-fd91-48c7-9dc3-79748064e53d" containerName="heat-cfnapi" Jan 31 09:26:20 crc kubenswrapper[4830]: E0131 09:26:20.611087 4830 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f3e08be-d310-4474-99fb-d9226ab6eedb" containerName="heat-api" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.611094 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f3e08be-d310-4474-99fb-d9226ab6eedb" containerName="heat-api" Jan 31 09:26:20 crc kubenswrapper[4830]: E0131 09:26:20.611105 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6090d149-6116-4ccf-981f-67ad48e42a1f" containerName="mariadb-account-create-update" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.611113 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6090d149-6116-4ccf-981f-67ad48e42a1f" containerName="mariadb-account-create-update" Jan 31 09:26:20 crc kubenswrapper[4830]: E0131 09:26:20.611137 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ced551a8-224d-488b-aa58-c424e387ccca" containerName="mariadb-database-create" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.611145 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ced551a8-224d-488b-aa58-c424e387ccca" containerName="mariadb-database-create" Jan 31 09:26:20 crc kubenswrapper[4830]: E0131 09:26:20.611158 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb75ec26-4fae-4778-b520-828660b869cb" containerName="mariadb-account-create-update" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.611166 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb75ec26-4fae-4778-b520-828660b869cb" containerName="mariadb-account-create-update" Jan 31 09:26:20 crc kubenswrapper[4830]: E0131 09:26:20.611184 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89ec8e55-13ac-45e1-b5a6-b38ee34a1702" containerName="mariadb-database-create" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.611190 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ec8e55-13ac-45e1-b5a6-b38ee34a1702" containerName="mariadb-database-create" Jan 31 09:26:20 crc kubenswrapper[4830]: E0131 09:26:20.611197 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9378cf4e-8ab3-4e97-8955-158a9b0c4c26" containerName="mariadb-account-create-update" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.611205 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="9378cf4e-8ab3-4e97-8955-158a9b0c4c26" containerName="mariadb-account-create-update" Jan 31 09:26:20 crc kubenswrapper[4830]: E0131 09:26:20.611224 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="631e221b-b504-4f59-8848-c9427f67c0df" containerName="mariadb-database-create" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.611230 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="631e221b-b504-4f59-8848-c9427f67c0df" containerName="mariadb-database-create" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.611473 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f3e08be-d310-4474-99fb-d9226ab6eedb" containerName="heat-api" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.611484 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="de11d1f2-fd91-48c7-9dc3-79748064e53d" containerName="heat-cfnapi" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.611507 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="de11d1f2-fd91-48c7-9dc3-79748064e53d" containerName="heat-cfnapi" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.611526 4830 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fb75ec26-4fae-4778-b520-828660b869cb" containerName="mariadb-account-create-update" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.611542 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="89ec8e55-13ac-45e1-b5a6-b38ee34a1702" containerName="mariadb-database-create" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.611552 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="631e221b-b504-4f59-8848-c9427f67c0df" containerName="mariadb-database-create" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.611565 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f3e08be-d310-4474-99fb-d9226ab6eedb" containerName="heat-api" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.611576 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ced551a8-224d-488b-aa58-c424e387ccca" containerName="mariadb-database-create" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.611589 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="6090d149-6116-4ccf-981f-67ad48e42a1f" containerName="mariadb-account-create-update" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.611600 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="9378cf4e-8ab3-4e97-8955-158a9b0c4c26" containerName="mariadb-account-create-update" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.612515 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-tgdwk" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.616394 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-cdl7g" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.623576 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.623858 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.644546 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tgdwk"] Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.753900 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5da52f5f-a3fa-4dbc-8089-bf0dac06c78f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-tgdwk\" (UID: \"5da52f5f-a3fa-4dbc-8089-bf0dac06c78f\") " pod="openstack/nova-cell0-conductor-db-sync-tgdwk" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.754216 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5da52f5f-a3fa-4dbc-8089-bf0dac06c78f-config-data\") pod \"nova-cell0-conductor-db-sync-tgdwk\" (UID: \"5da52f5f-a3fa-4dbc-8089-bf0dac06c78f\") " pod="openstack/nova-cell0-conductor-db-sync-tgdwk" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.754284 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5da52f5f-a3fa-4dbc-8089-bf0dac06c78f-scripts\") pod \"nova-cell0-conductor-db-sync-tgdwk\" (UID: \"5da52f5f-a3fa-4dbc-8089-bf0dac06c78f\") " pod="openstack/nova-cell0-conductor-db-sync-tgdwk" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.754355 4830 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh5vd\" (UniqueName: \"kubernetes.io/projected/5da52f5f-a3fa-4dbc-8089-bf0dac06c78f-kube-api-access-xh5vd\") pod \"nova-cell0-conductor-db-sync-tgdwk\" (UID: \"5da52f5f-a3fa-4dbc-8089-bf0dac06c78f\") " pod="openstack/nova-cell0-conductor-db-sync-tgdwk" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.856252 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5da52f5f-a3fa-4dbc-8089-bf0dac06c78f-config-data\") pod \"nova-cell0-conductor-db-sync-tgdwk\" (UID: \"5da52f5f-a3fa-4dbc-8089-bf0dac06c78f\") " pod="openstack/nova-cell0-conductor-db-sync-tgdwk" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.856313 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5da52f5f-a3fa-4dbc-8089-bf0dac06c78f-scripts\") pod \"nova-cell0-conductor-db-sync-tgdwk\" (UID: \"5da52f5f-a3fa-4dbc-8089-bf0dac06c78f\") " pod="openstack/nova-cell0-conductor-db-sync-tgdwk" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.856358 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh5vd\" (UniqueName: \"kubernetes.io/projected/5da52f5f-a3fa-4dbc-8089-bf0dac06c78f-kube-api-access-xh5vd\") pod \"nova-cell0-conductor-db-sync-tgdwk\" (UID: \"5da52f5f-a3fa-4dbc-8089-bf0dac06c78f\") " pod="openstack/nova-cell0-conductor-db-sync-tgdwk" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.856445 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5da52f5f-a3fa-4dbc-8089-bf0dac06c78f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-tgdwk\" (UID: \"5da52f5f-a3fa-4dbc-8089-bf0dac06c78f\") " pod="openstack/nova-cell0-conductor-db-sync-tgdwk" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.860997 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5da52f5f-a3fa-4dbc-8089-bf0dac06c78f-config-data\") pod \"nova-cell0-conductor-db-sync-tgdwk\" (UID: \"5da52f5f-a3fa-4dbc-8089-bf0dac06c78f\") " pod="openstack/nova-cell0-conductor-db-sync-tgdwk" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.870284 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5da52f5f-a3fa-4dbc-8089-bf0dac06c78f-scripts\") pod \"nova-cell0-conductor-db-sync-tgdwk\" (UID: \"5da52f5f-a3fa-4dbc-8089-bf0dac06c78f\") " pod="openstack/nova-cell0-conductor-db-sync-tgdwk" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.872741 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5da52f5f-a3fa-4dbc-8089-bf0dac06c78f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-tgdwk\" (UID: \"5da52f5f-a3fa-4dbc-8089-bf0dac06c78f\") " pod="openstack/nova-cell0-conductor-db-sync-tgdwk" Jan 31 09:26:20 crc kubenswrapper[4830]: I0131 09:26:20.907777 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh5vd\" (UniqueName: \"kubernetes.io/projected/5da52f5f-a3fa-4dbc-8089-bf0dac06c78f-kube-api-access-xh5vd\") pod \"nova-cell0-conductor-db-sync-tgdwk\" (UID: \"5da52f5f-a3fa-4dbc-8089-bf0dac06c78f\") " pod="openstack/nova-cell0-conductor-db-sync-tgdwk" Jan 31 09:26:20 crc 
kubenswrapper[4830]: I0131 09:26:20.937238 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-tgdwk" Jan 31 09:26:21 crc kubenswrapper[4830]: I0131 09:26:21.403012 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"65303534-fa3e-4008-9ea1-95cd77e752c9","Type":"ContainerStarted","Data":"99d10f1501a991ea28527b24a85d98ce1bc913f45fc6a642a9522ce70a2026cc"} Jan 31 09:26:21 crc kubenswrapper[4830]: I0131 09:26:21.681957 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tgdwk"] Jan 31 09:26:21 crc kubenswrapper[4830]: I0131 09:26:21.815216 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-587fd67997-pvqls" Jan 31 09:26:21 crc kubenswrapper[4830]: I0131 09:26:21.915512 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-666cdcb7b8-d25gt"] Jan 31 09:26:21 crc kubenswrapper[4830]: I0131 09:26:21.915803 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-666cdcb7b8-d25gt" podUID="07e6233b-8dfa-42db-8e5f-62dbe5372610" containerName="heat-engine" containerID="cri-o://e522becee83fe1b6467a13a68505e16f4514e10b42775e0bcb9984296834555d" gracePeriod=60 Jan 31 09:26:22 crc kubenswrapper[4830]: I0131 09:26:22.268558 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de11d1f2-fd91-48c7-9dc3-79748064e53d" path="/var/lib/kubelet/pods/de11d1f2-fd91-48c7-9dc3-79748064e53d/volumes" Jan 31 09:26:22 crc kubenswrapper[4830]: I0131 09:26:22.425368 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-tgdwk" event={"ID":"5da52f5f-a3fa-4dbc-8089-bf0dac06c78f","Type":"ContainerStarted","Data":"54daaccbc2c44b5f351e2f3aa902758c523ddb4a600c8c624d35887ab4703fb2"} Jan 31 09:26:22 crc kubenswrapper[4830]: I0131 09:26:22.428414 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"65303534-fa3e-4008-9ea1-95cd77e752c9","Type":"ContainerStarted","Data":"2095d2c0767fd8718ea138dfbf3de091ecc91cddcf30d6d4a16a089af821ce14"} Jan 31 09:26:22 crc kubenswrapper[4830]: I0131 09:26:22.432569 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 31 09:26:22 crc kubenswrapper[4830]: I0131 09:26:22.432925 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 31 09:26:22 crc kubenswrapper[4830]: I0131 09:26:22.462587 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.462561199 podStartE2EDuration="8.462561199s" podCreationTimestamp="2026-01-31 09:26:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:26:22.452534735 +0000 UTC m=+1526.945897177" watchObservedRunningTime="2026-01-31 09:26:22.462561199 +0000 UTC m=+1526.955923641" Jan 31 09:26:22 crc kubenswrapper[4830]: I0131 09:26:22.495677 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 31 09:26:22 crc kubenswrapper[4830]: I0131 09:26:22.511090 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" 
Jan 31 09:26:23 crc kubenswrapper[4830]: I0131 09:26:23.449085 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 31 09:26:23 crc kubenswrapper[4830]: I0131 09:26:23.450579 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 31 09:26:25 crc kubenswrapper[4830]: I0131 09:26:25.920845 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:25 crc kubenswrapper[4830]: I0131 09:26:25.921435 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:26 crc kubenswrapper[4830]: I0131 09:26:26.013689 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:26 crc kubenswrapper[4830]: I0131 09:26:26.053650 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:26 crc kubenswrapper[4830]: I0131 09:26:26.263236 4830 scope.go:117] "RemoveContainer" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0"
Jan 31 09:26:26 crc kubenswrapper[4830]: E0131 09:26:26.264029 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:26:26 crc kubenswrapper[4830]: I0131 09:26:26.515522 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:26 crc kubenswrapper[4830]: I0131 09:26:26.516071 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:28 crc kubenswrapper[4830]: I0131 09:26:28.053494 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 31 09:26:28 crc kubenswrapper[4830]: I0131 09:26:28.054121 4830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 31 09:26:28 crc kubenswrapper[4830]: I0131 09:26:28.059989 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 31 09:26:28 crc kubenswrapper[4830]: E0131 09:26:28.824397 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e522becee83fe1b6467a13a68505e16f4514e10b42775e0bcb9984296834555d" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 31 09:26:28 crc kubenswrapper[4830]: E0131 09:26:28.827644 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e522becee83fe1b6467a13a68505e16f4514e10b42775e0bcb9984296834555d" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 31 09:26:28 crc kubenswrapper[4830]: E0131 09:26:28.840382 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e522becee83fe1b6467a13a68505e16f4514e10b42775e0bcb9984296834555d" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 31 09:26:28 crc kubenswrapper[4830]: E0131 09:26:28.840477 4830 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-666cdcb7b8-d25gt" podUID="07e6233b-8dfa-42db-8e5f-62dbe5372610" containerName="heat-engine"
Jan 31 09:26:29 crc kubenswrapper[4830]: I0131 09:26:29.383069 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:29 crc kubenswrapper[4830]: I0131 09:26:29.383206 4830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 31 09:26:29 crc kubenswrapper[4830]: I0131 09:26:29.390368 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 31 09:26:31 crc kubenswrapper[4830]: I0131 09:26:31.639822 4830 generic.go:334] "Generic (PLEG): container finished" podID="22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" containerID="b50060fcb658e52a780891a55ae6a27f256e9873b517b788c78ba792c3c4be75" exitCode=0
Jan 31 09:26:31 crc kubenswrapper[4830]: I0131 09:26:31.639889 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246","Type":"ContainerDied","Data":"b50060fcb658e52a780891a55ae6a27f256e9873b517b788c78ba792c3c4be75"}
Jan 31 09:26:33 crc kubenswrapper[4830]: I0131 09:26:33.688822 4830 generic.go:334] "Generic (PLEG): container finished" podID="07e6233b-8dfa-42db-8e5f-62dbe5372610" containerID="e522becee83fe1b6467a13a68505e16f4514e10b42775e0bcb9984296834555d" exitCode=0
Jan 31 09:26:33 crc kubenswrapper[4830]: I0131 09:26:33.689046 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-666cdcb7b8-d25gt" event={"ID":"07e6233b-8dfa-42db-8e5f-62dbe5372610","Type":"ContainerDied","Data":"e522becee83fe1b6467a13a68505e16f4514e10b42775e0bcb9984296834555d"}
Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.313211 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.474998 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-666cdcb7b8-d25gt"
Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.511229 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-log-httpd\") pod \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") "
Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.511347 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-scripts\") pod \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") "
Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.511398 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-combined-ca-bundle\") pod \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") "
Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.511419 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-sg-core-conf-yaml\") pod \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") "
Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.511496 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfh7w\" (UniqueName: \"kubernetes.io/projected/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-kube-api-access-qfh7w\") pod \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") "
Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.511568 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-run-httpd\") pod \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") "
Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.511688 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-config-data\") pod \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\" (UID: \"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246\") "
Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.512505 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" (UID: "22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.512982 4830 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.513029 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" (UID: "22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246").
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.520151 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-kube-api-access-qfh7w" (OuterVolumeSpecName: "kube-api-access-qfh7w") pod "22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" (UID: "22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246"). InnerVolumeSpecName "kube-api-access-qfh7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.534965 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-scripts" (OuterVolumeSpecName: "scripts") pod "22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" (UID: "22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.561904 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" (UID: "22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.615050 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e6233b-8dfa-42db-8e5f-62dbe5372610-combined-ca-bundle\") pod \"07e6233b-8dfa-42db-8e5f-62dbe5372610\" (UID: \"07e6233b-8dfa-42db-8e5f-62dbe5372610\") " Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.615138 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5klr5\" (UniqueName: \"kubernetes.io/projected/07e6233b-8dfa-42db-8e5f-62dbe5372610-kube-api-access-5klr5\") pod \"07e6233b-8dfa-42db-8e5f-62dbe5372610\" (UID: \"07e6233b-8dfa-42db-8e5f-62dbe5372610\") " Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.615332 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e6233b-8dfa-42db-8e5f-62dbe5372610-config-data\") pod \"07e6233b-8dfa-42db-8e5f-62dbe5372610\" (UID: \"07e6233b-8dfa-42db-8e5f-62dbe5372610\") " Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.615434 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07e6233b-8dfa-42db-8e5f-62dbe5372610-config-data-custom\") pod \"07e6233b-8dfa-42db-8e5f-62dbe5372610\" (UID: \"07e6233b-8dfa-42db-8e5f-62dbe5372610\") " Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.616145 4830 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.616169 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfh7w\" (UniqueName: \"kubernetes.io/projected/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-kube-api-access-qfh7w\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.616181 4830 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.616190 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.621856 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e6233b-8dfa-42db-8e5f-62dbe5372610-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "07e6233b-8dfa-42db-8e5f-62dbe5372610" (UID: "07e6233b-8dfa-42db-8e5f-62dbe5372610"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.622558 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e6233b-8dfa-42db-8e5f-62dbe5372610-kube-api-access-5klr5" (OuterVolumeSpecName: "kube-api-access-5klr5") pod "07e6233b-8dfa-42db-8e5f-62dbe5372610" (UID: "07e6233b-8dfa-42db-8e5f-62dbe5372610"). InnerVolumeSpecName "kube-api-access-5klr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.648609 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" (UID: "22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.671098 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e6233b-8dfa-42db-8e5f-62dbe5372610-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07e6233b-8dfa-42db-8e5f-62dbe5372610" (UID: "07e6233b-8dfa-42db-8e5f-62dbe5372610"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.718938 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e6233b-8dfa-42db-8e5f-62dbe5372610-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.718980 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5klr5\" (UniqueName: \"kubernetes.io/projected/07e6233b-8dfa-42db-8e5f-62dbe5372610-kube-api-access-5klr5\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.718993 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07e6233b-8dfa-42db-8e5f-62dbe5372610-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.719004 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.727933 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-config-data" (OuterVolumeSpecName: "config-data") pod "22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" (UID: "22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.742540 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e6233b-8dfa-42db-8e5f-62dbe5372610-config-data" (OuterVolumeSpecName: "config-data") pod "07e6233b-8dfa-42db-8e5f-62dbe5372610" (UID: "07e6233b-8dfa-42db-8e5f-62dbe5372610"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.747952 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246","Type":"ContainerDied","Data":"bdbd802b8245c47749f2f516fa912b95812c54407e3c12685691aed1de7ac3b4"} Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.748650 4830 scope.go:117] "RemoveContainer" containerID="b4fc26d55d14582dc8ce44164579bf49025f52146d438d637b3ce471134703bc" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.748317 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.751145 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-tgdwk" event={"ID":"5da52f5f-a3fa-4dbc-8089-bf0dac06c78f","Type":"ContainerStarted","Data":"565dce176c45178d9047506d1a941b39571d032c601b0b7aa5a9b05eb3d88775"} Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.759805 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-666cdcb7b8-d25gt" event={"ID":"07e6233b-8dfa-42db-8e5f-62dbe5372610","Type":"ContainerDied","Data":"517129cbc72083038bc2ac937e20f39da5212e9a723992ee10bfc9669b2e90ac"} Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.759995 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-666cdcb7b8-d25gt" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.789031 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-tgdwk" podStartSLOduration=2.717070914 podStartE2EDuration="16.788985292s" podCreationTimestamp="2026-01-31 09:26:20 +0000 UTC" firstStartedPulling="2026-01-31 09:26:21.717523428 +0000 UTC m=+1526.210885870" lastFinishedPulling="2026-01-31 09:26:35.789437806 +0000 UTC m=+1540.282800248" observedRunningTime="2026-01-31 09:26:36.774937444 +0000 UTC m=+1541.268299886" watchObservedRunningTime="2026-01-31 09:26:36.788985292 +0000 UTC m=+1541.282347734" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.821670 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.821772 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e6233b-8dfa-42db-8e5f-62dbe5372610-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.883316 4830 scope.go:117] "RemoveContainer" containerID="ffaa284dfee60c7213ff245ce3df30a7180c9e387dbae4964764089f0908c322" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.909226 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.923329 4830 scope.go:117] "RemoveContainer" containerID="6803ced896ddcc76c9368cf99d58ed56cdafe21dfee22e80d9ed8b7d9ac413a3" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.923851 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.939932 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-666cdcb7b8-d25gt"] Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.958682 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-666cdcb7b8-d25gt"] Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.966753 4830 scope.go:117] "RemoveContainer" containerID="b50060fcb658e52a780891a55ae6a27f256e9873b517b788c78ba792c3c4be75" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.975332 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:26:36 crc kubenswrapper[4830]: E0131 09:26:36.976180 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" containerName="ceilometer-notification-agent" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.976207 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" containerName="ceilometer-notification-agent" Jan 31 09:26:36 crc kubenswrapper[4830]: E0131 09:26:36.976219 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" containerName="ceilometer-central-agent" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.976227 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" containerName="ceilometer-central-agent" Jan 31 09:26:36 crc kubenswrapper[4830]: E0131 09:26:36.976256 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e6233b-8dfa-42db-8e5f-62dbe5372610" 
containerName="heat-engine" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.976266 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e6233b-8dfa-42db-8e5f-62dbe5372610" containerName="heat-engine" Jan 31 09:26:36 crc kubenswrapper[4830]: E0131 09:26:36.976282 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" containerName="sg-core" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.976291 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" containerName="sg-core" Jan 31 09:26:36 crc kubenswrapper[4830]: E0131 09:26:36.976331 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de11d1f2-fd91-48c7-9dc3-79748064e53d" containerName="heat-cfnapi" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.976338 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="de11d1f2-fd91-48c7-9dc3-79748064e53d" containerName="heat-cfnapi" Jan 31 09:26:36 crc kubenswrapper[4830]: E0131 09:26:36.976361 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" containerName="proxy-httpd" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.976368 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" containerName="proxy-httpd" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.976665 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" containerName="ceilometer-notification-agent" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.976703 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" containerName="sg-core" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.976719 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" containerName="ceilometer-central-agent" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.976747 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" containerName="proxy-httpd" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.976758 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e6233b-8dfa-42db-8e5f-62dbe5372610" containerName="heat-engine" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.979988 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.982531 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.983081 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 09:26:36 crc kubenswrapper[4830]: I0131 09:26:36.989030 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:26:37 crc kubenswrapper[4830]: I0131 09:26:37.025266 4830 scope.go:117] "RemoveContainer" containerID="e522becee83fe1b6467a13a68505e16f4514e10b42775e0bcb9984296834555d" Jan 31 09:26:37 crc kubenswrapper[4830]: I0131 09:26:37.129273 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20e72fdb-b11a-4573-844c-475d2967f8ac-scripts\") pod \"ceilometer-0\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " pod="openstack/ceilometer-0" Jan 31 09:26:37 crc kubenswrapper[4830]: I0131 09:26:37.129372 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20e72fdb-b11a-4573-844c-475d2967f8ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " pod="openstack/ceilometer-0" Jan 31 09:26:37 crc kubenswrapper[4830]: I0131 09:26:37.129406 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e72fdb-b11a-4573-844c-475d2967f8ac-log-httpd\") pod \"ceilometer-0\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " pod="openstack/ceilometer-0" Jan 31 09:26:37 crc kubenswrapper[4830]: I0131 09:26:37.129507 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vl5t\" (UniqueName: \"kubernetes.io/projected/20e72fdb-b11a-4573-844c-475d2967f8ac-kube-api-access-9vl5t\") pod \"ceilometer-0\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " pod="openstack/ceilometer-0" Jan 31 09:26:37 crc kubenswrapper[4830]: I0131 09:26:37.129533 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e72fdb-b11a-4573-844c-475d2967f8ac-run-httpd\") pod \"ceilometer-0\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " pod="openstack/ceilometer-0" Jan 31 09:26:37 crc kubenswrapper[4830]: I0131 09:26:37.129572 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20e72fdb-b11a-4573-844c-475d2967f8ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " pod="openstack/ceilometer-0" Jan 31 09:26:37 crc kubenswrapper[4830]: I0131 09:26:37.129598 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20e72fdb-b11a-4573-844c-475d2967f8ac-config-data\") pod \"ceilometer-0\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " pod="openstack/ceilometer-0" Jan 31 09:26:37 crc kubenswrapper[4830]: I0131 09:26:37.232309 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/20e72fdb-b11a-4573-844c-475d2967f8ac-run-httpd\") pod \"ceilometer-0\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " pod="openstack/ceilometer-0" Jan 31 09:26:37 crc kubenswrapper[4830]: I0131 09:26:37.232707 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vl5t\" (UniqueName: \"kubernetes.io/projected/20e72fdb-b11a-4573-844c-475d2967f8ac-kube-api-access-9vl5t\") pod \"ceilometer-0\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " pod="openstack/ceilometer-0" Jan 31 09:26:37 crc kubenswrapper[4830]: I0131 09:26:37.232921 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e72fdb-b11a-4573-844c-475d2967f8ac-run-httpd\") pod \"ceilometer-0\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " pod="openstack/ceilometer-0" Jan 31 09:26:37 crc kubenswrapper[4830]: I0131 09:26:37.232919 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20e72fdb-b11a-4573-844c-475d2967f8ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " pod="openstack/ceilometer-0" Jan 31 09:26:37 crc kubenswrapper[4830]: I0131 09:26:37.233129 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20e72fdb-b11a-4573-844c-475d2967f8ac-config-data\") pod \"ceilometer-0\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " pod="openstack/ceilometer-0" Jan 31 09:26:37 crc kubenswrapper[4830]: I0131 09:26:37.233517 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20e72fdb-b11a-4573-844c-475d2967f8ac-scripts\") pod \"ceilometer-0\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " pod="openstack/ceilometer-0" Jan 31 09:26:37 crc kubenswrapper[4830]: I0131 09:26:37.233717 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20e72fdb-b11a-4573-844c-475d2967f8ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " pod="openstack/ceilometer-0" Jan 31 09:26:37 crc kubenswrapper[4830]: I0131 09:26:37.233831 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e72fdb-b11a-4573-844c-475d2967f8ac-log-httpd\") pod \"ceilometer-0\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " pod="openstack/ceilometer-0" Jan 31 09:26:37 crc kubenswrapper[4830]: I0131 09:26:37.234308 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e72fdb-b11a-4573-844c-475d2967f8ac-log-httpd\") pod \"ceilometer-0\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " pod="openstack/ceilometer-0" Jan 31 09:26:37 crc kubenswrapper[4830]: I0131 09:26:37.238064 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20e72fdb-b11a-4573-844c-475d2967f8ac-scripts\") pod \"ceilometer-0\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " pod="openstack/ceilometer-0" Jan 31 09:26:37 crc kubenswrapper[4830]: I0131 09:26:37.238104 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20e72fdb-b11a-4573-844c-475d2967f8ac-config-data\") 
pod \"ceilometer-0\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " pod="openstack/ceilometer-0" Jan 31 09:26:37 crc kubenswrapper[4830]: I0131 09:26:37.244887 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20e72fdb-b11a-4573-844c-475d2967f8ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " pod="openstack/ceilometer-0" Jan 31 09:26:37 crc kubenswrapper[4830]: I0131 09:26:37.252315 4830 scope.go:117] "RemoveContainer" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0" Jan 31 09:26:37 crc kubenswrapper[4830]: E0131 09:26:37.252709 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:26:37 crc kubenswrapper[4830]: I0131 09:26:37.259409 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20e72fdb-b11a-4573-844c-475d2967f8ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " pod="openstack/ceilometer-0" Jan 31 09:26:37 crc kubenswrapper[4830]: I0131 09:26:37.268609 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vl5t\" (UniqueName: \"kubernetes.io/projected/20e72fdb-b11a-4573-844c-475d2967f8ac-kube-api-access-9vl5t\") pod \"ceilometer-0\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " pod="openstack/ceilometer-0" Jan 31 09:26:37 crc kubenswrapper[4830]: I0131 09:26:37.317824 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 09:26:37 crc kubenswrapper[4830]: I0131 09:26:37.879160 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:26:38 crc kubenswrapper[4830]: I0131 09:26:38.280979 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07e6233b-8dfa-42db-8e5f-62dbe5372610" path="/var/lib/kubelet/pods/07e6233b-8dfa-42db-8e5f-62dbe5372610/volumes" Jan 31 09:26:38 crc kubenswrapper[4830]: I0131 09:26:38.282221 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246" path="/var/lib/kubelet/pods/22fcbbb1-bfe8-4d6a-b0ea-6e0949dd7246/volumes" Jan 31 09:26:38 crc kubenswrapper[4830]: I0131 09:26:38.792755 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e72fdb-b11a-4573-844c-475d2967f8ac","Type":"ContainerStarted","Data":"ba6dfcf1e350a5368d73f6940d4dd5634d85299c29af73e7b546e020eb54c2d9"} Jan 31 09:26:38 crc kubenswrapper[4830]: I0131 09:26:38.793143 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e72fdb-b11a-4573-844c-475d2967f8ac","Type":"ContainerStarted","Data":"5040d85044d7f4c2044af82438ad626b964ada43b41a43b347d79d95b7c8320e"} Jan 31 09:26:39 crc kubenswrapper[4830]: I0131 09:26:39.827470 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e72fdb-b11a-4573-844c-475d2967f8ac","Type":"ContainerStarted","Data":"5084136b971b4112f29d18b0dfab4a73eb35cf3507d380929ff5ba9bb1967c39"} Jan 31 09:26:40 crc kubenswrapper[4830]: I0131 09:26:40.871175 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e72fdb-b11a-4573-844c-475d2967f8ac","Type":"ContainerStarted","Data":"1f7f3d7c70997ea5294afc049daa565ad1580abd8255e828795e85adf50aada4"} Jan 31 09:26:41 crc kubenswrapper[4830]: I0131 09:26:41.093635 4830 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podff4e5fbc-7e45-42b7-8af6-ff34b36bb594"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podff4e5fbc-7e45-42b7-8af6-ff34b36bb594] : Timed out while waiting for systemd to remove kubepods-besteffort-podff4e5fbc_7e45_42b7_8af6_ff34b36bb594.slice" Jan 31 09:26:42 crc kubenswrapper[4830]: I0131 09:26:42.711499 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:26:42 crc kubenswrapper[4830]: I0131 09:26:42.898694 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e72fdb-b11a-4573-844c-475d2967f8ac","Type":"ContainerStarted","Data":"840c1bade5bb6a14f7f1be80e3774d07a66c9e1933e911852e4d990baa6d6bda"} Jan 31 09:26:42 crc kubenswrapper[4830]: I0131 09:26:42.899081 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20e72fdb-b11a-4573-844c-475d2967f8ac" containerName="ceilometer-central-agent" containerID="cri-o://ba6dfcf1e350a5368d73f6940d4dd5634d85299c29af73e7b546e020eb54c2d9" gracePeriod=30 Jan 31 09:26:42 crc kubenswrapper[4830]: I0131 09:26:42.899201 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20e72fdb-b11a-4573-844c-475d2967f8ac" containerName="sg-core" containerID="cri-o://1f7f3d7c70997ea5294afc049daa565ad1580abd8255e828795e85adf50aada4" gracePeriod=30 Jan 31 09:26:42 crc kubenswrapper[4830]: I0131 09:26:42.899177 4830 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20e72fdb-b11a-4573-844c-475d2967f8ac" containerName="proxy-httpd" containerID="cri-o://840c1bade5bb6a14f7f1be80e3774d07a66c9e1933e911852e4d990baa6d6bda" gracePeriod=30 Jan 31 09:26:42 crc kubenswrapper[4830]: I0131 09:26:42.899247 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20e72fdb-b11a-4573-844c-475d2967f8ac" containerName="ceilometer-notification-agent" containerID="cri-o://5084136b971b4112f29d18b0dfab4a73eb35cf3507d380929ff5ba9bb1967c39" gracePeriod=30 Jan 31 09:26:42 crc kubenswrapper[4830]: I0131 09:26:42.899410 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 09:26:42 crc kubenswrapper[4830]: I0131 09:26:42.942146 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.455013798 podStartE2EDuration="6.942095187s" podCreationTimestamp="2026-01-31 09:26:36 +0000 UTC" firstStartedPulling="2026-01-31 09:26:37.890449167 +0000 UTC m=+1542.383811609" lastFinishedPulling="2026-01-31 09:26:42.377530556 +0000 UTC m=+1546.870892998" observedRunningTime="2026-01-31 09:26:42.93340798 +0000 UTC m=+1547.426770422" watchObservedRunningTime="2026-01-31 09:26:42.942095187 +0000 UTC m=+1547.435457629" Jan 31 09:26:43 crc kubenswrapper[4830]: I0131 09:26:43.914226 4830 generic.go:334] "Generic (PLEG): container finished" podID="20e72fdb-b11a-4573-844c-475d2967f8ac" containerID="1f7f3d7c70997ea5294afc049daa565ad1580abd8255e828795e85adf50aada4" exitCode=2 Jan 31 09:26:43 crc kubenswrapper[4830]: I0131 09:26:43.914709 4830 generic.go:334] "Generic (PLEG): container finished" podID="20e72fdb-b11a-4573-844c-475d2967f8ac" containerID="5084136b971b4112f29d18b0dfab4a73eb35cf3507d380929ff5ba9bb1967c39" exitCode=0 Jan 31 09:26:43 crc kubenswrapper[4830]: I0131 09:26:43.914751 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e72fdb-b11a-4573-844c-475d2967f8ac","Type":"ContainerDied","Data":"1f7f3d7c70997ea5294afc049daa565ad1580abd8255e828795e85adf50aada4"} Jan 31 09:26:43 crc kubenswrapper[4830]: I0131 09:26:43.914786 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e72fdb-b11a-4573-844c-475d2967f8ac","Type":"ContainerDied","Data":"5084136b971b4112f29d18b0dfab4a73eb35cf3507d380929ff5ba9bb1967c39"} Jan 31 09:26:48 crc kubenswrapper[4830]: I0131 09:26:48.252346 4830 scope.go:117] "RemoveContainer" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0" Jan 31 09:26:48 crc kubenswrapper[4830]: E0131 09:26:48.255177 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:26:52 crc kubenswrapper[4830]: I0131 09:26:52.045004 4830 generic.go:334] "Generic (PLEG): container finished" podID="20e72fdb-b11a-4573-844c-475d2967f8ac" containerID="ba6dfcf1e350a5368d73f6940d4dd5634d85299c29af73e7b546e020eb54c2d9" exitCode=0 Jan 31 09:26:52 crc kubenswrapper[4830]: I0131 09:26:52.045884 4830 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"20e72fdb-b11a-4573-844c-475d2967f8ac","Type":"ContainerDied","Data":"ba6dfcf1e350a5368d73f6940d4dd5634d85299c29af73e7b546e020eb54c2d9"} Jan 31 09:26:54 crc kubenswrapper[4830]: I0131 09:26:54.072810 4830 generic.go:334] "Generic (PLEG): container finished" podID="5da52f5f-a3fa-4dbc-8089-bf0dac06c78f" containerID="565dce176c45178d9047506d1a941b39571d032c601b0b7aa5a9b05eb3d88775" exitCode=0 Jan 31 09:26:54 crc kubenswrapper[4830]: I0131 09:26:54.072898 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-tgdwk" event={"ID":"5da52f5f-a3fa-4dbc-8089-bf0dac06c78f","Type":"ContainerDied","Data":"565dce176c45178d9047506d1a941b39571d032c601b0b7aa5a9b05eb3d88775"} Jan 31 09:26:55 crc kubenswrapper[4830]: I0131 09:26:55.638181 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-tgdwk" Jan 31 09:26:55 crc kubenswrapper[4830]: I0131 09:26:55.741946 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5da52f5f-a3fa-4dbc-8089-bf0dac06c78f-config-data\") pod \"5da52f5f-a3fa-4dbc-8089-bf0dac06c78f\" (UID: \"5da52f5f-a3fa-4dbc-8089-bf0dac06c78f\") " Jan 31 09:26:55 crc kubenswrapper[4830]: I0131 09:26:55.742428 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh5vd\" (UniqueName: \"kubernetes.io/projected/5da52f5f-a3fa-4dbc-8089-bf0dac06c78f-kube-api-access-xh5vd\") pod \"5da52f5f-a3fa-4dbc-8089-bf0dac06c78f\" (UID: \"5da52f5f-a3fa-4dbc-8089-bf0dac06c78f\") " Jan 31 09:26:55 crc kubenswrapper[4830]: I0131 09:26:55.742568 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5da52f5f-a3fa-4dbc-8089-bf0dac06c78f-combined-ca-bundle\") pod \"5da52f5f-a3fa-4dbc-8089-bf0dac06c78f\" (UID: \"5da52f5f-a3fa-4dbc-8089-bf0dac06c78f\") " Jan 31 09:26:55 crc kubenswrapper[4830]: I0131 09:26:55.742651 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5da52f5f-a3fa-4dbc-8089-bf0dac06c78f-scripts\") pod \"5da52f5f-a3fa-4dbc-8089-bf0dac06c78f\" (UID: \"5da52f5f-a3fa-4dbc-8089-bf0dac06c78f\") " Jan 31 09:26:55 crc kubenswrapper[4830]: I0131 09:26:55.750162 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5da52f5f-a3fa-4dbc-8089-bf0dac06c78f-kube-api-access-xh5vd" (OuterVolumeSpecName: "kube-api-access-xh5vd") pod "5da52f5f-a3fa-4dbc-8089-bf0dac06c78f" (UID: "5da52f5f-a3fa-4dbc-8089-bf0dac06c78f"). InnerVolumeSpecName "kube-api-access-xh5vd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:26:55 crc kubenswrapper[4830]: I0131 09:26:55.755759 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5da52f5f-a3fa-4dbc-8089-bf0dac06c78f-scripts" (OuterVolumeSpecName: "scripts") pod "5da52f5f-a3fa-4dbc-8089-bf0dac06c78f" (UID: "5da52f5f-a3fa-4dbc-8089-bf0dac06c78f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:55 crc kubenswrapper[4830]: I0131 09:26:55.787047 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5da52f5f-a3fa-4dbc-8089-bf0dac06c78f-config-data" (OuterVolumeSpecName: "config-data") pod "5da52f5f-a3fa-4dbc-8089-bf0dac06c78f" (UID: "5da52f5f-a3fa-4dbc-8089-bf0dac06c78f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:55 crc kubenswrapper[4830]: I0131 09:26:55.795768 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5da52f5f-a3fa-4dbc-8089-bf0dac06c78f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5da52f5f-a3fa-4dbc-8089-bf0dac06c78f" (UID: "5da52f5f-a3fa-4dbc-8089-bf0dac06c78f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:26:55 crc kubenswrapper[4830]: I0131 09:26:55.846207 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5da52f5f-a3fa-4dbc-8089-bf0dac06c78f-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:55 crc kubenswrapper[4830]: I0131 09:26:55.846258 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh5vd\" (UniqueName: \"kubernetes.io/projected/5da52f5f-a3fa-4dbc-8089-bf0dac06c78f-kube-api-access-xh5vd\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:55 crc kubenswrapper[4830]: I0131 09:26:55.846271 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5da52f5f-a3fa-4dbc-8089-bf0dac06c78f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:55 crc kubenswrapper[4830]: I0131 09:26:55.846279 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5da52f5f-a3fa-4dbc-8089-bf0dac06c78f-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:56 crc kubenswrapper[4830]: I0131 09:26:56.103519 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-tgdwk" event={"ID":"5da52f5f-a3fa-4dbc-8089-bf0dac06c78f","Type":"ContainerDied","Data":"54daaccbc2c44b5f351e2f3aa902758c523ddb4a600c8c624d35887ab4703fb2"} Jan 31 09:26:56 crc kubenswrapper[4830]: I0131 09:26:56.103585 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54daaccbc2c44b5f351e2f3aa902758c523ddb4a600c8c624d35887ab4703fb2" Jan 31 09:26:56 crc kubenswrapper[4830]: I0131 09:26:56.103669 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-tgdwk" Jan 31 09:26:56 crc kubenswrapper[4830]: I0131 09:26:56.220605 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 09:26:56 crc kubenswrapper[4830]: E0131 09:26:56.221311 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5da52f5f-a3fa-4dbc-8089-bf0dac06c78f" containerName="nova-cell0-conductor-db-sync" Jan 31 09:26:56 crc kubenswrapper[4830]: I0131 09:26:56.221334 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="5da52f5f-a3fa-4dbc-8089-bf0dac06c78f" containerName="nova-cell0-conductor-db-sync" Jan 31 09:26:56 crc kubenswrapper[4830]: I0131 09:26:56.221605 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="5da52f5f-a3fa-4dbc-8089-bf0dac06c78f" containerName="nova-cell0-conductor-db-sync" Jan 31 09:26:56 crc kubenswrapper[4830]: I0131 09:26:56.222603 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 31 09:26:56 crc kubenswrapper[4830]: I0131 09:26:56.233498 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-cdl7g" Jan 31 09:26:56 crc kubenswrapper[4830]: I0131 09:26:56.234067 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 31 09:26:56 crc kubenswrapper[4830]: I0131 09:26:56.307250 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 09:26:56 crc kubenswrapper[4830]: I0131 09:26:56.382129 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211bb9c5-07d6-4936-b444-3544b2db1b19-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"211bb9c5-07d6-4936-b444-3544b2db1b19\") " pod="openstack/nova-cell0-conductor-0" Jan 31 09:26:56 crc kubenswrapper[4830]: I0131 09:26:56.382474 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlp62\" (UniqueName: \"kubernetes.io/projected/211bb9c5-07d6-4936-b444-3544b2db1b19-kube-api-access-nlp62\") pod \"nova-cell0-conductor-0\" (UID: \"211bb9c5-07d6-4936-b444-3544b2db1b19\") " pod="openstack/nova-cell0-conductor-0" Jan 31 09:26:56 crc kubenswrapper[4830]: I0131 09:26:56.382517 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/211bb9c5-07d6-4936-b444-3544b2db1b19-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"211bb9c5-07d6-4936-b444-3544b2db1b19\") " pod="openstack/nova-cell0-conductor-0" Jan 31 09:26:56 crc kubenswrapper[4830]: I0131 09:26:56.486246 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211bb9c5-07d6-4936-b444-3544b2db1b19-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"211bb9c5-07d6-4936-b444-3544b2db1b19\") " pod="openstack/nova-cell0-conductor-0" Jan 31 09:26:56 crc kubenswrapper[4830]: I0131 09:26:56.486339 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlp62\" (UniqueName: \"kubernetes.io/projected/211bb9c5-07d6-4936-b444-3544b2db1b19-kube-api-access-nlp62\") pod \"nova-cell0-conductor-0\" (UID: \"211bb9c5-07d6-4936-b444-3544b2db1b19\") " pod="openstack/nova-cell0-conductor-0" Jan 31 09:26:56 crc 
kubenswrapper[4830]: I0131 09:26:56.486365 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/211bb9c5-07d6-4936-b444-3544b2db1b19-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"211bb9c5-07d6-4936-b444-3544b2db1b19\") " pod="openstack/nova-cell0-conductor-0" Jan 31 09:26:56 crc kubenswrapper[4830]: I0131 09:26:56.491311 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211bb9c5-07d6-4936-b444-3544b2db1b19-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"211bb9c5-07d6-4936-b444-3544b2db1b19\") " pod="openstack/nova-cell0-conductor-0" Jan 31 09:26:56 crc kubenswrapper[4830]: I0131 09:26:56.492409 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/211bb9c5-07d6-4936-b444-3544b2db1b19-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"211bb9c5-07d6-4936-b444-3544b2db1b19\") " pod="openstack/nova-cell0-conductor-0" Jan 31 09:26:56 crc kubenswrapper[4830]: I0131 09:26:56.512251 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlp62\" (UniqueName: \"kubernetes.io/projected/211bb9c5-07d6-4936-b444-3544b2db1b19-kube-api-access-nlp62\") pod \"nova-cell0-conductor-0\" (UID: \"211bb9c5-07d6-4936-b444-3544b2db1b19\") " pod="openstack/nova-cell0-conductor-0" Jan 31 09:26:56 crc kubenswrapper[4830]: I0131 09:26:56.573588 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 31 09:26:57 crc kubenswrapper[4830]: I0131 09:26:57.122417 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 09:26:58 crc kubenswrapper[4830]: I0131 09:26:58.131158 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"211bb9c5-07d6-4936-b444-3544b2db1b19","Type":"ContainerStarted","Data":"c6cd6a87e3962717e5e0987185c6b76612e45ff28f1bcc2ea82afc8fd824deb1"} Jan 31 09:26:58 crc kubenswrapper[4830]: I0131 09:26:58.131571 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"211bb9c5-07d6-4936-b444-3544b2db1b19","Type":"ContainerStarted","Data":"a4f0fd01ca876d32e2c87a06a7eec19e5c685a00ba6c375951d8411967f97816"} Jan 31 09:26:58 crc kubenswrapper[4830]: I0131 09:26:58.131842 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 31 09:26:58 crc kubenswrapper[4830]: I0131 09:26:58.166557 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.166534513 podStartE2EDuration="2.166534513s" podCreationTimestamp="2026-01-31 09:26:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:26:58.152152394 +0000 UTC m=+1562.645514836" watchObservedRunningTime="2026-01-31 09:26:58.166534513 +0000 UTC m=+1562.659896945" Jan 31 09:26:59 crc kubenswrapper[4830]: I0131 09:26:59.252802 4830 scope.go:117] "RemoveContainer" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0" Jan 31 09:26:59 crc kubenswrapper[4830]: E0131 09:26:59.253236 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:27:06 crc kubenswrapper[4830]: I0131 09:27:06.618248 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 31 09:27:07 crc kubenswrapper[4830]: I0131 09:27:07.261039 4830 generic.go:334] "Generic (PLEG): container finished" podID="507f4c57-9369-4487-a575-370014e22eeb" containerID="048b3cace074d7927293bcbc8ec9fae217cdc957e95d9197018a0d1958db2c4b" exitCode=137 Jan 31 09:27:07 crc kubenswrapper[4830]: I0131 09:27:07.261102 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5db4bc48b8-mphcw" event={"ID":"507f4c57-9369-4487-a575-370014e22eeb","Type":"ContainerDied","Data":"048b3cace074d7927293bcbc8ec9fae217cdc957e95d9197018a0d1958db2c4b"} Jan 31 09:27:07 crc kubenswrapper[4830]: I0131 09:27:07.325825 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="20e72fdb-b11a-4573-844c-475d2967f8ac" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 09:27:07 crc kubenswrapper[4830]: I0131 09:27:07.863293 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-svqlf"] Jan 31 09:27:07 crc kubenswrapper[4830]: I0131 09:27:07.866291 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-svqlf" Jan 31 09:27:07 crc kubenswrapper[4830]: I0131 09:27:07.880231 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 31 09:27:07 crc kubenswrapper[4830]: I0131 09:27:07.880559 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 31 09:27:07 crc kubenswrapper[4830]: I0131 09:27:07.942700 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-svqlf"] Jan 31 09:27:07 crc kubenswrapper[4830]: I0131 09:27:07.961156 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/714acb03-29b5-4da1-8f14-9587cabcd207-scripts\") pod \"nova-cell0-cell-mapping-svqlf\" (UID: \"714acb03-29b5-4da1-8f14-9587cabcd207\") " pod="openstack/nova-cell0-cell-mapping-svqlf" Jan 31 09:27:07 crc kubenswrapper[4830]: I0131 09:27:07.961242 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88mr5\" (UniqueName: \"kubernetes.io/projected/714acb03-29b5-4da1-8f14-9587cabcd207-kube-api-access-88mr5\") pod \"nova-cell0-cell-mapping-svqlf\" (UID: \"714acb03-29b5-4da1-8f14-9587cabcd207\") " pod="openstack/nova-cell0-cell-mapping-svqlf" Jan 31 09:27:07 crc kubenswrapper[4830]: I0131 09:27:07.963496 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/714acb03-29b5-4da1-8f14-9587cabcd207-config-data\") pod \"nova-cell0-cell-mapping-svqlf\" (UID: \"714acb03-29b5-4da1-8f14-9587cabcd207\") " pod="openstack/nova-cell0-cell-mapping-svqlf" Jan 31 09:27:07 crc kubenswrapper[4830]: I0131 09:27:07.964227 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/714acb03-29b5-4da1-8f14-9587cabcd207-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-svqlf\" (UID: \"714acb03-29b5-4da1-8f14-9587cabcd207\") " pod="openstack/nova-cell0-cell-mapping-svqlf" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.026691 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5db4bc48b8-mphcw" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.070531 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/714acb03-29b5-4da1-8f14-9587cabcd207-config-data\") pod \"nova-cell0-cell-mapping-svqlf\" (UID: \"714acb03-29b5-4da1-8f14-9587cabcd207\") " pod="openstack/nova-cell0-cell-mapping-svqlf" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.071128 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/714acb03-29b5-4da1-8f14-9587cabcd207-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-svqlf\" (UID: \"714acb03-29b5-4da1-8f14-9587cabcd207\") " pod="openstack/nova-cell0-cell-mapping-svqlf" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.071292 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-59478c766f-tgwgd" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.089835 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/714acb03-29b5-4da1-8f14-9587cabcd207-scripts\") pod \"nova-cell0-cell-mapping-svqlf\" (UID: \"714acb03-29b5-4da1-8f14-9587cabcd207\") " pod="openstack/nova-cell0-cell-mapping-svqlf" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.090016 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88mr5\" (UniqueName: \"kubernetes.io/projected/714acb03-29b5-4da1-8f14-9587cabcd207-kube-api-access-88mr5\") pod \"nova-cell0-cell-mapping-svqlf\" (UID: \"714acb03-29b5-4da1-8f14-9587cabcd207\") " pod="openstack/nova-cell0-cell-mapping-svqlf" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.098229 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/714acb03-29b5-4da1-8f14-9587cabcd207-scripts\") pod \"nova-cell0-cell-mapping-svqlf\" (UID: \"714acb03-29b5-4da1-8f14-9587cabcd207\") " pod="openstack/nova-cell0-cell-mapping-svqlf" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.098772 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/714acb03-29b5-4da1-8f14-9587cabcd207-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-svqlf\" (UID: \"714acb03-29b5-4da1-8f14-9587cabcd207\") " pod="openstack/nova-cell0-cell-mapping-svqlf" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.105514 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/714acb03-29b5-4da1-8f14-9587cabcd207-config-data\") pod \"nova-cell0-cell-mapping-svqlf\" (UID: \"714acb03-29b5-4da1-8f14-9587cabcd207\") " pod="openstack/nova-cell0-cell-mapping-svqlf" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.132260 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88mr5\" (UniqueName: 
\"kubernetes.io/projected/714acb03-29b5-4da1-8f14-9587cabcd207-kube-api-access-88mr5\") pod \"nova-cell0-cell-mapping-svqlf\" (UID: \"714acb03-29b5-4da1-8f14-9587cabcd207\") " pod="openstack/nova-cell0-cell-mapping-svqlf" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.191882 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/507f4c57-9369-4487-a575-370014e22eeb-combined-ca-bundle\") pod \"507f4c57-9369-4487-a575-370014e22eeb\" (UID: \"507f4c57-9369-4487-a575-370014e22eeb\") " Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.191970 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6gqn\" (UniqueName: \"kubernetes.io/projected/507f4c57-9369-4487-a575-370014e22eeb-kube-api-access-r6gqn\") pod \"507f4c57-9369-4487-a575-370014e22eeb\" (UID: \"507f4c57-9369-4487-a575-370014e22eeb\") " Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.192009 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7f604a2-4cc7-4619-846c-51cb5cddffda-combined-ca-bundle\") pod \"e7f604a2-4cc7-4619-846c-51cb5cddffda\" (UID: \"e7f604a2-4cc7-4619-846c-51cb5cddffda\") " Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.192058 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/507f4c57-9369-4487-a575-370014e22eeb-config-data\") pod \"507f4c57-9369-4487-a575-370014e22eeb\" (UID: \"507f4c57-9369-4487-a575-370014e22eeb\") " Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.192141 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/507f4c57-9369-4487-a575-370014e22eeb-config-data-custom\") pod \"507f4c57-9369-4487-a575-370014e22eeb\" (UID: \"507f4c57-9369-4487-a575-370014e22eeb\") " Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.192183 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4zb\" (UniqueName: \"kubernetes.io/projected/e7f604a2-4cc7-4619-846c-51cb5cddffda-kube-api-access-2d4zb\") pod \"e7f604a2-4cc7-4619-846c-51cb5cddffda\" (UID: \"e7f604a2-4cc7-4619-846c-51cb5cddffda\") " Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.192248 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7f604a2-4cc7-4619-846c-51cb5cddffda-config-data\") pod \"e7f604a2-4cc7-4619-846c-51cb5cddffda\" (UID: \"e7f604a2-4cc7-4619-846c-51cb5cddffda\") " Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.192316 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7f604a2-4cc7-4619-846c-51cb5cddffda-config-data-custom\") pod \"e7f604a2-4cc7-4619-846c-51cb5cddffda\" (UID: \"e7f604a2-4cc7-4619-846c-51cb5cddffda\") " Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.203025 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/507f4c57-9369-4487-a575-370014e22eeb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "507f4c57-9369-4487-a575-370014e22eeb" (UID: "507f4c57-9369-4487-a575-370014e22eeb"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.211204 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7f604a2-4cc7-4619-846c-51cb5cddffda-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e7f604a2-4cc7-4619-846c-51cb5cddffda" (UID: "e7f604a2-4cc7-4619-846c-51cb5cddffda"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.237280 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/507f4c57-9369-4487-a575-370014e22eeb-kube-api-access-r6gqn" (OuterVolumeSpecName: "kube-api-access-r6gqn") pod "507f4c57-9369-4487-a575-370014e22eeb" (UID: "507f4c57-9369-4487-a575-370014e22eeb"). InnerVolumeSpecName "kube-api-access-r6gqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.247014 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7f604a2-4cc7-4619-846c-51cb5cddffda-kube-api-access-2d4zb" (OuterVolumeSpecName: "kube-api-access-2d4zb") pod "e7f604a2-4cc7-4619-846c-51cb5cddffda" (UID: "e7f604a2-4cc7-4619-846c-51cb5cddffda"). InnerVolumeSpecName "kube-api-access-2d4zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.326489 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6gqn\" (UniqueName: \"kubernetes.io/projected/507f4c57-9369-4487-a575-370014e22eeb-kube-api-access-r6gqn\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.326534 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/507f4c57-9369-4487-a575-370014e22eeb-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.326546 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4zb\" (UniqueName: \"kubernetes.io/projected/e7f604a2-4cc7-4619-846c-51cb5cddffda-kube-api-access-2d4zb\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.326557 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7f604a2-4cc7-4619-846c-51cb5cddffda-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.327395 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7f604a2-4cc7-4619-846c-51cb5cddffda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7f604a2-4cc7-4619-846c-51cb5cddffda" (UID: "e7f604a2-4cc7-4619-846c-51cb5cddffda"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.363389 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-svqlf" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.387662 4830 generic.go:334] "Generic (PLEG): container finished" podID="e7f604a2-4cc7-4619-846c-51cb5cddffda" containerID="e7fd95d0f71fabbfa5af4f6ed53adf185141ac5ded059ca14ce7a31dcb26ccd9" exitCode=137 Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.388332 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-59478c766f-tgwgd" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.446915 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7f604a2-4cc7-4619-846c-51cb5cddffda-config-data" (OuterVolumeSpecName: "config-data") pod "e7f604a2-4cc7-4619-846c-51cb5cddffda" (UID: "e7f604a2-4cc7-4619-846c-51cb5cddffda"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.447953 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7f604a2-4cc7-4619-846c-51cb5cddffda-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.452402 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5db4bc48b8-mphcw" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.537422 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/507f4c57-9369-4487-a575-370014e22eeb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "507f4c57-9369-4487-a575-370014e22eeb" (UID: "507f4c57-9369-4487-a575-370014e22eeb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.553684 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7f604a2-4cc7-4619-846c-51cb5cddffda-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.554079 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/507f4c57-9369-4487-a575-370014e22eeb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.610705 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/507f4c57-9369-4487-a575-370014e22eeb-config-data" (OuterVolumeSpecName: "config-data") pod "507f4c57-9369-4487-a575-370014e22eeb" (UID: "507f4c57-9369-4487-a575-370014e22eeb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.614238 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-59478c766f-tgwgd" event={"ID":"e7f604a2-4cc7-4619-846c-51cb5cddffda","Type":"ContainerDied","Data":"e7fd95d0f71fabbfa5af4f6ed53adf185141ac5ded059ca14ce7a31dcb26ccd9"} Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.614387 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.616182 4830 scope.go:117] "RemoveContainer" containerID="e7fd95d0f71fabbfa5af4f6ed53adf185141ac5ded059ca14ce7a31dcb26ccd9" Jan 31 09:27:08 crc kubenswrapper[4830]: E0131 09:27:08.621391 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="507f4c57-9369-4487-a575-370014e22eeb" containerName="heat-cfnapi" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.621425 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="507f4c57-9369-4487-a575-370014e22eeb" containerName="heat-cfnapi" Jan 31 09:27:08 crc kubenswrapper[4830]: E0131 09:27:08.621587 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7f604a2-4cc7-4619-846c-51cb5cddffda" containerName="heat-api" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.621596 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7f604a2-4cc7-4619-846c-51cb5cddffda" containerName="heat-api" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.625442 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="507f4c57-9369-4487-a575-370014e22eeb" containerName="heat-cfnapi" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.625465 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7f604a2-4cc7-4619-846c-51cb5cddffda" containerName="heat-api" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.633302 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-59478c766f-tgwgd" event={"ID":"e7f604a2-4cc7-4619-846c-51cb5cddffda","Type":"ContainerDied","Data":"61551977f6b459885b384ceb5621b7aab419bb9bb31168466556d91d4f9e9cc9"} Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.633360 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5db4bc48b8-mphcw" event={"ID":"507f4c57-9369-4487-a575-370014e22eeb","Type":"ContainerDied","Data":"253d32d18e3e567e17d7fea76b5a1330ed46251c63873a169cb22ffd0b28d3b7"} Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.633406 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.635088 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.640716 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.652796 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.652866 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.652884 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.657204 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.660624 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.676373 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.676612 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.685091 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.686550 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f4s8\" (UniqueName: \"kubernetes.io/projected/1e035fc4-d1e4-4716-ab3c-432991bca55e-kube-api-access-5f4s8\") pod \"nova-api-0\" (UID: \"1e035fc4-d1e4-4716-ab3c-432991bca55e\") " pod="openstack/nova-api-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.686638 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e035fc4-d1e4-4716-ab3c-432991bca55e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1e035fc4-d1e4-4716-ab3c-432991bca55e\") " pod="openstack/nova-api-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.687056 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e035fc4-d1e4-4716-ab3c-432991bca55e-logs\") pod \"nova-api-0\" (UID: \"1e035fc4-d1e4-4716-ab3c-432991bca55e\") " pod="openstack/nova-api-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.687111 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e035fc4-d1e4-4716-ab3c-432991bca55e-config-data\") pod \"nova-api-0\" (UID: \"1e035fc4-d1e4-4716-ab3c-432991bca55e\") " pod="openstack/nova-api-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.687786 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/507f4c57-9369-4487-a575-370014e22eeb-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.732496 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.748025 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.752398 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.782175 4830 scope.go:117] "RemoveContainer" containerID="e7fd95d0f71fabbfa5af4f6ed53adf185141ac5ded059ca14ce7a31dcb26ccd9" Jan 31 09:27:08 crc kubenswrapper[4830]: E0131 09:27:08.790444 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7fd95d0f71fabbfa5af4f6ed53adf185141ac5ded059ca14ce7a31dcb26ccd9\": container with ID starting with e7fd95d0f71fabbfa5af4f6ed53adf185141ac5ded059ca14ce7a31dcb26ccd9 not found: ID does not exist" containerID="e7fd95d0f71fabbfa5af4f6ed53adf185141ac5ded059ca14ce7a31dcb26ccd9" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.790497 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7fd95d0f71fabbfa5af4f6ed53adf185141ac5ded059ca14ce7a31dcb26ccd9"} err="failed to get container status \"e7fd95d0f71fabbfa5af4f6ed53adf185141ac5ded059ca14ce7a31dcb26ccd9\": rpc error: code = NotFound desc = could not find container \"e7fd95d0f71fabbfa5af4f6ed53adf185141ac5ded059ca14ce7a31dcb26ccd9\": container with ID starting with e7fd95d0f71fabbfa5af4f6ed53adf185141ac5ded059ca14ce7a31dcb26ccd9 not found: ID does not exist" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.790549 4830 scope.go:117] "RemoveContainer" containerID="048b3cace074d7927293bcbc8ec9fae217cdc957e95d9197018a0d1958db2c4b" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.812244 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f4s8\" (UniqueName: \"kubernetes.io/projected/1e035fc4-d1e4-4716-ab3c-432991bca55e-kube-api-access-5f4s8\") pod \"nova-api-0\" (UID: \"1e035fc4-d1e4-4716-ab3c-432991bca55e\") " pod="openstack/nova-api-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.821415 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e035fc4-d1e4-4716-ab3c-432991bca55e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1e035fc4-d1e4-4716-ab3c-432991bca55e\") " pod="openstack/nova-api-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.821749 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5-config-data\") pod \"nova-scheduler-0\" (UID: \"81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5\") " pod="openstack/nova-scheduler-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.821802 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5\") " pod="openstack/nova-scheduler-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.821822 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g52ng\" (UniqueName: \"kubernetes.io/projected/81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5-kube-api-access-g52ng\") pod \"nova-scheduler-0\" (UID: \"81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5\") " pod="openstack/nova-scheduler-0" Jan 31 09:27:08 crc 
kubenswrapper[4830]: I0131 09:27:08.822060 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e035fc4-d1e4-4716-ab3c-432991bca55e-logs\") pod \"nova-api-0\" (UID: \"1e035fc4-d1e4-4716-ab3c-432991bca55e\") " pod="openstack/nova-api-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.822119 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e035fc4-d1e4-4716-ab3c-432991bca55e-config-data\") pod \"nova-api-0\" (UID: \"1e035fc4-d1e4-4716-ab3c-432991bca55e\") " pod="openstack/nova-api-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.824716 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e035fc4-d1e4-4716-ab3c-432991bca55e-logs\") pod \"nova-api-0\" (UID: \"1e035fc4-d1e4-4716-ab3c-432991bca55e\") " pod="openstack/nova-api-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.827851 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.845475 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e035fc4-d1e4-4716-ab3c-432991bca55e-config-data\") pod \"nova-api-0\" (UID: \"1e035fc4-d1e4-4716-ab3c-432991bca55e\") " pod="openstack/nova-api-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.851417 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e035fc4-d1e4-4716-ab3c-432991bca55e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1e035fc4-d1e4-4716-ab3c-432991bca55e\") " pod="openstack/nova-api-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.886147 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f4s8\" (UniqueName: \"kubernetes.io/projected/1e035fc4-d1e4-4716-ab3c-432991bca55e-kube-api-access-5f4s8\") pod \"nova-api-0\" (UID: \"1e035fc4-d1e4-4716-ab3c-432991bca55e\") " pod="openstack/nova-api-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.927443 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf7ad62e-1ba6-47a8-a397-3f078d8291d4-logs\") pod \"nova-metadata-0\" (UID: \"bf7ad62e-1ba6-47a8-a397-3f078d8291d4\") " pod="openstack/nova-metadata-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.945153 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhmz7\" (UniqueName: \"kubernetes.io/projected/bf7ad62e-1ba6-47a8-a397-3f078d8291d4-kube-api-access-fhmz7\") pod \"nova-metadata-0\" (UID: \"bf7ad62e-1ba6-47a8-a397-3f078d8291d4\") " pod="openstack/nova-metadata-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.945433 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf7ad62e-1ba6-47a8-a397-3f078d8291d4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bf7ad62e-1ba6-47a8-a397-3f078d8291d4\") " pod="openstack/nova-metadata-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.945592 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/09cd85b5-2912-444f-89ae-06d177587496-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"09cd85b5-2912-444f-89ae-06d177587496\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.945762 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46qgq\" (UniqueName: \"kubernetes.io/projected/09cd85b5-2912-444f-89ae-06d177587496-kube-api-access-46qgq\") pod \"nova-cell1-novncproxy-0\" (UID: \"09cd85b5-2912-444f-89ae-06d177587496\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.945929 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5-config-data\") pod \"nova-scheduler-0\" (UID: \"81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5\") " pod="openstack/nova-scheduler-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.946038 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf7ad62e-1ba6-47a8-a397-3f078d8291d4-config-data\") pod \"nova-metadata-0\" (UID: \"bf7ad62e-1ba6-47a8-a397-3f078d8291d4\") " pod="openstack/nova-metadata-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.946163 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5\") " pod="openstack/nova-scheduler-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.946278 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g52ng\" (UniqueName: \"kubernetes.io/projected/81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5-kube-api-access-g52ng\") pod \"nova-scheduler-0\" (UID: \"81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5\") " pod="openstack/nova-scheduler-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.946583 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09cd85b5-2912-444f-89ae-06d177587496-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"09cd85b5-2912-444f-89ae-06d177587496\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.959374 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5\") " pod="openstack/nova-scheduler-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.964495 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-vwp6p"] Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.969145 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5-config-data\") pod \"nova-scheduler-0\" (UID: \"81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5\") " pod="openstack/nova-scheduler-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.989415 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g52ng\" (UniqueName: 
\"kubernetes.io/projected/81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5-kube-api-access-g52ng\") pod \"nova-scheduler-0\" (UID: \"81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5\") " pod="openstack/nova-scheduler-0" Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.990153 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-vwp6p"] Jan 31 09:27:08 crc kubenswrapper[4830]: I0131 09:27:08.990295 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.015898 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.049883 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbc4d444f-vwp6p\" (UID: \"084e69fe-072f-4659-a28c-f0000f8c16fe\") " pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.050038 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf7ad62e-1ba6-47a8-a397-3f078d8291d4-logs\") pod \"nova-metadata-0\" (UID: \"bf7ad62e-1ba6-47a8-a397-3f078d8291d4\") " pod="openstack/nova-metadata-0" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.050119 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhmz7\" (UniqueName: \"kubernetes.io/projected/bf7ad62e-1ba6-47a8-a397-3f078d8291d4-kube-api-access-fhmz7\") pod \"nova-metadata-0\" (UID: \"bf7ad62e-1ba6-47a8-a397-3f078d8291d4\") " pod="openstack/nova-metadata-0" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.050159 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-config\") pod \"dnsmasq-dns-5fbc4d444f-vwp6p\" (UID: \"084e69fe-072f-4659-a28c-f0000f8c16fe\") " pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.050201 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-dns-svc\") pod \"dnsmasq-dns-5fbc4d444f-vwp6p\" (UID: \"084e69fe-072f-4659-a28c-f0000f8c16fe\") " pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.050252 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbc4d444f-vwp6p\" (UID: \"084e69fe-072f-4659-a28c-f0000f8c16fe\") " pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.050280 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf7ad62e-1ba6-47a8-a397-3f078d8291d4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bf7ad62e-1ba6-47a8-a397-3f078d8291d4\") " pod="openstack/nova-metadata-0" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.050336 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09cd85b5-2912-444f-89ae-06d177587496-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"09cd85b5-2912-444f-89ae-06d177587496\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.050409 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46qgq\" (UniqueName: \"kubernetes.io/projected/09cd85b5-2912-444f-89ae-06d177587496-kube-api-access-46qgq\") pod \"nova-cell1-novncproxy-0\" (UID: \"09cd85b5-2912-444f-89ae-06d177587496\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.050477 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf7ad62e-1ba6-47a8-a397-3f078d8291d4-config-data\") pod \"nova-metadata-0\" (UID: \"bf7ad62e-1ba6-47a8-a397-3f078d8291d4\") " pod="openstack/nova-metadata-0" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.050604 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbc4d444f-vwp6p\" (UID: \"084e69fe-072f-4659-a28c-f0000f8c16fe\") " pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.053470 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf7ad62e-1ba6-47a8-a397-3f078d8291d4-logs\") pod \"nova-metadata-0\" (UID: \"bf7ad62e-1ba6-47a8-a397-3f078d8291d4\") " pod="openstack/nova-metadata-0" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.050694 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tfth\" (UniqueName: \"kubernetes.io/projected/084e69fe-072f-4659-a28c-f0000f8c16fe-kube-api-access-8tfth\") pod \"dnsmasq-dns-5fbc4d444f-vwp6p\" (UID: \"084e69fe-072f-4659-a28c-f0000f8c16fe\") " pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.055622 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09cd85b5-2912-444f-89ae-06d177587496-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"09cd85b5-2912-444f-89ae-06d177587496\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.062549 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf7ad62e-1ba6-47a8-a397-3f078d8291d4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bf7ad62e-1ba6-47a8-a397-3f078d8291d4\") " pod="openstack/nova-metadata-0" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.067718 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09cd85b5-2912-444f-89ae-06d177587496-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"09cd85b5-2912-444f-89ae-06d177587496\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.072498 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09cd85b5-2912-444f-89ae-06d177587496-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"09cd85b5-2912-444f-89ae-06d177587496\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.072646 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf7ad62e-1ba6-47a8-a397-3f078d8291d4-config-data\") pod \"nova-metadata-0\" (UID: \"bf7ad62e-1ba6-47a8-a397-3f078d8291d4\") " pod="openstack/nova-metadata-0" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.099352 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhmz7\" (UniqueName: \"kubernetes.io/projected/bf7ad62e-1ba6-47a8-a397-3f078d8291d4-kube-api-access-fhmz7\") pod \"nova-metadata-0\" (UID: \"bf7ad62e-1ba6-47a8-a397-3f078d8291d4\") " pod="openstack/nova-metadata-0" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.099411 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46qgq\" (UniqueName: \"kubernetes.io/projected/09cd85b5-2912-444f-89ae-06d177587496-kube-api-access-46qgq\") pod \"nova-cell1-novncproxy-0\" (UID: \"09cd85b5-2912-444f-89ae-06d177587496\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.125852 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-59478c766f-tgwgd"] Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.148016 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-59478c766f-tgwgd"] Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.157043 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.158670 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-dns-svc\") pod \"dnsmasq-dns-5fbc4d444f-vwp6p\" (UID: \"084e69fe-072f-4659-a28c-f0000f8c16fe\") " pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.158778 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbc4d444f-vwp6p\" (UID: \"084e69fe-072f-4659-a28c-f0000f8c16fe\") " pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.158937 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbc4d444f-vwp6p\" (UID: \"084e69fe-072f-4659-a28c-f0000f8c16fe\") " pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.158989 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tfth\" (UniqueName: \"kubernetes.io/projected/084e69fe-072f-4659-a28c-f0000f8c16fe-kube-api-access-8tfth\") pod \"dnsmasq-dns-5fbc4d444f-vwp6p\" (UID: \"084e69fe-072f-4659-a28c-f0000f8c16fe\") " pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.159062 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbc4d444f-vwp6p\" (UID: 
\"084e69fe-072f-4659-a28c-f0000f8c16fe\") " pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.159157 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-config\") pod \"dnsmasq-dns-5fbc4d444f-vwp6p\" (UID: \"084e69fe-072f-4659-a28c-f0000f8c16fe\") " pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.160688 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbc4d444f-vwp6p\" (UID: \"084e69fe-072f-4659-a28c-f0000f8c16fe\") " pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.161356 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-dns-svc\") pod \"dnsmasq-dns-5fbc4d444f-vwp6p\" (UID: \"084e69fe-072f-4659-a28c-f0000f8c16fe\") " pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.162231 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbc4d444f-vwp6p\" (UID: \"084e69fe-072f-4659-a28c-f0000f8c16fe\") " pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.162254 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbc4d444f-vwp6p\" (UID: \"084e69fe-072f-4659-a28c-f0000f8c16fe\") " pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.165235 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-config\") pod \"dnsmasq-dns-5fbc4d444f-vwp6p\" (UID: \"084e69fe-072f-4659-a28c-f0000f8c16fe\") " pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.171784 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5db4bc48b8-mphcw"] Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.185160 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tfth\" (UniqueName: \"kubernetes.io/projected/084e69fe-072f-4659-a28c-f0000f8c16fe-kube-api-access-8tfth\") pod \"dnsmasq-dns-5fbc4d444f-vwp6p\" (UID: \"084e69fe-072f-4659-a28c-f0000f8c16fe\") " pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.213151 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-5db4bc48b8-mphcw"] Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.226895 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.325870 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.388269 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.414272 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-svqlf"] Jan 31 09:27:09 crc kubenswrapper[4830]: I0131 09:27:09.781617 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 09:27:09 crc kubenswrapper[4830]: W0131 09:27:09.840982 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e035fc4_d1e4_4716_ab3c_432991bca55e.slice/crio-0f07d675a6a751b01ff3ff542ae40338072f61c5596abd08371f4bd8f4924500 WatchSource:0}: Error finding container 0f07d675a6a751b01ff3ff542ae40338072f61c5596abd08371f4bd8f4924500: Status 404 returned error can't find the container with id 0f07d675a6a751b01ff3ff542ae40338072f61c5596abd08371f4bd8f4924500 Jan 31 09:27:10 crc kubenswrapper[4830]: I0131 09:27:10.297686 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="507f4c57-9369-4487-a575-370014e22eeb" path="/var/lib/kubelet/pods/507f4c57-9369-4487-a575-370014e22eeb/volumes" Jan 31 09:27:10 crc kubenswrapper[4830]: I0131 09:27:10.299935 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7f604a2-4cc7-4619-846c-51cb5cddffda" path="/var/lib/kubelet/pods/e7f604a2-4cc7-4619-846c-51cb5cddffda/volumes" Jan 31 09:27:10 crc kubenswrapper[4830]: I0131 09:27:10.344939 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 09:27:10 crc kubenswrapper[4830]: I0131 09:27:10.494717 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 09:27:10 crc kubenswrapper[4830]: I0131 09:27:10.698691 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1e035fc4-d1e4-4716-ab3c-432991bca55e","Type":"ContainerStarted","Data":"0f07d675a6a751b01ff3ff542ae40338072f61c5596abd08371f4bd8f4924500"} Jan 31 09:27:10 crc kubenswrapper[4830]: I0131 09:27:10.720622 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5","Type":"ContainerStarted","Data":"8b2f88cce18a06397ce89bc4b1cdfd75501e183d7f30a5a02cba0a31520f30a7"} Jan 31 09:27:10 crc kubenswrapper[4830]: I0131 09:27:10.767698 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf7ad62e-1ba6-47a8-a397-3f078d8291d4","Type":"ContainerStarted","Data":"db2ccfd019a3f1614af2af67c698a404ca641f8a5a337ba6e9a012e3f3d72c77"} Jan 31 09:27:10 crc kubenswrapper[4830]: I0131 09:27:10.783129 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-svqlf" event={"ID":"714acb03-29b5-4da1-8f14-9587cabcd207","Type":"ContainerStarted","Data":"f0b45e6d646475b426502e17ae647c2de02a19d7fbebfbe7cacdbfcc6685fbf5"} Jan 31 09:27:10 crc kubenswrapper[4830]: I0131 09:27:10.783213 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-svqlf" event={"ID":"714acb03-29b5-4da1-8f14-9587cabcd207","Type":"ContainerStarted","Data":"d342facad64b2e2a01671efafb703490954b7cc14f9d493d80ec473b9d7c8ff7"} Jan 31 09:27:10 crc kubenswrapper[4830]: I0131 09:27:10.851536 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-svqlf" podStartSLOduration=3.851507648 podStartE2EDuration="3.851507648s" podCreationTimestamp="2026-01-31 09:27:07 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:27:10.815209986 +0000 UTC m=+1575.308572428" watchObservedRunningTime="2026-01-31 09:27:10.851507648 +0000 UTC m=+1575.344870110" Jan 31 09:27:10 crc kubenswrapper[4830]: W0131 09:27:10.938227 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09cd85b5_2912_444f_89ae_06d177587496.slice/crio-0006041b3c2c439b29ab50c71c99fd019c17e98b3cfef9a71628f3323c534f83 WatchSource:0}: Error finding container 0006041b3c2c439b29ab50c71c99fd019c17e98b3cfef9a71628f3323c534f83: Status 404 returned error can't find the container with id 0006041b3c2c439b29ab50c71c99fd019c17e98b3cfef9a71628f3323c534f83 Jan 31 09:27:10 crc kubenswrapper[4830]: I0131 09:27:10.973256 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 09:27:11 crc kubenswrapper[4830]: I0131 09:27:11.011093 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-vwp6p"] Jan 31 09:27:11 crc kubenswrapper[4830]: I0131 09:27:11.184654 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-shj89"] Jan 31 09:27:11 crc kubenswrapper[4830]: I0131 09:27:11.187659 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-shj89" Jan 31 09:27:11 crc kubenswrapper[4830]: I0131 09:27:11.191629 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 31 09:27:11 crc kubenswrapper[4830]: I0131 09:27:11.193335 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 31 09:27:11 crc kubenswrapper[4830]: I0131 09:27:11.208607 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-shj89"] Jan 31 09:27:11 crc kubenswrapper[4830]: I0131 09:27:11.255745 4830 scope.go:117] "RemoveContainer" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0" Jan 31 09:27:11 crc kubenswrapper[4830]: E0131 09:27:11.256527 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:27:11 crc kubenswrapper[4830]: I0131 09:27:11.289205 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1533bfd-c9f9-4c8d-9cb2-085f694b1f45-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-shj89\" (UID: \"c1533bfd-c9f9-4c8d-9cb2-085f694b1f45\") " pod="openstack/nova-cell1-conductor-db-sync-shj89" Jan 31 09:27:11 crc kubenswrapper[4830]: I0131 09:27:11.289288 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1533bfd-c9f9-4c8d-9cb2-085f694b1f45-config-data\") pod \"nova-cell1-conductor-db-sync-shj89\" (UID: \"c1533bfd-c9f9-4c8d-9cb2-085f694b1f45\") " pod="openstack/nova-cell1-conductor-db-sync-shj89" Jan 31 09:27:11 crc 
kubenswrapper[4830]: I0131 09:27:11.289344 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzzxk\" (UniqueName: \"kubernetes.io/projected/c1533bfd-c9f9-4c8d-9cb2-085f694b1f45-kube-api-access-wzzxk\") pod \"nova-cell1-conductor-db-sync-shj89\" (UID: \"c1533bfd-c9f9-4c8d-9cb2-085f694b1f45\") " pod="openstack/nova-cell1-conductor-db-sync-shj89" Jan 31 09:27:11 crc kubenswrapper[4830]: I0131 09:27:11.289492 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1533bfd-c9f9-4c8d-9cb2-085f694b1f45-scripts\") pod \"nova-cell1-conductor-db-sync-shj89\" (UID: \"c1533bfd-c9f9-4c8d-9cb2-085f694b1f45\") " pod="openstack/nova-cell1-conductor-db-sync-shj89" Jan 31 09:27:11 crc kubenswrapper[4830]: I0131 09:27:11.393329 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1533bfd-c9f9-4c8d-9cb2-085f694b1f45-scripts\") pod \"nova-cell1-conductor-db-sync-shj89\" (UID: \"c1533bfd-c9f9-4c8d-9cb2-085f694b1f45\") " pod="openstack/nova-cell1-conductor-db-sync-shj89" Jan 31 09:27:11 crc kubenswrapper[4830]: I0131 09:27:11.393535 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1533bfd-c9f9-4c8d-9cb2-085f694b1f45-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-shj89\" (UID: \"c1533bfd-c9f9-4c8d-9cb2-085f694b1f45\") " pod="openstack/nova-cell1-conductor-db-sync-shj89" Jan 31 09:27:11 crc kubenswrapper[4830]: I0131 09:27:11.393652 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1533bfd-c9f9-4c8d-9cb2-085f694b1f45-config-data\") pod \"nova-cell1-conductor-db-sync-shj89\" (UID: \"c1533bfd-c9f9-4c8d-9cb2-085f694b1f45\") " pod="openstack/nova-cell1-conductor-db-sync-shj89" Jan 31 09:27:11 crc kubenswrapper[4830]: I0131 09:27:11.393743 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzzxk\" (UniqueName: \"kubernetes.io/projected/c1533bfd-c9f9-4c8d-9cb2-085f694b1f45-kube-api-access-wzzxk\") pod \"nova-cell1-conductor-db-sync-shj89\" (UID: \"c1533bfd-c9f9-4c8d-9cb2-085f694b1f45\") " pod="openstack/nova-cell1-conductor-db-sync-shj89" Jan 31 09:27:11 crc kubenswrapper[4830]: I0131 09:27:11.399039 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1533bfd-c9f9-4c8d-9cb2-085f694b1f45-scripts\") pod \"nova-cell1-conductor-db-sync-shj89\" (UID: \"c1533bfd-c9f9-4c8d-9cb2-085f694b1f45\") " pod="openstack/nova-cell1-conductor-db-sync-shj89" Jan 31 09:27:11 crc kubenswrapper[4830]: I0131 09:27:11.400932 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1533bfd-c9f9-4c8d-9cb2-085f694b1f45-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-shj89\" (UID: \"c1533bfd-c9f9-4c8d-9cb2-085f694b1f45\") " pod="openstack/nova-cell1-conductor-db-sync-shj89" Jan 31 09:27:11 crc kubenswrapper[4830]: I0131 09:27:11.403422 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1533bfd-c9f9-4c8d-9cb2-085f694b1f45-config-data\") pod \"nova-cell1-conductor-db-sync-shj89\" (UID: \"c1533bfd-c9f9-4c8d-9cb2-085f694b1f45\") " 
pod="openstack/nova-cell1-conductor-db-sync-shj89" Jan 31 09:27:11 crc kubenswrapper[4830]: I0131 09:27:11.417552 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzzxk\" (UniqueName: \"kubernetes.io/projected/c1533bfd-c9f9-4c8d-9cb2-085f694b1f45-kube-api-access-wzzxk\") pod \"nova-cell1-conductor-db-sync-shj89\" (UID: \"c1533bfd-c9f9-4c8d-9cb2-085f694b1f45\") " pod="openstack/nova-cell1-conductor-db-sync-shj89" Jan 31 09:27:11 crc kubenswrapper[4830]: I0131 09:27:11.524397 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-shj89" Jan 31 09:27:11 crc kubenswrapper[4830]: I0131 09:27:11.826841 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"09cd85b5-2912-444f-89ae-06d177587496","Type":"ContainerStarted","Data":"0006041b3c2c439b29ab50c71c99fd019c17e98b3cfef9a71628f3323c534f83"} Jan 31 09:27:11 crc kubenswrapper[4830]: I0131 09:27:11.838709 4830 generic.go:334] "Generic (PLEG): container finished" podID="084e69fe-072f-4659-a28c-f0000f8c16fe" containerID="ef2ae4c1e7da16890c368e1398a228726fd3a2b9b647052381591b4bbe509814" exitCode=0 Jan 31 09:27:11 crc kubenswrapper[4830]: I0131 09:27:11.844925 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" event={"ID":"084e69fe-072f-4659-a28c-f0000f8c16fe","Type":"ContainerDied","Data":"ef2ae4c1e7da16890c368e1398a228726fd3a2b9b647052381591b4bbe509814"} Jan 31 09:27:11 crc kubenswrapper[4830]: I0131 09:27:11.845003 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" event={"ID":"084e69fe-072f-4659-a28c-f0000f8c16fe","Type":"ContainerStarted","Data":"8ed5339e48754bc5bb00e3c42fbcdf4a42994c8dbb486269a9450af25cb94150"} Jan 31 09:27:12 crc kubenswrapper[4830]: I0131 09:27:12.302996 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-shj89"] Jan 31 09:27:12 crc kubenswrapper[4830]: I0131 09:27:12.532812 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 09:27:12 crc kubenswrapper[4830]: I0131 09:27:12.557210 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 09:27:12 crc kubenswrapper[4830]: I0131 09:27:12.896765 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" event={"ID":"084e69fe-072f-4659-a28c-f0000f8c16fe","Type":"ContainerStarted","Data":"91ad512ef1df892b20ea0fe4ddccb641038a02dec8474a6237285250f14d6cdd"} Jan 31 09:27:12 crc kubenswrapper[4830]: I0131 09:27:12.898193 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" Jan 31 09:27:12 crc kubenswrapper[4830]: I0131 09:27:12.925713 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-shj89" event={"ID":"c1533bfd-c9f9-4c8d-9cb2-085f694b1f45","Type":"ContainerStarted","Data":"b107efaccf2b98b8561f3fc2480bbd19ae235859bc35b0e5ac1cbd07d9dadcb3"} Jan 31 09:27:12 crc kubenswrapper[4830]: I0131 09:27:12.925788 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-shj89" event={"ID":"c1533bfd-c9f9-4c8d-9cb2-085f694b1f45","Type":"ContainerStarted","Data":"fdf340414cc96a9f5ac3ecbd9a0a9e72f3e69a59e75b266492c58dd1f06eada7"} Jan 31 09:27:12 crc kubenswrapper[4830]: I0131 09:27:12.947198 4830 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" podStartSLOduration=4.9471779080000005 podStartE2EDuration="4.947177908s" podCreationTimestamp="2026-01-31 09:27:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:27:12.924838833 +0000 UTC m=+1577.418201295" watchObservedRunningTime="2026-01-31 09:27:12.947177908 +0000 UTC m=+1577.440540340" Jan 31 09:27:13 crc kubenswrapper[4830]: I0131 09:27:13.009937 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-shj89" podStartSLOduration=2.009911512 podStartE2EDuration="2.009911512s" podCreationTimestamp="2026-01-31 09:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:27:12.969320018 +0000 UTC m=+1577.462682460" watchObservedRunningTime="2026-01-31 09:27:13.009911512 +0000 UTC m=+1577.503273954" Jan 31 09:27:13 crc kubenswrapper[4830]: I0131 09:27:13.951022 4830 generic.go:334] "Generic (PLEG): container finished" podID="20e72fdb-b11a-4573-844c-475d2967f8ac" containerID="840c1bade5bb6a14f7f1be80e3774d07a66c9e1933e911852e4d990baa6d6bda" exitCode=137 Jan 31 09:27:13 crc kubenswrapper[4830]: I0131 09:27:13.951121 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e72fdb-b11a-4573-844c-475d2967f8ac","Type":"ContainerDied","Data":"840c1bade5bb6a14f7f1be80e3774d07a66c9e1933e911852e4d990baa6d6bda"} Jan 31 09:27:15 crc kubenswrapper[4830]: I0131 09:27:15.009818 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 09:27:15 crc kubenswrapper[4830]: I0131 09:27:15.010541 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="211bb9c5-07d6-4936-b444-3544b2db1b19" containerName="nova-cell0-conductor-conductor" containerID="cri-o://c6cd6a87e3962717e5e0987185c6b76612e45ff28f1bcc2ea82afc8fd824deb1" gracePeriod=30 Jan 31 09:27:15 crc kubenswrapper[4830]: I0131 09:27:15.048799 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 09:27:15 crc kubenswrapper[4830]: I0131 09:27:15.073485 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 09:27:16 crc kubenswrapper[4830]: I0131 09:27:16.015279 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e72fdb-b11a-4573-844c-475d2967f8ac","Type":"ContainerDied","Data":"5040d85044d7f4c2044af82438ad626b964ada43b41a43b347d79d95b7c8320e"} Jan 31 09:27:16 crc kubenswrapper[4830]: I0131 09:27:16.015715 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5040d85044d7f4c2044af82438ad626b964ada43b41a43b347d79d95b7c8320e" Jan 31 09:27:16 crc kubenswrapper[4830]: I0131 09:27:16.103121 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 09:27:16 crc kubenswrapper[4830]: I0131 09:27:16.285491 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20e72fdb-b11a-4573-844c-475d2967f8ac-scripts\") pod \"20e72fdb-b11a-4573-844c-475d2967f8ac\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " Jan 31 09:27:16 crc kubenswrapper[4830]: I0131 09:27:16.286009 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20e72fdb-b11a-4573-844c-475d2967f8ac-combined-ca-bundle\") pod \"20e72fdb-b11a-4573-844c-475d2967f8ac\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " Jan 31 09:27:16 crc kubenswrapper[4830]: I0131 09:27:16.286059 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e72fdb-b11a-4573-844c-475d2967f8ac-log-httpd\") pod \"20e72fdb-b11a-4573-844c-475d2967f8ac\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " Jan 31 09:27:16 crc kubenswrapper[4830]: I0131 09:27:16.286089 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20e72fdb-b11a-4573-844c-475d2967f8ac-config-data\") pod \"20e72fdb-b11a-4573-844c-475d2967f8ac\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " Jan 31 09:27:16 crc kubenswrapper[4830]: I0131 09:27:16.286337 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20e72fdb-b11a-4573-844c-475d2967f8ac-sg-core-conf-yaml\") pod \"20e72fdb-b11a-4573-844c-475d2967f8ac\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " Jan 31 09:27:16 crc kubenswrapper[4830]: I0131 09:27:16.286382 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vl5t\" (UniqueName: \"kubernetes.io/projected/20e72fdb-b11a-4573-844c-475d2967f8ac-kube-api-access-9vl5t\") pod \"20e72fdb-b11a-4573-844c-475d2967f8ac\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " Jan 31 09:27:16 crc kubenswrapper[4830]: I0131 09:27:16.286414 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e72fdb-b11a-4573-844c-475d2967f8ac-run-httpd\") pod \"20e72fdb-b11a-4573-844c-475d2967f8ac\" (UID: \"20e72fdb-b11a-4573-844c-475d2967f8ac\") " Jan 31 09:27:16 crc kubenswrapper[4830]: I0131 09:27:16.288664 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20e72fdb-b11a-4573-844c-475d2967f8ac-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "20e72fdb-b11a-4573-844c-475d2967f8ac" (UID: "20e72fdb-b11a-4573-844c-475d2967f8ac"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:27:16 crc kubenswrapper[4830]: I0131 09:27:16.290717 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20e72fdb-b11a-4573-844c-475d2967f8ac-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "20e72fdb-b11a-4573-844c-475d2967f8ac" (UID: "20e72fdb-b11a-4573-844c-475d2967f8ac"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:27:16 crc kubenswrapper[4830]: I0131 09:27:16.360069 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e72fdb-b11a-4573-844c-475d2967f8ac-scripts" (OuterVolumeSpecName: "scripts") pod "20e72fdb-b11a-4573-844c-475d2967f8ac" (UID: "20e72fdb-b11a-4573-844c-475d2967f8ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:27:16 crc kubenswrapper[4830]: I0131 09:27:16.361170 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20e72fdb-b11a-4573-844c-475d2967f8ac-kube-api-access-9vl5t" (OuterVolumeSpecName: "kube-api-access-9vl5t") pod "20e72fdb-b11a-4573-844c-475d2967f8ac" (UID: "20e72fdb-b11a-4573-844c-475d2967f8ac"). InnerVolumeSpecName "kube-api-access-9vl5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:27:16 crc kubenswrapper[4830]: I0131 09:27:16.412684 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20e72fdb-b11a-4573-844c-475d2967f8ac-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:16 crc kubenswrapper[4830]: I0131 09:27:16.412837 4830 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e72fdb-b11a-4573-844c-475d2967f8ac-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:16 crc kubenswrapper[4830]: I0131 09:27:16.412861 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vl5t\" (UniqueName: \"kubernetes.io/projected/20e72fdb-b11a-4573-844c-475d2967f8ac-kube-api-access-9vl5t\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:16 crc kubenswrapper[4830]: I0131 09:27:16.412879 4830 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e72fdb-b11a-4573-844c-475d2967f8ac-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:16 crc kubenswrapper[4830]: I0131 09:27:16.538845 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e72fdb-b11a-4573-844c-475d2967f8ac-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "20e72fdb-b11a-4573-844c-475d2967f8ac" (UID: "20e72fdb-b11a-4573-844c-475d2967f8ac"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:27:16 crc kubenswrapper[4830]: E0131 09:27:16.577035 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c6cd6a87e3962717e5e0987185c6b76612e45ff28f1bcc2ea82afc8fd824deb1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 31 09:27:16 crc kubenswrapper[4830]: E0131 09:27:16.578836 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c6cd6a87e3962717e5e0987185c6b76612e45ff28f1bcc2ea82afc8fd824deb1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 31 09:27:16 crc kubenswrapper[4830]: E0131 09:27:16.579917 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c6cd6a87e3962717e5e0987185c6b76612e45ff28f1bcc2ea82afc8fd824deb1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 31 09:27:16 crc kubenswrapper[4830]: E0131 09:27:16.579959 4830 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="211bb9c5-07d6-4936-b444-3544b2db1b19" containerName="nova-cell0-conductor-conductor" Jan 31 09:27:16 crc kubenswrapper[4830]: I0131 09:27:16.621335 4830 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20e72fdb-b11a-4573-844c-475d2967f8ac-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:16 crc kubenswrapper[4830]: I0131 09:27:16.622693 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e72fdb-b11a-4573-844c-475d2967f8ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20e72fdb-b11a-4573-844c-475d2967f8ac" (UID: "20e72fdb-b11a-4573-844c-475d2967f8ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:27:16 crc kubenswrapper[4830]: I0131 09:27:16.699912 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e72fdb-b11a-4573-844c-475d2967f8ac-config-data" (OuterVolumeSpecName: "config-data") pod "20e72fdb-b11a-4573-844c-475d2967f8ac" (UID: "20e72fdb-b11a-4573-844c-475d2967f8ac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:27:16 crc kubenswrapper[4830]: I0131 09:27:16.724887 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20e72fdb-b11a-4573-844c-475d2967f8ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:16 crc kubenswrapper[4830]: I0131 09:27:16.724948 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20e72fdb-b11a-4573-844c-475d2967f8ac-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.031115 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf7ad62e-1ba6-47a8-a397-3f078d8291d4","Type":"ContainerStarted","Data":"5323de851424e5d94a8c2eeab1d38fae1563ff3fcdcde59f95c513bc6359a6f4"} Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.031198 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf7ad62e-1ba6-47a8-a397-3f078d8291d4","Type":"ContainerStarted","Data":"8934f85d83e771f5120d5ce2dbdc20ee0153da86015ba6227acd3e1364f909d7"} Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.031236 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bf7ad62e-1ba6-47a8-a397-3f078d8291d4" containerName="nova-metadata-log" containerID="cri-o://8934f85d83e771f5120d5ce2dbdc20ee0153da86015ba6227acd3e1364f909d7" gracePeriod=30 Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.031294 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bf7ad62e-1ba6-47a8-a397-3f078d8291d4" containerName="nova-metadata-metadata" containerID="cri-o://5323de851424e5d94a8c2eeab1d38fae1563ff3fcdcde59f95c513bc6359a6f4" gracePeriod=30 Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.038557 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1e035fc4-d1e4-4716-ab3c-432991bca55e","Type":"ContainerStarted","Data":"05e42f2149384a6274173a7f5b605663add224965beeb44737ea148514066aff"} Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.038586 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1e035fc4-d1e4-4716-ab3c-432991bca55e" containerName="nova-api-log" containerID="cri-o://fb620f43abe52b50211c6da287613be27f785ed7f330ce7dc9af4b71e8678607" gracePeriod=30 Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.038620 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1e035fc4-d1e4-4716-ab3c-432991bca55e","Type":"ContainerStarted","Data":"fb620f43abe52b50211c6da287613be27f785ed7f330ce7dc9af4b71e8678607"} Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.038751 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1e035fc4-d1e4-4716-ab3c-432991bca55e" containerName="nova-api-api" containerID="cri-o://05e42f2149384a6274173a7f5b605663add224965beeb44737ea148514066aff" gracePeriod=30 Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.045405 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5","Type":"ContainerStarted","Data":"9f918a294f70d26695851495e2acbb7eb081d0c0742cc06769aac18433e60ae2"} Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.045605 4830 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5" containerName="nova-scheduler-scheduler" containerID="cri-o://9f918a294f70d26695851495e2acbb7eb081d0c0742cc06769aac18433e60ae2" gracePeriod=30 Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.055105 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.057371 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="09cd85b5-2912-444f-89ae-06d177587496" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://751cc74ec4161c69d0d316b0488fea1b14c595ce785d0f45db9c36f7dbe1b8fd" gracePeriod=30 Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.057480 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"09cd85b5-2912-444f-89ae-06d177587496","Type":"ContainerStarted","Data":"751cc74ec4161c69d0d316b0488fea1b14c595ce785d0f45db9c36f7dbe1b8fd"} Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.074494 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.825761155 podStartE2EDuration="9.074463418s" podCreationTimestamp="2026-01-31 09:27:08 +0000 UTC" firstStartedPulling="2026-01-31 09:27:10.404487889 +0000 UTC m=+1574.897850331" lastFinishedPulling="2026-01-31 09:27:15.653190152 +0000 UTC m=+1580.146552594" observedRunningTime="2026-01-31 09:27:17.05376858 +0000 UTC m=+1581.547131022" watchObservedRunningTime="2026-01-31 09:27:17.074463418 +0000 UTC m=+1581.567825870" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.115806 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xpkzh"] Jan 31 09:27:17 crc kubenswrapper[4830]: E0131 09:27:17.116566 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e72fdb-b11a-4573-844c-475d2967f8ac" containerName="proxy-httpd" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.116590 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e72fdb-b11a-4573-844c-475d2967f8ac" containerName="proxy-httpd" Jan 31 09:27:17 crc kubenswrapper[4830]: E0131 09:27:17.116625 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e72fdb-b11a-4573-844c-475d2967f8ac" containerName="ceilometer-notification-agent" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.116634 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e72fdb-b11a-4573-844c-475d2967f8ac" containerName="ceilometer-notification-agent" Jan 31 09:27:17 crc kubenswrapper[4830]: E0131 09:27:17.116661 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e72fdb-b11a-4573-844c-475d2967f8ac" containerName="ceilometer-central-agent" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.116668 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e72fdb-b11a-4573-844c-475d2967f8ac" containerName="ceilometer-central-agent" Jan 31 09:27:17 crc kubenswrapper[4830]: E0131 09:27:17.116690 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e72fdb-b11a-4573-844c-475d2967f8ac" containerName="sg-core" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.116697 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e72fdb-b11a-4573-844c-475d2967f8ac" containerName="sg-core" Jan 31 
09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.116991 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="20e72fdb-b11a-4573-844c-475d2967f8ac" containerName="ceilometer-notification-agent" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.117014 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="20e72fdb-b11a-4573-844c-475d2967f8ac" containerName="sg-core" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.117030 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="20e72fdb-b11a-4573-844c-475d2967f8ac" containerName="ceilometer-central-agent" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.117042 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="20e72fdb-b11a-4573-844c-475d2967f8ac" containerName="proxy-httpd" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.119119 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xpkzh" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.141664 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00eec1e5-054c-4c87-ad69-ce449d1aa577-catalog-content\") pod \"community-operators-xpkzh\" (UID: \"00eec1e5-054c-4c87-ad69-ce449d1aa577\") " pod="openshift-marketplace/community-operators-xpkzh" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.148788 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00eec1e5-054c-4c87-ad69-ce449d1aa577-utilities\") pod \"community-operators-xpkzh\" (UID: \"00eec1e5-054c-4c87-ad69-ce449d1aa577\") " pod="openshift-marketplace/community-operators-xpkzh" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.149031 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pmvd\" (UniqueName: \"kubernetes.io/projected/00eec1e5-054c-4c87-ad69-ce449d1aa577-kube-api-access-2pmvd\") pod \"community-operators-xpkzh\" (UID: \"00eec1e5-054c-4c87-ad69-ce449d1aa577\") " pod="openshift-marketplace/community-operators-xpkzh" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.161766 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xpkzh"] Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.163125 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.361106835 podStartE2EDuration="9.163108368s" podCreationTimestamp="2026-01-31 09:27:08 +0000 UTC" firstStartedPulling="2026-01-31 09:27:09.850452908 +0000 UTC m=+1574.343815350" lastFinishedPulling="2026-01-31 09:27:15.652454441 +0000 UTC m=+1580.145816883" observedRunningTime="2026-01-31 09:27:17.108718352 +0000 UTC m=+1581.602080794" watchObservedRunningTime="2026-01-31 09:27:17.163108368 +0000 UTC m=+1581.656470820" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.253366 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00eec1e5-054c-4c87-ad69-ce449d1aa577-catalog-content\") pod \"community-operators-xpkzh\" (UID: \"00eec1e5-054c-4c87-ad69-ce449d1aa577\") " pod="openshift-marketplace/community-operators-xpkzh" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.253621 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00eec1e5-054c-4c87-ad69-ce449d1aa577-utilities\") pod \"community-operators-xpkzh\" (UID: \"00eec1e5-054c-4c87-ad69-ce449d1aa577\") " pod="openshift-marketplace/community-operators-xpkzh" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.253750 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pmvd\" (UniqueName: \"kubernetes.io/projected/00eec1e5-054c-4c87-ad69-ce449d1aa577-kube-api-access-2pmvd\") pod \"community-operators-xpkzh\" (UID: \"00eec1e5-054c-4c87-ad69-ce449d1aa577\") " pod="openshift-marketplace/community-operators-xpkzh" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.254272 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00eec1e5-054c-4c87-ad69-ce449d1aa577-catalog-content\") pod \"community-operators-xpkzh\" (UID: \"00eec1e5-054c-4c87-ad69-ce449d1aa577\") " pod="openshift-marketplace/community-operators-xpkzh" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.255088 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00eec1e5-054c-4c87-ad69-ce449d1aa577-utilities\") pod \"community-operators-xpkzh\" (UID: \"00eec1e5-054c-4c87-ad69-ce449d1aa577\") " pod="openshift-marketplace/community-operators-xpkzh" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.300953 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.155687666 podStartE2EDuration="9.300920597s" podCreationTimestamp="2026-01-31 09:27:08 +0000 UTC" firstStartedPulling="2026-01-31 09:27:10.505609434 +0000 UTC m=+1574.998971876" lastFinishedPulling="2026-01-31 09:27:15.650842365 +0000 UTC m=+1580.144204807" observedRunningTime="2026-01-31 09:27:17.142832242 +0000 UTC m=+1581.636194704" watchObservedRunningTime="2026-01-31 09:27:17.300920597 +0000 UTC m=+1581.794283039" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.323208 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pmvd\" (UniqueName: \"kubernetes.io/projected/00eec1e5-054c-4c87-ad69-ce449d1aa577-kube-api-access-2pmvd\") pod \"community-operators-xpkzh\" (UID: \"00eec1e5-054c-4c87-ad69-ce449d1aa577\") " pod="openshift-marketplace/community-operators-xpkzh" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.360637 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.373655 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.391088 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xpkzh" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.393418 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.397060 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.404651 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.404856 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.433684 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.443295 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.7639116470000005 podStartE2EDuration="9.443265393s" podCreationTimestamp="2026-01-31 09:27:08 +0000 UTC" firstStartedPulling="2026-01-31 09:27:10.981041611 +0000 UTC m=+1575.474404053" lastFinishedPulling="2026-01-31 09:27:15.660395357 +0000 UTC m=+1580.153757799" observedRunningTime="2026-01-31 09:27:17.27221054 +0000 UTC m=+1581.765572982" watchObservedRunningTime="2026-01-31 09:27:17.443265393 +0000 UTC m=+1581.936627835" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.460967 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef00418a-82b1-46ac-b1af-d43bab22cdd7-config-data\") pod \"ceilometer-0\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " pod="openstack/ceilometer-0" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.461118 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef00418a-82b1-46ac-b1af-d43bab22cdd7-run-httpd\") pod \"ceilometer-0\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " pod="openstack/ceilometer-0" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.461318 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv6s6\" (UniqueName: \"kubernetes.io/projected/ef00418a-82b1-46ac-b1af-d43bab22cdd7-kube-api-access-rv6s6\") pod \"ceilometer-0\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " pod="openstack/ceilometer-0" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.461371 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef00418a-82b1-46ac-b1af-d43bab22cdd7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " pod="openstack/ceilometer-0" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.461454 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef00418a-82b1-46ac-b1af-d43bab22cdd7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " pod="openstack/ceilometer-0" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.461496 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef00418a-82b1-46ac-b1af-d43bab22cdd7-log-httpd\") pod \"ceilometer-0\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " pod="openstack/ceilometer-0" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.461557 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef00418a-82b1-46ac-b1af-d43bab22cdd7-scripts\") pod \"ceilometer-0\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " pod="openstack/ceilometer-0" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.571006 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv6s6\" (UniqueName: \"kubernetes.io/projected/ef00418a-82b1-46ac-b1af-d43bab22cdd7-kube-api-access-rv6s6\") pod \"ceilometer-0\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " pod="openstack/ceilometer-0" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.571701 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef00418a-82b1-46ac-b1af-d43bab22cdd7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " pod="openstack/ceilometer-0" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.571814 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef00418a-82b1-46ac-b1af-d43bab22cdd7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " pod="openstack/ceilometer-0" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.571865 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef00418a-82b1-46ac-b1af-d43bab22cdd7-log-httpd\") pod \"ceilometer-0\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " pod="openstack/ceilometer-0" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.571985 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef00418a-82b1-46ac-b1af-d43bab22cdd7-scripts\") pod \"ceilometer-0\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " pod="openstack/ceilometer-0" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.572430 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef00418a-82b1-46ac-b1af-d43bab22cdd7-config-data\") pod \"ceilometer-0\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " pod="openstack/ceilometer-0" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.572620 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef00418a-82b1-46ac-b1af-d43bab22cdd7-run-httpd\") pod \"ceilometer-0\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " pod="openstack/ceilometer-0" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.579641 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef00418a-82b1-46ac-b1af-d43bab22cdd7-run-httpd\") pod \"ceilometer-0\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " pod="openstack/ceilometer-0" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.580269 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef00418a-82b1-46ac-b1af-d43bab22cdd7-log-httpd\") pod \"ceilometer-0\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " pod="openstack/ceilometer-0" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.583269 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ef00418a-82b1-46ac-b1af-d43bab22cdd7-scripts\") pod \"ceilometer-0\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " pod="openstack/ceilometer-0" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.583718 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef00418a-82b1-46ac-b1af-d43bab22cdd7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " pod="openstack/ceilometer-0" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.586695 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef00418a-82b1-46ac-b1af-d43bab22cdd7-config-data\") pod \"ceilometer-0\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " pod="openstack/ceilometer-0" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.593000 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef00418a-82b1-46ac-b1af-d43bab22cdd7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " pod="openstack/ceilometer-0" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.597484 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv6s6\" (UniqueName: \"kubernetes.io/projected/ef00418a-82b1-46ac-b1af-d43bab22cdd7-kube-api-access-rv6s6\") pod \"ceilometer-0\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " pod="openstack/ceilometer-0" Jan 31 09:27:17 crc kubenswrapper[4830]: I0131 09:27:17.725570 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 09:27:18 crc kubenswrapper[4830]: I0131 09:27:18.017937 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xpkzh"] Jan 31 09:27:18 crc kubenswrapper[4830]: I0131 09:27:18.106251 4830 generic.go:334] "Generic (PLEG): container finished" podID="1e035fc4-d1e4-4716-ab3c-432991bca55e" containerID="fb620f43abe52b50211c6da287613be27f785ed7f330ce7dc9af4b71e8678607" exitCode=143 Jan 31 09:27:18 crc kubenswrapper[4830]: I0131 09:27:18.106342 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1e035fc4-d1e4-4716-ab3c-432991bca55e","Type":"ContainerDied","Data":"fb620f43abe52b50211c6da287613be27f785ed7f330ce7dc9af4b71e8678607"} Jan 31 09:27:18 crc kubenswrapper[4830]: I0131 09:27:18.109061 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpkzh" event={"ID":"00eec1e5-054c-4c87-ad69-ce449d1aa577","Type":"ContainerStarted","Data":"78bd69b177c18208d51d058e7961fe03d1d36bd3def65cb7e8d9e6449bc2760e"} Jan 31 09:27:18 crc kubenswrapper[4830]: I0131 09:27:18.127622 4830 generic.go:334] "Generic (PLEG): container finished" podID="bf7ad62e-1ba6-47a8-a397-3f078d8291d4" containerID="8934f85d83e771f5120d5ce2dbdc20ee0153da86015ba6227acd3e1364f909d7" exitCode=143 Jan 31 09:27:18 crc kubenswrapper[4830]: I0131 09:27:18.127688 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf7ad62e-1ba6-47a8-a397-3f078d8291d4","Type":"ContainerDied","Data":"8934f85d83e771f5120d5ce2dbdc20ee0153da86015ba6227acd3e1364f909d7"} Jan 31 09:27:18 crc kubenswrapper[4830]: I0131 09:27:18.276966 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20e72fdb-b11a-4573-844c-475d2967f8ac" 
path="/var/lib/kubelet/pods/20e72fdb-b11a-4573-844c-475d2967f8ac/volumes" Jan 31 09:27:18 crc kubenswrapper[4830]: I0131 09:27:18.368739 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:27:18 crc kubenswrapper[4830]: W0131 09:27:18.381952 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef00418a_82b1_46ac_b1af_d43bab22cdd7.slice/crio-3dd1d03bf2c33bcda87c69ea7edb909a8f8932310d9dfe4bd8346bb3f65d11f8 WatchSource:0}: Error finding container 3dd1d03bf2c33bcda87c69ea7edb909a8f8932310d9dfe4bd8346bb3f65d11f8: Status 404 returned error can't find the container with id 3dd1d03bf2c33bcda87c69ea7edb909a8f8932310d9dfe4bd8346bb3f65d11f8 Jan 31 09:27:19 crc kubenswrapper[4830]: I0131 09:27:19.144324 4830 generic.go:334] "Generic (PLEG): container finished" podID="00eec1e5-054c-4c87-ad69-ce449d1aa577" containerID="80462b6937d6037ddcd0f0515f17adf59c3c73e242fd57b73e5fe1d0ad4147a9" exitCode=0 Jan 31 09:27:19 crc kubenswrapper[4830]: I0131 09:27:19.144403 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpkzh" event={"ID":"00eec1e5-054c-4c87-ad69-ce449d1aa577","Type":"ContainerDied","Data":"80462b6937d6037ddcd0f0515f17adf59c3c73e242fd57b73e5fe1d0ad4147a9"} Jan 31 09:27:19 crc kubenswrapper[4830]: I0131 09:27:19.148876 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef00418a-82b1-46ac-b1af-d43bab22cdd7","Type":"ContainerStarted","Data":"3dd1d03bf2c33bcda87c69ea7edb909a8f8932310d9dfe4bd8346bb3f65d11f8"} Jan 31 09:27:19 crc kubenswrapper[4830]: I0131 09:27:19.161416 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 31 09:27:19 crc kubenswrapper[4830]: I0131 09:27:19.228501 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 09:27:19 crc kubenswrapper[4830]: I0131 09:27:19.228561 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 09:27:19 crc kubenswrapper[4830]: I0131 09:27:19.331211 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" Jan 31 09:27:19 crc kubenswrapper[4830]: I0131 09:27:19.395804 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 31 09:27:19 crc kubenswrapper[4830]: I0131 09:27:19.457706 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-pjvlb"] Jan 31 09:27:19 crc kubenswrapper[4830]: I0131 09:27:19.458397 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" podUID="653ad6ae-7808-49a1-8f07-484c37dfeb66" containerName="dnsmasq-dns" containerID="cri-o://e175ac6da10a42a62ab3d7bc4e420a07f3178def8398c209a369acf2010f25b4" gracePeriod=10 Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.163956 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef00418a-82b1-46ac-b1af-d43bab22cdd7","Type":"ContainerStarted","Data":"93b933d398bf69ad4b58b7fdb977b74ffc92753a30cb95e9355a7916153aa722"} Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.170694 4830 generic.go:334] "Generic (PLEG): container finished" podID="653ad6ae-7808-49a1-8f07-484c37dfeb66" containerID="e175ac6da10a42a62ab3d7bc4e420a07f3178def8398c209a369acf2010f25b4" 
exitCode=0 Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.170842 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" event={"ID":"653ad6ae-7808-49a1-8f07-484c37dfeb66","Type":"ContainerDied","Data":"e175ac6da10a42a62ab3d7bc4e420a07f3178def8398c209a369acf2010f25b4"} Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.182456 4830 generic.go:334] "Generic (PLEG): container finished" podID="211bb9c5-07d6-4936-b444-3544b2db1b19" containerID="c6cd6a87e3962717e5e0987185c6b76612e45ff28f1bcc2ea82afc8fd824deb1" exitCode=0 Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.182510 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"211bb9c5-07d6-4936-b444-3544b2db1b19","Type":"ContainerDied","Data":"c6cd6a87e3962717e5e0987185c6b76612e45ff28f1bcc2ea82afc8fd824deb1"} Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.573395 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.657243 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.691581 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211bb9c5-07d6-4936-b444-3544b2db1b19-combined-ca-bundle\") pod \"211bb9c5-07d6-4936-b444-3544b2db1b19\" (UID: \"211bb9c5-07d6-4936-b444-3544b2db1b19\") " Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.691690 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlp62\" (UniqueName: \"kubernetes.io/projected/211bb9c5-07d6-4936-b444-3544b2db1b19-kube-api-access-nlp62\") pod \"211bb9c5-07d6-4936-b444-3544b2db1b19\" (UID: \"211bb9c5-07d6-4936-b444-3544b2db1b19\") " Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.691737 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-dns-swift-storage-0\") pod \"653ad6ae-7808-49a1-8f07-484c37dfeb66\" (UID: \"653ad6ae-7808-49a1-8f07-484c37dfeb66\") " Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.691840 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-ovsdbserver-sb\") pod \"653ad6ae-7808-49a1-8f07-484c37dfeb66\" (UID: \"653ad6ae-7808-49a1-8f07-484c37dfeb66\") " Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.692015 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/211bb9c5-07d6-4936-b444-3544b2db1b19-config-data\") pod \"211bb9c5-07d6-4936-b444-3544b2db1b19\" (UID: \"211bb9c5-07d6-4936-b444-3544b2db1b19\") " Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.692103 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-ovsdbserver-nb\") pod \"653ad6ae-7808-49a1-8f07-484c37dfeb66\" (UID: \"653ad6ae-7808-49a1-8f07-484c37dfeb66\") " Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.692149 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-lqhfv\" (UniqueName: \"kubernetes.io/projected/653ad6ae-7808-49a1-8f07-484c37dfeb66-kube-api-access-lqhfv\") pod \"653ad6ae-7808-49a1-8f07-484c37dfeb66\" (UID: \"653ad6ae-7808-49a1-8f07-484c37dfeb66\") " Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.692428 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-config\") pod \"653ad6ae-7808-49a1-8f07-484c37dfeb66\" (UID: \"653ad6ae-7808-49a1-8f07-484c37dfeb66\") " Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.692482 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-dns-svc\") pod \"653ad6ae-7808-49a1-8f07-484c37dfeb66\" (UID: \"653ad6ae-7808-49a1-8f07-484c37dfeb66\") " Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.733818 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/211bb9c5-07d6-4936-b444-3544b2db1b19-kube-api-access-nlp62" (OuterVolumeSpecName: "kube-api-access-nlp62") pod "211bb9c5-07d6-4936-b444-3544b2db1b19" (UID: "211bb9c5-07d6-4936-b444-3544b2db1b19"). InnerVolumeSpecName "kube-api-access-nlp62". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.767043 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z7qcf"] Jan 31 09:27:20 crc kubenswrapper[4830]: E0131 09:27:20.768049 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="211bb9c5-07d6-4936-b444-3544b2db1b19" containerName="nova-cell0-conductor-conductor" Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.768065 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="211bb9c5-07d6-4936-b444-3544b2db1b19" containerName="nova-cell0-conductor-conductor" Jan 31 09:27:20 crc kubenswrapper[4830]: E0131 09:27:20.768076 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653ad6ae-7808-49a1-8f07-484c37dfeb66" containerName="init" Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.768085 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="653ad6ae-7808-49a1-8f07-484c37dfeb66" containerName="init" Jan 31 09:27:20 crc kubenswrapper[4830]: E0131 09:27:20.768117 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653ad6ae-7808-49a1-8f07-484c37dfeb66" containerName="dnsmasq-dns" Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.768124 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="653ad6ae-7808-49a1-8f07-484c37dfeb66" containerName="dnsmasq-dns" Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.768451 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="653ad6ae-7808-49a1-8f07-484c37dfeb66" containerName="dnsmasq-dns" Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.768469 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="211bb9c5-07d6-4936-b444-3544b2db1b19" containerName="nova-cell0-conductor-conductor" Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.769068 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/653ad6ae-7808-49a1-8f07-484c37dfeb66-kube-api-access-lqhfv" (OuterVolumeSpecName: "kube-api-access-lqhfv") pod "653ad6ae-7808-49a1-8f07-484c37dfeb66" (UID: "653ad6ae-7808-49a1-8f07-484c37dfeb66"). InnerVolumeSpecName "kube-api-access-lqhfv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.779407 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z7qcf" Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.796493 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01a16d5c-bea7-4cab-8c88-206e4c5c901d-catalog-content\") pod \"certified-operators-z7qcf\" (UID: \"01a16d5c-bea7-4cab-8c88-206e4c5c901d\") " pod="openshift-marketplace/certified-operators-z7qcf" Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.796596 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01a16d5c-bea7-4cab-8c88-206e4c5c901d-utilities\") pod \"certified-operators-z7qcf\" (UID: \"01a16d5c-bea7-4cab-8c88-206e4c5c901d\") " pod="openshift-marketplace/certified-operators-z7qcf" Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.796813 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9k8j\" (UniqueName: \"kubernetes.io/projected/01a16d5c-bea7-4cab-8c88-206e4c5c901d-kube-api-access-h9k8j\") pod \"certified-operators-z7qcf\" (UID: \"01a16d5c-bea7-4cab-8c88-206e4c5c901d\") " pod="openshift-marketplace/certified-operators-z7qcf" Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.796956 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlp62\" (UniqueName: \"kubernetes.io/projected/211bb9c5-07d6-4936-b444-3544b2db1b19-kube-api-access-nlp62\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.796976 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqhfv\" (UniqueName: \"kubernetes.io/projected/653ad6ae-7808-49a1-8f07-484c37dfeb66-kube-api-access-lqhfv\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.838059 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z7qcf"] Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.852065 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "653ad6ae-7808-49a1-8f07-484c37dfeb66" (UID: "653ad6ae-7808-49a1-8f07-484c37dfeb66"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.907552 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01a16d5c-bea7-4cab-8c88-206e4c5c901d-utilities\") pod \"certified-operators-z7qcf\" (UID: \"01a16d5c-bea7-4cab-8c88-206e4c5c901d\") " pod="openshift-marketplace/certified-operators-z7qcf" Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.907889 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9k8j\" (UniqueName: \"kubernetes.io/projected/01a16d5c-bea7-4cab-8c88-206e4c5c901d-kube-api-access-h9k8j\") pod \"certified-operators-z7qcf\" (UID: \"01a16d5c-bea7-4cab-8c88-206e4c5c901d\") " pod="openshift-marketplace/certified-operators-z7qcf" Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.912064 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01a16d5c-bea7-4cab-8c88-206e4c5c901d-utilities\") pod \"certified-operators-z7qcf\" (UID: \"01a16d5c-bea7-4cab-8c88-206e4c5c901d\") " pod="openshift-marketplace/certified-operators-z7qcf" Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.917561 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01a16d5c-bea7-4cab-8c88-206e4c5c901d-catalog-content\") pod \"certified-operators-z7qcf\" (UID: \"01a16d5c-bea7-4cab-8c88-206e4c5c901d\") " pod="openshift-marketplace/certified-operators-z7qcf" Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.918254 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01a16d5c-bea7-4cab-8c88-206e4c5c901d-catalog-content\") pod \"certified-operators-z7qcf\" (UID: \"01a16d5c-bea7-4cab-8c88-206e4c5c901d\") " pod="openshift-marketplace/certified-operators-z7qcf" Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.918313 4830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:20 crc kubenswrapper[4830]: I0131 09:27:20.979930 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9k8j\" (UniqueName: \"kubernetes.io/projected/01a16d5c-bea7-4cab-8c88-206e4c5c901d-kube-api-access-h9k8j\") pod \"certified-operators-z7qcf\" (UID: \"01a16d5c-bea7-4cab-8c88-206e4c5c901d\") " pod="openshift-marketplace/certified-operators-z7qcf" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.003280 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/211bb9c5-07d6-4936-b444-3544b2db1b19-config-data" (OuterVolumeSpecName: "config-data") pod "211bb9c5-07d6-4936-b444-3544b2db1b19" (UID: "211bb9c5-07d6-4936-b444-3544b2db1b19"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.023583 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/211bb9c5-07d6-4936-b444-3544b2db1b19-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.045955 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "653ad6ae-7808-49a1-8f07-484c37dfeb66" (UID: "653ad6ae-7808-49a1-8f07-484c37dfeb66"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.087637 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-gqmmk"] Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.094741 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-gqmmk" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.098515 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-config" (OuterVolumeSpecName: "config") pod "653ad6ae-7808-49a1-8f07-484c37dfeb66" (UID: "653ad6ae-7808-49a1-8f07-484c37dfeb66"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.108460 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "653ad6ae-7808-49a1-8f07-484c37dfeb66" (UID: "653ad6ae-7808-49a1-8f07-484c37dfeb66"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.124983 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/211bb9c5-07d6-4936-b444-3544b2db1b19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "211bb9c5-07d6-4936-b444-3544b2db1b19" (UID: "211bb9c5-07d6-4936-b444-3544b2db1b19"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.145680 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-8ce7-account-create-update-hb68c"] Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.146415 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jzsj\" (UniqueName: \"kubernetes.io/projected/1efaf577-ce46-4e44-a842-1d283d170872-kube-api-access-4jzsj\") pod \"aodh-db-create-gqmmk\" (UID: \"1efaf577-ce46-4e44-a842-1d283d170872\") " pod="openstack/aodh-db-create-gqmmk" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.146655 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1efaf577-ce46-4e44-a842-1d283d170872-operator-scripts\") pod \"aodh-db-create-gqmmk\" (UID: \"1efaf577-ce46-4e44-a842-1d283d170872\") " pod="openstack/aodh-db-create-gqmmk" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.147016 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.147051 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211bb9c5-07d6-4936-b444-3544b2db1b19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.147077 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.147092 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.148044 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z7qcf" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.198923 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-8ce7-account-create-update-hb68c" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.205276 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "653ad6ae-7808-49a1-8f07-484c37dfeb66" (UID: "653ad6ae-7808-49a1-8f07-484c37dfeb66"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.206021 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.210381 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-gqmmk"] Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.251520 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-8ce7-account-create-update-hb68c"] Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.263023 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jzsj\" (UniqueName: \"kubernetes.io/projected/1efaf577-ce46-4e44-a842-1d283d170872-kube-api-access-4jzsj\") pod \"aodh-db-create-gqmmk\" (UID: \"1efaf577-ce46-4e44-a842-1d283d170872\") " pod="openstack/aodh-db-create-gqmmk" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.263161 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1efaf577-ce46-4e44-a842-1d283d170872-operator-scripts\") pod \"aodh-db-create-gqmmk\" (UID: \"1efaf577-ce46-4e44-a842-1d283d170872\") " pod="openstack/aodh-db-create-gqmmk" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.263358 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk429\" (UniqueName: \"kubernetes.io/projected/2d6a5be9-79bf-46d1-a45e-999d7bc615c0-kube-api-access-lk429\") pod \"aodh-8ce7-account-create-update-hb68c\" (UID: \"2d6a5be9-79bf-46d1-a45e-999d7bc615c0\") " pod="openstack/aodh-8ce7-account-create-update-hb68c" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.263420 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d6a5be9-79bf-46d1-a45e-999d7bc615c0-operator-scripts\") pod \"aodh-8ce7-account-create-update-hb68c\" (UID: \"2d6a5be9-79bf-46d1-a45e-999d7bc615c0\") " pod="openstack/aodh-8ce7-account-create-update-hb68c" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.263525 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/653ad6ae-7808-49a1-8f07-484c37dfeb66-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.264633 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1efaf577-ce46-4e44-a842-1d283d170872-operator-scripts\") pod \"aodh-db-create-gqmmk\" (UID: \"1efaf577-ce46-4e44-a842-1d283d170872\") " pod="openstack/aodh-db-create-gqmmk" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.280214 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpkzh" event={"ID":"00eec1e5-054c-4c87-ad69-ce449d1aa577","Type":"ContainerStarted","Data":"b75d698ec4b93946631a20cdfcf2d09785fe647703b03030ae8d8a4d86af6798"} Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.289254 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jzsj\" (UniqueName: \"kubernetes.io/projected/1efaf577-ce46-4e44-a842-1d283d170872-kube-api-access-4jzsj\") pod \"aodh-db-create-gqmmk\" (UID: \"1efaf577-ce46-4e44-a842-1d283d170872\") " pod="openstack/aodh-db-create-gqmmk" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 
09:27:21.292616 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef00418a-82b1-46ac-b1af-d43bab22cdd7","Type":"ContainerStarted","Data":"c08c55d4973b1b44cefb6000659f130d7e3193eab1fd885a862b2dd81afef2ca"} Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.306635 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" event={"ID":"653ad6ae-7808-49a1-8f07-484c37dfeb66","Type":"ContainerDied","Data":"0cac50a0805fe024c0c817cc9846648968338493666183a89803e21af108886e"} Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.306738 4830 scope.go:117] "RemoveContainer" containerID="e175ac6da10a42a62ab3d7bc4e420a07f3178def8398c209a369acf2010f25b4" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.307330 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f6bc4c6c9-pjvlb" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.315964 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"211bb9c5-07d6-4936-b444-3544b2db1b19","Type":"ContainerDied","Data":"a4f0fd01ca876d32e2c87a06a7eec19e5c685a00ba6c375951d8411967f97816"} Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.316060 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.369850 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk429\" (UniqueName: \"kubernetes.io/projected/2d6a5be9-79bf-46d1-a45e-999d7bc615c0-kube-api-access-lk429\") pod \"aodh-8ce7-account-create-update-hb68c\" (UID: \"2d6a5be9-79bf-46d1-a45e-999d7bc615c0\") " pod="openstack/aodh-8ce7-account-create-update-hb68c" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.370487 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d6a5be9-79bf-46d1-a45e-999d7bc615c0-operator-scripts\") pod \"aodh-8ce7-account-create-update-hb68c\" (UID: \"2d6a5be9-79bf-46d1-a45e-999d7bc615c0\") " pod="openstack/aodh-8ce7-account-create-update-hb68c" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.371887 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d6a5be9-79bf-46d1-a45e-999d7bc615c0-operator-scripts\") pod \"aodh-8ce7-account-create-update-hb68c\" (UID: \"2d6a5be9-79bf-46d1-a45e-999d7bc615c0\") " pod="openstack/aodh-8ce7-account-create-update-hb68c" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.381254 4830 scope.go:117] "RemoveContainer" containerID="3623580d42f6ceb7e958776487d7e6fd090435cc50f969e862a9e9df4b46a30c" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.413325 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-gqmmk" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.418148 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk429\" (UniqueName: \"kubernetes.io/projected/2d6a5be9-79bf-46d1-a45e-999d7bc615c0-kube-api-access-lk429\") pod \"aodh-8ce7-account-create-update-hb68c\" (UID: \"2d6a5be9-79bf-46d1-a45e-999d7bc615c0\") " pod="openstack/aodh-8ce7-account-create-update-hb68c" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.419697 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-pjvlb"] Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.426901 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-8ce7-account-create-update-hb68c" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.444287 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-pjvlb"] Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.495874 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.507737 4830 scope.go:117] "RemoveContainer" containerID="c6cd6a87e3962717e5e0987185c6b76612e45ff28f1bcc2ea82afc8fd824deb1" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.541471 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.596546 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.598974 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.607150 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.617953 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.703012 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7078937-4ecb-4aab-afd1-e60252550def-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b7078937-4ecb-4aab-afd1-e60252550def\") " pod="openstack/nova-cell0-conductor-0" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.703902 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7078937-4ecb-4aab-afd1-e60252550def-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b7078937-4ecb-4aab-afd1-e60252550def\") " pod="openstack/nova-cell0-conductor-0" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.704043 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdjjk\" (UniqueName: \"kubernetes.io/projected/b7078937-4ecb-4aab-afd1-e60252550def-kube-api-access-fdjjk\") pod \"nova-cell0-conductor-0\" (UID: \"b7078937-4ecb-4aab-afd1-e60252550def\") " pod="openstack/nova-cell0-conductor-0" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.808004 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b7078937-4ecb-4aab-afd1-e60252550def-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b7078937-4ecb-4aab-afd1-e60252550def\") " pod="openstack/nova-cell0-conductor-0" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.808152 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdjjk\" (UniqueName: \"kubernetes.io/projected/b7078937-4ecb-4aab-afd1-e60252550def-kube-api-access-fdjjk\") pod \"nova-cell0-conductor-0\" (UID: \"b7078937-4ecb-4aab-afd1-e60252550def\") " pod="openstack/nova-cell0-conductor-0" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.808248 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7078937-4ecb-4aab-afd1-e60252550def-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b7078937-4ecb-4aab-afd1-e60252550def\") " pod="openstack/nova-cell0-conductor-0" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.818008 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7078937-4ecb-4aab-afd1-e60252550def-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b7078937-4ecb-4aab-afd1-e60252550def\") " pod="openstack/nova-cell0-conductor-0" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.821801 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7078937-4ecb-4aab-afd1-e60252550def-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b7078937-4ecb-4aab-afd1-e60252550def\") " pod="openstack/nova-cell0-conductor-0" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.836763 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdjjk\" (UniqueName: \"kubernetes.io/projected/b7078937-4ecb-4aab-afd1-e60252550def-kube-api-access-fdjjk\") pod \"nova-cell0-conductor-0\" (UID: \"b7078937-4ecb-4aab-afd1-e60252550def\") " pod="openstack/nova-cell0-conductor-0" Jan 31 09:27:21 crc kubenswrapper[4830]: I0131 09:27:21.963607 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 31 09:27:22 crc kubenswrapper[4830]: I0131 09:27:22.047631 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z7qcf"] Jan 31 09:27:22 crc kubenswrapper[4830]: I0131 09:27:22.299925 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="211bb9c5-07d6-4936-b444-3544b2db1b19" path="/var/lib/kubelet/pods/211bb9c5-07d6-4936-b444-3544b2db1b19/volumes" Jan 31 09:27:22 crc kubenswrapper[4830]: I0131 09:27:22.302958 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="653ad6ae-7808-49a1-8f07-484c37dfeb66" path="/var/lib/kubelet/pods/653ad6ae-7808-49a1-8f07-484c37dfeb66/volumes" Jan 31 09:27:22 crc kubenswrapper[4830]: I0131 09:27:22.379529 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef00418a-82b1-46ac-b1af-d43bab22cdd7","Type":"ContainerStarted","Data":"315ffd30559b1ed532e837143e2d92e855d451508e9d85df5ce2487b259e3029"} Jan 31 09:27:22 crc kubenswrapper[4830]: I0131 09:27:22.382914 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7qcf" event={"ID":"01a16d5c-bea7-4cab-8c88-206e4c5c901d","Type":"ContainerStarted","Data":"c421d717381a092b193bbdd5a54e791946f7df44748267e7ccb27f58fd7a9072"} Jan 31 09:27:22 crc kubenswrapper[4830]: I0131 09:27:22.511576 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-gqmmk"] Jan 31 09:27:22 crc kubenswrapper[4830]: I0131 09:27:22.755405 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-8ce7-account-create-update-hb68c"] Jan 31 09:27:22 crc kubenswrapper[4830]: I0131 09:27:22.966695 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 09:27:23 crc kubenswrapper[4830]: I0131 09:27:23.426464 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b7078937-4ecb-4aab-afd1-e60252550def","Type":"ContainerStarted","Data":"4428af1a68ba3b888bd7a99b21c6309f0a91c1bd7758cfab537746979ea99b5d"} Jan 31 09:27:23 crc kubenswrapper[4830]: I0131 09:27:23.426576 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b7078937-4ecb-4aab-afd1-e60252550def","Type":"ContainerStarted","Data":"5d786e1735ea471152a95b320d671a8d615985b40234fd2de0cd43becce30cce"} Jan 31 09:27:23 crc kubenswrapper[4830]: I0131 09:27:23.428695 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 31 09:27:23 crc kubenswrapper[4830]: I0131 09:27:23.434577 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-8ce7-account-create-update-hb68c" event={"ID":"2d6a5be9-79bf-46d1-a45e-999d7bc615c0","Type":"ContainerStarted","Data":"75c6e469317753e2f9b505e7dd6253dbbfdea1e9bd69ef61b34bc05ceb7c1481"} Jan 31 09:27:23 crc kubenswrapper[4830]: I0131 09:27:23.434671 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-8ce7-account-create-update-hb68c" event={"ID":"2d6a5be9-79bf-46d1-a45e-999d7bc615c0","Type":"ContainerStarted","Data":"89452d63f53abf639fef96ba16c60e1f72eefc1a5a490c63dcb7883284b90872"} Jan 31 09:27:23 crc kubenswrapper[4830]: I0131 09:27:23.447042 4830 generic.go:334] "Generic (PLEG): container finished" podID="01a16d5c-bea7-4cab-8c88-206e4c5c901d" containerID="d8b80451603f88d95edc41b647953eb26bdf22202f7a70ced037f6c1976b7e46" exitCode=0 Jan 31 
09:27:23 crc kubenswrapper[4830]: I0131 09:27:23.447271 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7qcf" event={"ID":"01a16d5c-bea7-4cab-8c88-206e4c5c901d","Type":"ContainerDied","Data":"d8b80451603f88d95edc41b647953eb26bdf22202f7a70ced037f6c1976b7e46"} Jan 31 09:27:23 crc kubenswrapper[4830]: I0131 09:27:23.454631 4830 generic.go:334] "Generic (PLEG): container finished" podID="1efaf577-ce46-4e44-a842-1d283d170872" containerID="96b58b789b29643ae16bc92e02e2044a4014e4f976d66272b70a7302d5729868" exitCode=0 Jan 31 09:27:23 crc kubenswrapper[4830]: I0131 09:27:23.455849 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-gqmmk" event={"ID":"1efaf577-ce46-4e44-a842-1d283d170872","Type":"ContainerDied","Data":"96b58b789b29643ae16bc92e02e2044a4014e4f976d66272b70a7302d5729868"} Jan 31 09:27:23 crc kubenswrapper[4830]: I0131 09:27:23.482162 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-gqmmk" event={"ID":"1efaf577-ce46-4e44-a842-1d283d170872","Type":"ContainerStarted","Data":"fba26276e9edc9778ac404bad1096a59cad8ac098f86fe41dbb6656d26f8979c"} Jan 31 09:27:23 crc kubenswrapper[4830]: I0131 09:27:23.509546 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.509511308 podStartE2EDuration="2.509511308s" podCreationTimestamp="2026-01-31 09:27:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:27:23.450821089 +0000 UTC m=+1587.944183531" watchObservedRunningTime="2026-01-31 09:27:23.509511308 +0000 UTC m=+1588.002873750" Jan 31 09:27:23 crc kubenswrapper[4830]: I0131 09:27:23.515454 4830 generic.go:334] "Generic (PLEG): container finished" podID="00eec1e5-054c-4c87-ad69-ce449d1aa577" containerID="b75d698ec4b93946631a20cdfcf2d09785fe647703b03030ae8d8a4d86af6798" exitCode=0 Jan 31 09:27:23 crc kubenswrapper[4830]: I0131 09:27:23.515520 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpkzh" event={"ID":"00eec1e5-054c-4c87-ad69-ce449d1aa577","Type":"ContainerDied","Data":"b75d698ec4b93946631a20cdfcf2d09785fe647703b03030ae8d8a4d86af6798"} Jan 31 09:27:24 crc kubenswrapper[4830]: I0131 09:27:24.545485 4830 generic.go:334] "Generic (PLEG): container finished" podID="2d6a5be9-79bf-46d1-a45e-999d7bc615c0" containerID="75c6e469317753e2f9b505e7dd6253dbbfdea1e9bd69ef61b34bc05ceb7c1481" exitCode=0 Jan 31 09:27:24 crc kubenswrapper[4830]: I0131 09:27:24.546393 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-8ce7-account-create-update-hb68c" event={"ID":"2d6a5be9-79bf-46d1-a45e-999d7bc615c0","Type":"ContainerDied","Data":"75c6e469317753e2f9b505e7dd6253dbbfdea1e9bd69ef61b34bc05ceb7c1481"} Jan 31 09:27:24 crc kubenswrapper[4830]: I0131 09:27:24.553005 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef00418a-82b1-46ac-b1af-d43bab22cdd7","Type":"ContainerStarted","Data":"077ae2e5ac08ad7f845a7b26112f7e5918825e4836931f9cd20feeac9df70d0b"} Jan 31 09:27:24 crc kubenswrapper[4830]: I0131 09:27:24.556570 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 09:27:24 crc kubenswrapper[4830]: I0131 09:27:24.559524 4830 generic.go:334] "Generic (PLEG): container finished" podID="714acb03-29b5-4da1-8f14-9587cabcd207" 
containerID="f0b45e6d646475b426502e17ae647c2de02a19d7fbebfbe7cacdbfcc6685fbf5" exitCode=0 Jan 31 09:27:24 crc kubenswrapper[4830]: I0131 09:27:24.559648 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-svqlf" event={"ID":"714acb03-29b5-4da1-8f14-9587cabcd207","Type":"ContainerDied","Data":"f0b45e6d646475b426502e17ae647c2de02a19d7fbebfbe7cacdbfcc6685fbf5"} Jan 31 09:27:24 crc kubenswrapper[4830]: I0131 09:27:24.562436 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpkzh" event={"ID":"00eec1e5-054c-4c87-ad69-ce449d1aa577","Type":"ContainerStarted","Data":"a53e66388c2d40842773fc396da9ba83aea4465019fc1c5026ae47c2acc800d2"} Jan 31 09:27:24 crc kubenswrapper[4830]: I0131 09:27:24.616559 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.391251316 podStartE2EDuration="7.616528221s" podCreationTimestamp="2026-01-31 09:27:17 +0000 UTC" firstStartedPulling="2026-01-31 09:27:18.387054476 +0000 UTC m=+1582.880416928" lastFinishedPulling="2026-01-31 09:27:23.612331391 +0000 UTC m=+1588.105693833" observedRunningTime="2026-01-31 09:27:24.582119363 +0000 UTC m=+1589.075481805" watchObservedRunningTime="2026-01-31 09:27:24.616528221 +0000 UTC m=+1589.109890663" Jan 31 09:27:24 crc kubenswrapper[4830]: I0131 09:27:24.656675 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xpkzh" podStartSLOduration=2.878471438 podStartE2EDuration="7.656643631s" podCreationTimestamp="2026-01-31 09:27:17 +0000 UTC" firstStartedPulling="2026-01-31 09:27:19.147179747 +0000 UTC m=+1583.640542189" lastFinishedPulling="2026-01-31 09:27:23.92535194 +0000 UTC m=+1588.418714382" observedRunningTime="2026-01-31 09:27:24.629829659 +0000 UTC m=+1589.123192101" watchObservedRunningTime="2026-01-31 09:27:24.656643631 +0000 UTC m=+1589.150006073" Jan 31 09:27:25 crc kubenswrapper[4830]: I0131 09:27:25.295115 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-gqmmk" Jan 31 09:27:25 crc kubenswrapper[4830]: I0131 09:27:25.306251 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-8ce7-account-create-update-hb68c" Jan 31 09:27:25 crc kubenswrapper[4830]: I0131 09:27:25.435591 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jzsj\" (UniqueName: \"kubernetes.io/projected/1efaf577-ce46-4e44-a842-1d283d170872-kube-api-access-4jzsj\") pod \"1efaf577-ce46-4e44-a842-1d283d170872\" (UID: \"1efaf577-ce46-4e44-a842-1d283d170872\") " Jan 31 09:27:25 crc kubenswrapper[4830]: I0131 09:27:25.435785 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk429\" (UniqueName: \"kubernetes.io/projected/2d6a5be9-79bf-46d1-a45e-999d7bc615c0-kube-api-access-lk429\") pod \"2d6a5be9-79bf-46d1-a45e-999d7bc615c0\" (UID: \"2d6a5be9-79bf-46d1-a45e-999d7bc615c0\") " Jan 31 09:27:25 crc kubenswrapper[4830]: I0131 09:27:25.435815 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d6a5be9-79bf-46d1-a45e-999d7bc615c0-operator-scripts\") pod \"2d6a5be9-79bf-46d1-a45e-999d7bc615c0\" (UID: \"2d6a5be9-79bf-46d1-a45e-999d7bc615c0\") " Jan 31 09:27:25 crc kubenswrapper[4830]: I0131 09:27:25.435923 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1efaf577-ce46-4e44-a842-1d283d170872-operator-scripts\") pod \"1efaf577-ce46-4e44-a842-1d283d170872\" (UID: \"1efaf577-ce46-4e44-a842-1d283d170872\") " Jan 31 09:27:25 crc kubenswrapper[4830]: I0131 09:27:25.436484 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6a5be9-79bf-46d1-a45e-999d7bc615c0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2d6a5be9-79bf-46d1-a45e-999d7bc615c0" (UID: "2d6a5be9-79bf-46d1-a45e-999d7bc615c0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:27:25 crc kubenswrapper[4830]: I0131 09:27:25.436561 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1efaf577-ce46-4e44-a842-1d283d170872-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1efaf577-ce46-4e44-a842-1d283d170872" (UID: "1efaf577-ce46-4e44-a842-1d283d170872"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:27:25 crc kubenswrapper[4830]: I0131 09:27:25.437543 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d6a5be9-79bf-46d1-a45e-999d7bc615c0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:25 crc kubenswrapper[4830]: I0131 09:27:25.437569 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1efaf577-ce46-4e44-a842-1d283d170872-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:25 crc kubenswrapper[4830]: I0131 09:27:25.451755 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1efaf577-ce46-4e44-a842-1d283d170872-kube-api-access-4jzsj" (OuterVolumeSpecName: "kube-api-access-4jzsj") pod "1efaf577-ce46-4e44-a842-1d283d170872" (UID: "1efaf577-ce46-4e44-a842-1d283d170872"). InnerVolumeSpecName "kube-api-access-4jzsj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:27:25 crc kubenswrapper[4830]: I0131 09:27:25.456020 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d6a5be9-79bf-46d1-a45e-999d7bc615c0-kube-api-access-lk429" (OuterVolumeSpecName: "kube-api-access-lk429") pod "2d6a5be9-79bf-46d1-a45e-999d7bc615c0" (UID: "2d6a5be9-79bf-46d1-a45e-999d7bc615c0"). InnerVolumeSpecName "kube-api-access-lk429". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:27:25 crc kubenswrapper[4830]: I0131 09:27:25.539868 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jzsj\" (UniqueName: \"kubernetes.io/projected/1efaf577-ce46-4e44-a842-1d283d170872-kube-api-access-4jzsj\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:25 crc kubenswrapper[4830]: I0131 09:27:25.539920 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk429\" (UniqueName: \"kubernetes.io/projected/2d6a5be9-79bf-46d1-a45e-999d7bc615c0-kube-api-access-lk429\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:25 crc kubenswrapper[4830]: I0131 09:27:25.579097 4830 generic.go:334] "Generic (PLEG): container finished" podID="c1533bfd-c9f9-4c8d-9cb2-085f694b1f45" containerID="b107efaccf2b98b8561f3fc2480bbd19ae235859bc35b0e5ac1cbd07d9dadcb3" exitCode=0 Jan 31 09:27:25 crc kubenswrapper[4830]: I0131 09:27:25.579178 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-shj89" event={"ID":"c1533bfd-c9f9-4c8d-9cb2-085f694b1f45","Type":"ContainerDied","Data":"b107efaccf2b98b8561f3fc2480bbd19ae235859bc35b0e5ac1cbd07d9dadcb3"} Jan 31 09:27:25 crc kubenswrapper[4830]: I0131 09:27:25.582458 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7qcf" event={"ID":"01a16d5c-bea7-4cab-8c88-206e4c5c901d","Type":"ContainerStarted","Data":"9038e9d98b25ae3230e069f5b241cb6a8f96e63cfae411b12e4c5bb53b61169f"} Jan 31 09:27:25 crc kubenswrapper[4830]: I0131 09:27:25.584858 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-gqmmk" Jan 31 09:27:25 crc kubenswrapper[4830]: I0131 09:27:25.584852 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-gqmmk" event={"ID":"1efaf577-ce46-4e44-a842-1d283d170872","Type":"ContainerDied","Data":"fba26276e9edc9778ac404bad1096a59cad8ac098f86fe41dbb6656d26f8979c"} Jan 31 09:27:25 crc kubenswrapper[4830]: I0131 09:27:25.585071 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fba26276e9edc9778ac404bad1096a59cad8ac098f86fe41dbb6656d26f8979c" Jan 31 09:27:25 crc kubenswrapper[4830]: I0131 09:27:25.587683 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-8ce7-account-create-update-hb68c" event={"ID":"2d6a5be9-79bf-46d1-a45e-999d7bc615c0","Type":"ContainerDied","Data":"89452d63f53abf639fef96ba16c60e1f72eefc1a5a490c63dcb7883284b90872"} Jan 31 09:27:25 crc kubenswrapper[4830]: I0131 09:27:25.587738 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89452d63f53abf639fef96ba16c60e1f72eefc1a5a490c63dcb7883284b90872" Jan 31 09:27:25 crc kubenswrapper[4830]: I0131 09:27:25.587741 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-8ce7-account-create-update-hb68c" Jan 31 09:27:26 crc kubenswrapper[4830]: I0131 09:27:26.171864 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-svqlf" Jan 31 09:27:26 crc kubenswrapper[4830]: I0131 09:27:26.263351 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/714acb03-29b5-4da1-8f14-9587cabcd207-config-data\") pod \"714acb03-29b5-4da1-8f14-9587cabcd207\" (UID: \"714acb03-29b5-4da1-8f14-9587cabcd207\") " Jan 31 09:27:26 crc kubenswrapper[4830]: I0131 09:27:26.263445 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/714acb03-29b5-4da1-8f14-9587cabcd207-combined-ca-bundle\") pod \"714acb03-29b5-4da1-8f14-9587cabcd207\" (UID: \"714acb03-29b5-4da1-8f14-9587cabcd207\") " Jan 31 09:27:26 crc kubenswrapper[4830]: I0131 09:27:26.263542 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/714acb03-29b5-4da1-8f14-9587cabcd207-scripts\") pod \"714acb03-29b5-4da1-8f14-9587cabcd207\" (UID: \"714acb03-29b5-4da1-8f14-9587cabcd207\") " Jan 31 09:27:26 crc kubenswrapper[4830]: I0131 09:27:26.263698 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88mr5\" (UniqueName: \"kubernetes.io/projected/714acb03-29b5-4da1-8f14-9587cabcd207-kube-api-access-88mr5\") pod \"714acb03-29b5-4da1-8f14-9587cabcd207\" (UID: \"714acb03-29b5-4da1-8f14-9587cabcd207\") " Jan 31 09:27:26 crc kubenswrapper[4830]: I0131 09:27:26.276439 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/714acb03-29b5-4da1-8f14-9587cabcd207-scripts" (OuterVolumeSpecName: "scripts") pod "714acb03-29b5-4da1-8f14-9587cabcd207" (UID: "714acb03-29b5-4da1-8f14-9587cabcd207"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:27:26 crc kubenswrapper[4830]: I0131 09:27:26.304217 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/714acb03-29b5-4da1-8f14-9587cabcd207-kube-api-access-88mr5" (OuterVolumeSpecName: "kube-api-access-88mr5") pod "714acb03-29b5-4da1-8f14-9587cabcd207" (UID: "714acb03-29b5-4da1-8f14-9587cabcd207"). InnerVolumeSpecName "kube-api-access-88mr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:27:26 crc kubenswrapper[4830]: I0131 09:27:26.314020 4830 scope.go:117] "RemoveContainer" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0" Jan 31 09:27:26 crc kubenswrapper[4830]: I0131 09:27:26.314244 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/714acb03-29b5-4da1-8f14-9587cabcd207-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "714acb03-29b5-4da1-8f14-9587cabcd207" (UID: "714acb03-29b5-4da1-8f14-9587cabcd207"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:27:26 crc kubenswrapper[4830]: E0131 09:27:26.316125 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:27:26 crc kubenswrapper[4830]: I0131 09:27:26.322950 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/714acb03-29b5-4da1-8f14-9587cabcd207-config-data" (OuterVolumeSpecName: "config-data") pod "714acb03-29b5-4da1-8f14-9587cabcd207" (UID: "714acb03-29b5-4da1-8f14-9587cabcd207"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:27:26 crc kubenswrapper[4830]: I0131 09:27:26.370964 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/714acb03-29b5-4da1-8f14-9587cabcd207-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:26 crc kubenswrapper[4830]: I0131 09:27:26.371022 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/714acb03-29b5-4da1-8f14-9587cabcd207-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:26 crc kubenswrapper[4830]: I0131 09:27:26.371041 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/714acb03-29b5-4da1-8f14-9587cabcd207-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:26 crc kubenswrapper[4830]: I0131 09:27:26.371053 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88mr5\" (UniqueName: \"kubernetes.io/projected/714acb03-29b5-4da1-8f14-9587cabcd207-kube-api-access-88mr5\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:26 crc kubenswrapper[4830]: I0131 09:27:26.603419 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-svqlf" event={"ID":"714acb03-29b5-4da1-8f14-9587cabcd207","Type":"ContainerDied","Data":"d342facad64b2e2a01671efafb703490954b7cc14f9d493d80ec473b9d7c8ff7"} Jan 31 09:27:26 crc kubenswrapper[4830]: I0131 09:27:26.603486 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d342facad64b2e2a01671efafb703490954b7cc14f9d493d80ec473b9d7c8ff7" Jan 31 09:27:26 crc kubenswrapper[4830]: I0131 09:27:26.603492 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-svqlf" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.131420 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-shj89" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.303480 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1533bfd-c9f9-4c8d-9cb2-085f694b1f45-combined-ca-bundle\") pod \"c1533bfd-c9f9-4c8d-9cb2-085f694b1f45\" (UID: \"c1533bfd-c9f9-4c8d-9cb2-085f694b1f45\") " Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.303819 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzzxk\" (UniqueName: \"kubernetes.io/projected/c1533bfd-c9f9-4c8d-9cb2-085f694b1f45-kube-api-access-wzzxk\") pod \"c1533bfd-c9f9-4c8d-9cb2-085f694b1f45\" (UID: \"c1533bfd-c9f9-4c8d-9cb2-085f694b1f45\") " Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.303893 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1533bfd-c9f9-4c8d-9cb2-085f694b1f45-scripts\") pod \"c1533bfd-c9f9-4c8d-9cb2-085f694b1f45\" (UID: \"c1533bfd-c9f9-4c8d-9cb2-085f694b1f45\") " Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.304038 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1533bfd-c9f9-4c8d-9cb2-085f694b1f45-config-data\") pod \"c1533bfd-c9f9-4c8d-9cb2-085f694b1f45\" (UID: \"c1533bfd-c9f9-4c8d-9cb2-085f694b1f45\") " Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.311054 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1533bfd-c9f9-4c8d-9cb2-085f694b1f45-scripts" (OuterVolumeSpecName: "scripts") pod "c1533bfd-c9f9-4c8d-9cb2-085f694b1f45" (UID: "c1533bfd-c9f9-4c8d-9cb2-085f694b1f45"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.312095 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1533bfd-c9f9-4c8d-9cb2-085f694b1f45-kube-api-access-wzzxk" (OuterVolumeSpecName: "kube-api-access-wzzxk") pod "c1533bfd-c9f9-4c8d-9cb2-085f694b1f45" (UID: "c1533bfd-c9f9-4c8d-9cb2-085f694b1f45"). InnerVolumeSpecName "kube-api-access-wzzxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.346752 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1533bfd-c9f9-4c8d-9cb2-085f694b1f45-config-data" (OuterVolumeSpecName: "config-data") pod "c1533bfd-c9f9-4c8d-9cb2-085f694b1f45" (UID: "c1533bfd-c9f9-4c8d-9cb2-085f694b1f45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.355172 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1533bfd-c9f9-4c8d-9cb2-085f694b1f45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1533bfd-c9f9-4c8d-9cb2-085f694b1f45" (UID: "c1533bfd-c9f9-4c8d-9cb2-085f694b1f45"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.392914 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xpkzh" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.393345 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xpkzh" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.407809 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzzxk\" (UniqueName: \"kubernetes.io/projected/c1533bfd-c9f9-4c8d-9cb2-085f694b1f45-kube-api-access-wzzxk\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.407850 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1533bfd-c9f9-4c8d-9cb2-085f694b1f45-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.407860 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1533bfd-c9f9-4c8d-9cb2-085f694b1f45-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.407869 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1533bfd-c9f9-4c8d-9cb2-085f694b1f45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.462093 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xpkzh" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.619783 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-shj89" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.619712 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-shj89" event={"ID":"c1533bfd-c9f9-4c8d-9cb2-085f694b1f45","Type":"ContainerDied","Data":"fdf340414cc96a9f5ac3ecbd9a0a9e72f3e69a59e75b266492c58dd1f06eada7"} Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.619937 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdf340414cc96a9f5ac3ecbd9a0a9e72f3e69a59e75b266492c58dd1f06eada7" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.703913 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 31 09:27:27 crc kubenswrapper[4830]: E0131 09:27:27.704549 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1533bfd-c9f9-4c8d-9cb2-085f694b1f45" containerName="nova-cell1-conductor-db-sync" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.704562 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1533bfd-c9f9-4c8d-9cb2-085f694b1f45" containerName="nova-cell1-conductor-db-sync" Jan 31 09:27:27 crc kubenswrapper[4830]: E0131 09:27:27.704585 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="714acb03-29b5-4da1-8f14-9587cabcd207" containerName="nova-manage" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.704591 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="714acb03-29b5-4da1-8f14-9587cabcd207" containerName="nova-manage" Jan 31 09:27:27 crc kubenswrapper[4830]: E0131 09:27:27.704620 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d6a5be9-79bf-46d1-a45e-999d7bc615c0" containerName="mariadb-account-create-update" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.704632 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d6a5be9-79bf-46d1-a45e-999d7bc615c0" containerName="mariadb-account-create-update" Jan 31 09:27:27 crc kubenswrapper[4830]: E0131 09:27:27.704669 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1efaf577-ce46-4e44-a842-1d283d170872" containerName="mariadb-database-create" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.704675 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1efaf577-ce46-4e44-a842-1d283d170872" containerName="mariadb-database-create" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.704903 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d6a5be9-79bf-46d1-a45e-999d7bc615c0" containerName="mariadb-account-create-update" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.704916 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1533bfd-c9f9-4c8d-9cb2-085f694b1f45" containerName="nova-cell1-conductor-db-sync" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.704929 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="714acb03-29b5-4da1-8f14-9587cabcd207" containerName="nova-manage" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.704939 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="1efaf577-ce46-4e44-a842-1d283d170872" containerName="mariadb-database-create" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.705905 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.728034 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.777409 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.820573 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3543533-b215-4345-b520-286551717692-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e3543533-b215-4345-b520-286551717692\") " pod="openstack/nova-cell1-conductor-0" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.820713 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rkrj\" (UniqueName: \"kubernetes.io/projected/e3543533-b215-4345-b520-286551717692-kube-api-access-7rkrj\") pod \"nova-cell1-conductor-0\" (UID: \"e3543533-b215-4345-b520-286551717692\") " pod="openstack/nova-cell1-conductor-0" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.820974 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3543533-b215-4345-b520-286551717692-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e3543533-b215-4345-b520-286551717692\") " pod="openstack/nova-cell1-conductor-0" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.926992 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3543533-b215-4345-b520-286551717692-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e3543533-b215-4345-b520-286551717692\") " pod="openstack/nova-cell1-conductor-0" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.927101 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3543533-b215-4345-b520-286551717692-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e3543533-b215-4345-b520-286551717692\") " pod="openstack/nova-cell1-conductor-0" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.927190 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rkrj\" (UniqueName: \"kubernetes.io/projected/e3543533-b215-4345-b520-286551717692-kube-api-access-7rkrj\") pod \"nova-cell1-conductor-0\" (UID: \"e3543533-b215-4345-b520-286551717692\") " pod="openstack/nova-cell1-conductor-0" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.937670 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3543533-b215-4345-b520-286551717692-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e3543533-b215-4345-b520-286551717692\") " pod="openstack/nova-cell1-conductor-0" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.938621 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3543533-b215-4345-b520-286551717692-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e3543533-b215-4345-b520-286551717692\") " pod="openstack/nova-cell1-conductor-0" Jan 31 09:27:27 crc kubenswrapper[4830]: I0131 09:27:27.963516 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rkrj\" (UniqueName: \"kubernetes.io/projected/e3543533-b215-4345-b520-286551717692-kube-api-access-7rkrj\") pod \"nova-cell1-conductor-0\" (UID: \"e3543533-b215-4345-b520-286551717692\") " pod="openstack/nova-cell1-conductor-0" Jan 31 09:27:28 crc kubenswrapper[4830]: I0131 09:27:28.035946 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 31 09:27:28 crc kubenswrapper[4830]: W0131 09:27:28.680190 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3543533_b215_4345_b520_286551717692.slice/crio-f56ce8a2bc19d9e4e020853e33d53e34f9c18bc96414bbceca15e8fc5215a9ff WatchSource:0}: Error finding container f56ce8a2bc19d9e4e020853e33d53e34f9c18bc96414bbceca15e8fc5215a9ff: Status 404 returned error can't find the container with id f56ce8a2bc19d9e4e020853e33d53e34f9c18bc96414bbceca15e8fc5215a9ff Jan 31 09:27:28 crc kubenswrapper[4830]: I0131 09:27:28.683597 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 31 09:27:29 crc kubenswrapper[4830]: I0131 09:27:29.650017 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e3543533-b215-4345-b520-286551717692","Type":"ContainerStarted","Data":"4adb2fc7279a594abba84d8eb57db33798c11d7d3f22a4e0ec48f913fdfb6751"} Jan 31 09:27:29 crc kubenswrapper[4830]: I0131 09:27:29.651092 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e3543533-b215-4345-b520-286551717692","Type":"ContainerStarted","Data":"f56ce8a2bc19d9e4e020853e33d53e34f9c18bc96414bbceca15e8fc5215a9ff"} Jan 31 09:27:29 crc kubenswrapper[4830]: I0131 09:27:29.651131 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 31 09:27:29 crc kubenswrapper[4830]: I0131 09:27:29.673609 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.673585976 podStartE2EDuration="2.673585976s" podCreationTimestamp="2026-01-31 09:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:27:29.673311018 +0000 UTC m=+1594.166673460" watchObservedRunningTime="2026-01-31 09:27:29.673585976 +0000 UTC m=+1594.166948418" Jan 31 09:27:32 crc kubenswrapper[4830]: I0131 09:27:32.007626 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 31 09:27:32 crc kubenswrapper[4830]: I0131 09:27:32.220189 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-5j7sd"] Jan 31 09:27:32 crc kubenswrapper[4830]: I0131 09:27:32.222621 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-5j7sd" Jan 31 09:27:32 crc kubenswrapper[4830]: I0131 09:27:32.225247 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 31 09:27:32 crc kubenswrapper[4830]: I0131 09:27:32.226057 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-mz4qw" Jan 31 09:27:32 crc kubenswrapper[4830]: I0131 09:27:32.226196 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 31 09:27:32 crc kubenswrapper[4830]: I0131 09:27:32.233035 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 31 09:27:32 crc kubenswrapper[4830]: I0131 09:27:32.235242 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-5j7sd"] Jan 31 09:27:32 crc kubenswrapper[4830]: I0131 09:27:32.311997 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkpds\" (UniqueName: \"kubernetes.io/projected/f854d4ac-83f5-411d-a3f0-67a0b771b474-kube-api-access-qkpds\") pod \"aodh-db-sync-5j7sd\" (UID: \"f854d4ac-83f5-411d-a3f0-67a0b771b474\") " pod="openstack/aodh-db-sync-5j7sd" Jan 31 09:27:32 crc kubenswrapper[4830]: I0131 09:27:32.312096 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f854d4ac-83f5-411d-a3f0-67a0b771b474-combined-ca-bundle\") pod \"aodh-db-sync-5j7sd\" (UID: \"f854d4ac-83f5-411d-a3f0-67a0b771b474\") " pod="openstack/aodh-db-sync-5j7sd" Jan 31 09:27:32 crc kubenswrapper[4830]: I0131 09:27:32.312400 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f854d4ac-83f5-411d-a3f0-67a0b771b474-config-data\") pod \"aodh-db-sync-5j7sd\" (UID: \"f854d4ac-83f5-411d-a3f0-67a0b771b474\") " pod="openstack/aodh-db-sync-5j7sd" Jan 31 09:27:32 crc kubenswrapper[4830]: I0131 09:27:32.312617 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f854d4ac-83f5-411d-a3f0-67a0b771b474-scripts\") pod \"aodh-db-sync-5j7sd\" (UID: \"f854d4ac-83f5-411d-a3f0-67a0b771b474\") " pod="openstack/aodh-db-sync-5j7sd" Jan 31 09:27:32 crc kubenswrapper[4830]: I0131 09:27:32.416814 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f854d4ac-83f5-411d-a3f0-67a0b771b474-scripts\") pod \"aodh-db-sync-5j7sd\" (UID: \"f854d4ac-83f5-411d-a3f0-67a0b771b474\") " pod="openstack/aodh-db-sync-5j7sd" Jan 31 09:27:32 crc kubenswrapper[4830]: I0131 09:27:32.417787 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkpds\" (UniqueName: \"kubernetes.io/projected/f854d4ac-83f5-411d-a3f0-67a0b771b474-kube-api-access-qkpds\") pod \"aodh-db-sync-5j7sd\" (UID: \"f854d4ac-83f5-411d-a3f0-67a0b771b474\") " pod="openstack/aodh-db-sync-5j7sd" Jan 31 09:27:32 crc kubenswrapper[4830]: I0131 09:27:32.418117 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f854d4ac-83f5-411d-a3f0-67a0b771b474-combined-ca-bundle\") pod \"aodh-db-sync-5j7sd\" (UID: \"f854d4ac-83f5-411d-a3f0-67a0b771b474\") " pod="openstack/aodh-db-sync-5j7sd" Jan 31 09:27:32 crc 
kubenswrapper[4830]: I0131 09:27:32.418354 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f854d4ac-83f5-411d-a3f0-67a0b771b474-config-data\") pod \"aodh-db-sync-5j7sd\" (UID: \"f854d4ac-83f5-411d-a3f0-67a0b771b474\") " pod="openstack/aodh-db-sync-5j7sd" Jan 31 09:27:32 crc kubenswrapper[4830]: I0131 09:27:32.425124 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f854d4ac-83f5-411d-a3f0-67a0b771b474-combined-ca-bundle\") pod \"aodh-db-sync-5j7sd\" (UID: \"f854d4ac-83f5-411d-a3f0-67a0b771b474\") " pod="openstack/aodh-db-sync-5j7sd" Jan 31 09:27:32 crc kubenswrapper[4830]: I0131 09:27:32.427999 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f854d4ac-83f5-411d-a3f0-67a0b771b474-config-data\") pod \"aodh-db-sync-5j7sd\" (UID: \"f854d4ac-83f5-411d-a3f0-67a0b771b474\") " pod="openstack/aodh-db-sync-5j7sd" Jan 31 09:27:32 crc kubenswrapper[4830]: I0131 09:27:32.431270 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f854d4ac-83f5-411d-a3f0-67a0b771b474-scripts\") pod \"aodh-db-sync-5j7sd\" (UID: \"f854d4ac-83f5-411d-a3f0-67a0b771b474\") " pod="openstack/aodh-db-sync-5j7sd" Jan 31 09:27:32 crc kubenswrapper[4830]: I0131 09:27:32.442553 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkpds\" (UniqueName: \"kubernetes.io/projected/f854d4ac-83f5-411d-a3f0-67a0b771b474-kube-api-access-qkpds\") pod \"aodh-db-sync-5j7sd\" (UID: \"f854d4ac-83f5-411d-a3f0-67a0b771b474\") " pod="openstack/aodh-db-sync-5j7sd" Jan 31 09:27:32 crc kubenswrapper[4830]: I0131 09:27:32.559627 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-5j7sd"
Jan 31 09:27:33 crc kubenswrapper[4830]: I0131 09:27:33.207194 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-5j7sd"]
Jan 31 09:27:33 crc kubenswrapper[4830]: W0131 09:27:33.215387 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf854d4ac_83f5_411d_a3f0_67a0b771b474.slice/crio-3c337c7aea9d18ea7549a041d2cd4d0d53535081d4f8298a88e7b3b4c0c7e31b WatchSource:0}: Error finding container 3c337c7aea9d18ea7549a041d2cd4d0d53535081d4f8298a88e7b3b4c0c7e31b: Status 404 returned error can't find the container with id 3c337c7aea9d18ea7549a041d2cd4d0d53535081d4f8298a88e7b3b4c0c7e31b
Jan 31 09:27:33 crc kubenswrapper[4830]: I0131 09:27:33.713579 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-5j7sd" event={"ID":"f854d4ac-83f5-411d-a3f0-67a0b771b474","Type":"ContainerStarted","Data":"3c337c7aea9d18ea7549a041d2cd4d0d53535081d4f8298a88e7b3b4c0c7e31b"}
Jan 31 09:27:34 crc kubenswrapper[4830]: I0131 09:27:34.742209 4830 generic.go:334] "Generic (PLEG): container finished" podID="01a16d5c-bea7-4cab-8c88-206e4c5c901d" containerID="9038e9d98b25ae3230e069f5b241cb6a8f96e63cfae411b12e4c5bb53b61169f" exitCode=0
Jan 31 09:27:34 crc kubenswrapper[4830]: I0131 09:27:34.743672 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7qcf" event={"ID":"01a16d5c-bea7-4cab-8c88-206e4c5c901d","Type":"ContainerDied","Data":"9038e9d98b25ae3230e069f5b241cb6a8f96e63cfae411b12e4c5bb53b61169f"}
Jan 31 09:27:37 crc kubenswrapper[4830]: I0131 09:27:37.484157 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xpkzh"
Jan 31 09:27:37 crc kubenswrapper[4830]: I0131 09:27:37.554132 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xpkzh"]
Jan 31 09:27:37 crc kubenswrapper[4830]: I0131 09:27:37.791706 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xpkzh" podUID="00eec1e5-054c-4c87-ad69-ce449d1aa577" containerName="registry-server" containerID="cri-o://a53e66388c2d40842773fc396da9ba83aea4465019fc1c5026ae47c2acc800d2" gracePeriod=2
Jan 31 09:27:37 crc kubenswrapper[4830]: I0131 09:27:37.792173 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7qcf" event={"ID":"01a16d5c-bea7-4cab-8c88-206e4c5c901d","Type":"ContainerStarted","Data":"2187c5f1d23270780880c4f43561c556517a362b28d2de743139a8c9eccf5048"}
Jan 31 09:27:37 crc kubenswrapper[4830]: I0131 09:27:37.837370 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z7qcf" podStartSLOduration=4.347679186 podStartE2EDuration="17.837342662s" podCreationTimestamp="2026-01-31 09:27:20 +0000 UTC" firstStartedPulling="2026-01-31 09:27:23.451669403 +0000 UTC m=+1587.945031845" lastFinishedPulling="2026-01-31 09:27:36.941332869 +0000 UTC m=+1601.434695321" observedRunningTime="2026-01-31 09:27:37.821598955 +0000 UTC m=+1602.314961407" watchObservedRunningTime="2026-01-31 09:27:37.837342662 +0000 UTC m=+1602.330705104"
Jan 31 09:27:38 crc kubenswrapper[4830]: I0131 09:27:38.072253 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Jan 31 09:27:38 crc kubenswrapper[4830]: I0131 09:27:38.809991 4830 generic.go:334] "Generic (PLEG): container finished" podID="00eec1e5-054c-4c87-ad69-ce449d1aa577" containerID="a53e66388c2d40842773fc396da9ba83aea4465019fc1c5026ae47c2acc800d2" exitCode=0
Jan 31 09:27:38 crc kubenswrapper[4830]: I0131 09:27:38.810049 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpkzh" event={"ID":"00eec1e5-054c-4c87-ad69-ce449d1aa577","Type":"ContainerDied","Data":"a53e66388c2d40842773fc396da9ba83aea4465019fc1c5026ae47c2acc800d2"}
Jan 31 09:27:39 crc kubenswrapper[4830]: I0131 09:27:39.017534 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 31 09:27:39 crc kubenswrapper[4830]: I0131 09:27:39.019031 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 31 09:27:40 crc kubenswrapper[4830]: I0131 09:27:40.735539 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xpkzh"
Jan 31 09:27:40 crc kubenswrapper[4830]: I0131 09:27:40.846674 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-5j7sd" event={"ID":"f854d4ac-83f5-411d-a3f0-67a0b771b474","Type":"ContainerStarted","Data":"1c3087de53b5cdfe65bc7c2cefe319da0b574ee0938861f2361426e1b163eed1"}
Jan 31 09:27:40 crc kubenswrapper[4830]: I0131 09:27:40.850799 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpkzh" event={"ID":"00eec1e5-054c-4c87-ad69-ce449d1aa577","Type":"ContainerDied","Data":"78bd69b177c18208d51d058e7961fe03d1d36bd3def65cb7e8d9e6449bc2760e"}
Jan 31 09:27:40 crc kubenswrapper[4830]: I0131 09:27:40.850870 4830 scope.go:117] "RemoveContainer" containerID="a53e66388c2d40842773fc396da9ba83aea4465019fc1c5026ae47c2acc800d2"
Jan 31 09:27:40 crc kubenswrapper[4830]: I0131 09:27:40.851005 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xpkzh"
Jan 31 09:27:40 crc kubenswrapper[4830]: I0131 09:27:40.856854 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00eec1e5-054c-4c87-ad69-ce449d1aa577-utilities\") pod \"00eec1e5-054c-4c87-ad69-ce449d1aa577\" (UID: \"00eec1e5-054c-4c87-ad69-ce449d1aa577\") "
Jan 31 09:27:40 crc kubenswrapper[4830]: I0131 09:27:40.857013 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00eec1e5-054c-4c87-ad69-ce449d1aa577-catalog-content\") pod \"00eec1e5-054c-4c87-ad69-ce449d1aa577\" (UID: \"00eec1e5-054c-4c87-ad69-ce449d1aa577\") "
Jan 31 09:27:40 crc kubenswrapper[4830]: I0131 09:27:40.857059 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pmvd\" (UniqueName: \"kubernetes.io/projected/00eec1e5-054c-4c87-ad69-ce449d1aa577-kube-api-access-2pmvd\") pod \"00eec1e5-054c-4c87-ad69-ce449d1aa577\" (UID: \"00eec1e5-054c-4c87-ad69-ce449d1aa577\") "
Jan 31 09:27:40 crc kubenswrapper[4830]: I0131 09:27:40.860464 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00eec1e5-054c-4c87-ad69-ce449d1aa577-utilities" (OuterVolumeSpecName: "utilities") pod "00eec1e5-054c-4c87-ad69-ce449d1aa577" (UID: "00eec1e5-054c-4c87-ad69-ce449d1aa577"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:27:40 crc kubenswrapper[4830]: I0131 09:27:40.872978 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00eec1e5-054c-4c87-ad69-ce449d1aa577-kube-api-access-2pmvd" (OuterVolumeSpecName: "kube-api-access-2pmvd") pod "00eec1e5-054c-4c87-ad69-ce449d1aa577" (UID: "00eec1e5-054c-4c87-ad69-ce449d1aa577"). InnerVolumeSpecName "kube-api-access-2pmvd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:27:40 crc kubenswrapper[4830]: I0131 09:27:40.882941 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-5j7sd" podStartSLOduration=1.585241895 podStartE2EDuration="8.882915659s" podCreationTimestamp="2026-01-31 09:27:32 +0000 UTC" firstStartedPulling="2026-01-31 09:27:33.21854205 +0000 UTC m=+1597.711904492" lastFinishedPulling="2026-01-31 09:27:40.516215824 +0000 UTC m=+1605.009578256" observedRunningTime="2026-01-31 09:27:40.876196768 +0000 UTC m=+1605.369559220" watchObservedRunningTime="2026-01-31 09:27:40.882915659 +0000 UTC m=+1605.376278111"
Jan 31 09:27:40 crc kubenswrapper[4830]: I0131 09:27:40.887031 4830 scope.go:117] "RemoveContainer" containerID="b75d698ec4b93946631a20cdfcf2d09785fe647703b03030ae8d8a4d86af6798"
Jan 31 09:27:40 crc kubenswrapper[4830]: I0131 09:27:40.922598 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00eec1e5-054c-4c87-ad69-ce449d1aa577-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00eec1e5-054c-4c87-ad69-ce449d1aa577" (UID: "00eec1e5-054c-4c87-ad69-ce449d1aa577"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:27:40 crc kubenswrapper[4830]: I0131 09:27:40.956845 4830 scope.go:117] "RemoveContainer" containerID="80462b6937d6037ddcd0f0515f17adf59c3c73e242fd57b73e5fe1d0ad4147a9"
Jan 31 09:27:40 crc kubenswrapper[4830]: I0131 09:27:40.960943 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00eec1e5-054c-4c87-ad69-ce449d1aa577-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 09:27:40 crc kubenswrapper[4830]: I0131 09:27:40.960973 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00eec1e5-054c-4c87-ad69-ce449d1aa577-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 09:27:40 crc kubenswrapper[4830]: I0131 09:27:40.960986 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pmvd\" (UniqueName: \"kubernetes.io/projected/00eec1e5-054c-4c87-ad69-ce449d1aa577-kube-api-access-2pmvd\") on node \"crc\" DevicePath \"\""
Jan 31 09:27:41 crc kubenswrapper[4830]: I0131 09:27:41.148904 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z7qcf"
Jan 31 09:27:41 crc kubenswrapper[4830]: I0131 09:27:41.148966 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z7qcf"
Jan 31 09:27:41 crc kubenswrapper[4830]: I0131 09:27:41.199641 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xpkzh"]
Jan 31 09:27:41 crc kubenswrapper[4830]: I0131 09:27:41.221483 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xpkzh"]
Jan 31 09:27:41 crc kubenswrapper[4830]: I0131 09:27:41.254523 4830 scope.go:117] "RemoveContainer" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0"
Jan 31 09:27:41 crc kubenswrapper[4830]: E0131 09:27:41.255388 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:27:42 crc kubenswrapper[4830]: I0131 09:27:42.207477 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-z7qcf" podUID="01a16d5c-bea7-4cab-8c88-206e4c5c901d" containerName="registry-server" probeResult="failure" output=<
Jan 31 09:27:42 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s
Jan 31 09:27:42 crc kubenswrapper[4830]: >
Jan 31 09:27:42 crc kubenswrapper[4830]: I0131 09:27:42.269406 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00eec1e5-054c-4c87-ad69-ce449d1aa577" path="/var/lib/kubelet/pods/00eec1e5-054c-4c87-ad69-ce449d1aa577/volumes"
Jan 31 09:27:45 crc kubenswrapper[4830]: I0131 09:27:45.938717 4830 generic.go:334] "Generic (PLEG): container finished" podID="f854d4ac-83f5-411d-a3f0-67a0b771b474" containerID="1c3087de53b5cdfe65bc7c2cefe319da0b574ee0938861f2361426e1b163eed1" exitCode=0
Jan 31 09:27:45 crc kubenswrapper[4830]: I0131 09:27:45.938986 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-5j7sd" event={"ID":"f854d4ac-83f5-411d-a3f0-67a0b771b474","Type":"ContainerDied","Data":"1c3087de53b5cdfe65bc7c2cefe319da0b574ee0938861f2361426e1b163eed1"}
Jan 31 09:27:47 crc kubenswrapper[4830]: I0131 09:27:47.658409 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-5j7sd"
Jan 31 09:27:47 crc kubenswrapper[4830]: I0131 09:27:47.767778 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkpds\" (UniqueName: \"kubernetes.io/projected/f854d4ac-83f5-411d-a3f0-67a0b771b474-kube-api-access-qkpds\") pod \"f854d4ac-83f5-411d-a3f0-67a0b771b474\" (UID: \"f854d4ac-83f5-411d-a3f0-67a0b771b474\") "
Jan 31 09:27:47 crc kubenswrapper[4830]: I0131 09:27:47.768004 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f854d4ac-83f5-411d-a3f0-67a0b771b474-combined-ca-bundle\") pod \"f854d4ac-83f5-411d-a3f0-67a0b771b474\" (UID: \"f854d4ac-83f5-411d-a3f0-67a0b771b474\") "
Jan 31 09:27:47 crc kubenswrapper[4830]: I0131 09:27:47.768053 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f854d4ac-83f5-411d-a3f0-67a0b771b474-config-data\") pod \"f854d4ac-83f5-411d-a3f0-67a0b771b474\" (UID: \"f854d4ac-83f5-411d-a3f0-67a0b771b474\") "
Jan 31 09:27:47 crc kubenswrapper[4830]: I0131 09:27:47.768254 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f854d4ac-83f5-411d-a3f0-67a0b771b474-scripts\") pod \"f854d4ac-83f5-411d-a3f0-67a0b771b474\" (UID: \"f854d4ac-83f5-411d-a3f0-67a0b771b474\") "
Jan 31 09:27:47 crc kubenswrapper[4830]: I0131 09:27:47.777137 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f854d4ac-83f5-411d-a3f0-67a0b771b474-scripts" (OuterVolumeSpecName: "scripts") pod "f854d4ac-83f5-411d-a3f0-67a0b771b474" (UID: "f854d4ac-83f5-411d-a3f0-67a0b771b474"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:27:47 crc kubenswrapper[4830]: I0131 09:27:47.777284 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f854d4ac-83f5-411d-a3f0-67a0b771b474-kube-api-access-qkpds" (OuterVolumeSpecName: "kube-api-access-qkpds") pod "f854d4ac-83f5-411d-a3f0-67a0b771b474" (UID: "f854d4ac-83f5-411d-a3f0-67a0b771b474"). InnerVolumeSpecName "kube-api-access-qkpds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:27:47 crc kubenswrapper[4830]: I0131 09:27:47.809748 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f854d4ac-83f5-411d-a3f0-67a0b771b474-config-data" (OuterVolumeSpecName: "config-data") pod "f854d4ac-83f5-411d-a3f0-67a0b771b474" (UID: "f854d4ac-83f5-411d-a3f0-67a0b771b474"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:27:47 crc kubenswrapper[4830]: I0131 09:27:47.813329 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f854d4ac-83f5-411d-a3f0-67a0b771b474-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f854d4ac-83f5-411d-a3f0-67a0b771b474" (UID: "f854d4ac-83f5-411d-a3f0-67a0b771b474"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:27:47 crc kubenswrapper[4830]: I0131 09:27:47.874220 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f854d4ac-83f5-411d-a3f0-67a0b771b474-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 09:27:47 crc kubenswrapper[4830]: I0131 09:27:47.874263 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkpds\" (UniqueName: \"kubernetes.io/projected/f854d4ac-83f5-411d-a3f0-67a0b771b474-kube-api-access-qkpds\") on node \"crc\" DevicePath \"\""
Jan 31 09:27:47 crc kubenswrapper[4830]: I0131 09:27:47.874283 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f854d4ac-83f5-411d-a3f0-67a0b771b474-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 09:27:47 crc kubenswrapper[4830]: I0131 09:27:47.874297 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f854d4ac-83f5-411d-a3f0-67a0b771b474-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 09:27:47 crc kubenswrapper[4830]: I0131 09:27:47.967340 4830 generic.go:334] "Generic (PLEG): container finished" podID="09cd85b5-2912-444f-89ae-06d177587496" containerID="751cc74ec4161c69d0d316b0488fea1b14c595ce785d0f45db9c36f7dbe1b8fd" exitCode=137
Jan 31 09:27:47 crc kubenswrapper[4830]: I0131 09:27:47.967470 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"09cd85b5-2912-444f-89ae-06d177587496","Type":"ContainerDied","Data":"751cc74ec4161c69d0d316b0488fea1b14c595ce785d0f45db9c36f7dbe1b8fd"}
Jan 31 09:27:47 crc kubenswrapper[4830]: I0131 09:27:47.970585 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-5j7sd" event={"ID":"f854d4ac-83f5-411d-a3f0-67a0b771b474","Type":"ContainerDied","Data":"3c337c7aea9d18ea7549a041d2cd4d0d53535081d4f8298a88e7b3b4c0c7e31b"}
Jan 31 09:27:47 crc kubenswrapper[4830]: I0131 09:27:47.970615 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c337c7aea9d18ea7549a041d2cd4d0d53535081d4f8298a88e7b3b4c0c7e31b"
Jan 31 09:27:47 crc kubenswrapper[4830]: I0131 09:27:47.970877 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-5j7sd"
Jan 31 09:27:47 crc kubenswrapper[4830]: I0131 09:27:47.974841 4830 generic.go:334] "Generic (PLEG): container finished" podID="bf7ad62e-1ba6-47a8-a397-3f078d8291d4" containerID="5323de851424e5d94a8c2eeab1d38fae1563ff3fcdcde59f95c513bc6359a6f4" exitCode=137
Jan 31 09:27:47 crc kubenswrapper[4830]: I0131 09:27:47.974916 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf7ad62e-1ba6-47a8-a397-3f078d8291d4","Type":"ContainerDied","Data":"5323de851424e5d94a8c2eeab1d38fae1563ff3fcdcde59f95c513bc6359a6f4"}
Jan 31 09:27:47 crc kubenswrapper[4830]: I0131 09:27:47.979766 4830 generic.go:334] "Generic (PLEG): container finished" podID="1e035fc4-d1e4-4716-ab3c-432991bca55e" containerID="05e42f2149384a6274173a7f5b605663add224965beeb44737ea148514066aff" exitCode=137
Jan 31 09:27:47 crc kubenswrapper[4830]: I0131 09:27:47.979767 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1e035fc4-d1e4-4716-ab3c-432991bca55e","Type":"ContainerDied","Data":"05e42f2149384a6274173a7f5b605663add224965beeb44737ea148514066aff"}
Jan 31 09:27:47 crc kubenswrapper[4830]: I0131 09:27:47.982783 4830 generic.go:334] "Generic (PLEG): container finished" podID="81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5" containerID="9f918a294f70d26695851495e2acbb7eb081d0c0742cc06769aac18433e60ae2" exitCode=137
Jan 31 09:27:47 crc kubenswrapper[4830]: I0131 09:27:47.982815 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5","Type":"ContainerDied","Data":"9f918a294f70d26695851495e2acbb7eb081d0c0742cc06769aac18433e60ae2"}
Jan 31 09:27:47 crc kubenswrapper[4830]: I0131 09:27:47.996175 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 31 09:27:48 crc kubenswrapper[4830]: I0131 09:27:48.591523 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 31 09:27:48 crc kubenswrapper[4830]: I0131 09:27:48.698886 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e035fc4-d1e4-4716-ab3c-432991bca55e-logs\") pod \"1e035fc4-d1e4-4716-ab3c-432991bca55e\" (UID: \"1e035fc4-d1e4-4716-ab3c-432991bca55e\") "
Jan 31 09:27:48 crc kubenswrapper[4830]: I0131 09:27:48.699261 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e035fc4-d1e4-4716-ab3c-432991bca55e-logs" (OuterVolumeSpecName: "logs") pod "1e035fc4-d1e4-4716-ab3c-432991bca55e" (UID: "1e035fc4-d1e4-4716-ab3c-432991bca55e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:27:48 crc kubenswrapper[4830]: I0131 09:27:48.699354 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e035fc4-d1e4-4716-ab3c-432991bca55e-config-data\") pod \"1e035fc4-d1e4-4716-ab3c-432991bca55e\" (UID: \"1e035fc4-d1e4-4716-ab3c-432991bca55e\") "
Jan 31 09:27:48 crc kubenswrapper[4830]: I0131 09:27:48.699401 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f4s8\" (UniqueName: \"kubernetes.io/projected/1e035fc4-d1e4-4716-ab3c-432991bca55e-kube-api-access-5f4s8\") pod \"1e035fc4-d1e4-4716-ab3c-432991bca55e\" (UID: \"1e035fc4-d1e4-4716-ab3c-432991bca55e\") "
Jan 31 09:27:48 crc kubenswrapper[4830]: I0131 09:27:48.699443 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e035fc4-d1e4-4716-ab3c-432991bca55e-combined-ca-bundle\") pod \"1e035fc4-d1e4-4716-ab3c-432991bca55e\" (UID: \"1e035fc4-d1e4-4716-ab3c-432991bca55e\") "
Jan 31 09:27:48 crc kubenswrapper[4830]: I0131 09:27:48.700680 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e035fc4-d1e4-4716-ab3c-432991bca55e-logs\") on node \"crc\" DevicePath \"\""
Jan 31 09:27:48 crc kubenswrapper[4830]: I0131 09:27:48.706404 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e035fc4-d1e4-4716-ab3c-432991bca55e-kube-api-access-5f4s8" (OuterVolumeSpecName: "kube-api-access-5f4s8") pod "1e035fc4-d1e4-4716-ab3c-432991bca55e" (UID: "1e035fc4-d1e4-4716-ab3c-432991bca55e"). InnerVolumeSpecName "kube-api-access-5f4s8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:27:48 crc kubenswrapper[4830]: I0131 09:27:48.773113 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e035fc4-d1e4-4716-ab3c-432991bca55e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e035fc4-d1e4-4716-ab3c-432991bca55e" (UID: "1e035fc4-d1e4-4716-ab3c-432991bca55e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:27:48 crc kubenswrapper[4830]: I0131 09:27:48.773180 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e035fc4-d1e4-4716-ab3c-432991bca55e-config-data" (OuterVolumeSpecName: "config-data") pod "1e035fc4-d1e4-4716-ab3c-432991bca55e" (UID: "1e035fc4-d1e4-4716-ab3c-432991bca55e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:27:48 crc kubenswrapper[4830]: I0131 09:27:48.784663 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 31 09:27:48 crc kubenswrapper[4830]: I0131 09:27:48.805479 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e035fc4-d1e4-4716-ab3c-432991bca55e-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 09:27:48 crc kubenswrapper[4830]: I0131 09:27:48.805509 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f4s8\" (UniqueName: \"kubernetes.io/projected/1e035fc4-d1e4-4716-ab3c-432991bca55e-kube-api-access-5f4s8\") on node \"crc\" DevicePath \"\""
Jan 31 09:27:48 crc kubenswrapper[4830]: I0131 09:27:48.805535 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e035fc4-d1e4-4716-ab3c-432991bca55e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 09:27:48 crc kubenswrapper[4830]: I0131 09:27:48.907035 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf7ad62e-1ba6-47a8-a397-3f078d8291d4-config-data\") pod \"bf7ad62e-1ba6-47a8-a397-3f078d8291d4\" (UID: \"bf7ad62e-1ba6-47a8-a397-3f078d8291d4\") "
Jan 31 09:27:48 crc kubenswrapper[4830]: I0131 09:27:48.907568 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhmz7\" (UniqueName: \"kubernetes.io/projected/bf7ad62e-1ba6-47a8-a397-3f078d8291d4-kube-api-access-fhmz7\") pod \"bf7ad62e-1ba6-47a8-a397-3f078d8291d4\" (UID: \"bf7ad62e-1ba6-47a8-a397-3f078d8291d4\") "
Jan 31 09:27:48 crc kubenswrapper[4830]: I0131 09:27:48.907761 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf7ad62e-1ba6-47a8-a397-3f078d8291d4-logs\") pod \"bf7ad62e-1ba6-47a8-a397-3f078d8291d4\" (UID: \"bf7ad62e-1ba6-47a8-a397-3f078d8291d4\") "
Jan 31 09:27:48 crc kubenswrapper[4830]: I0131 09:27:48.907952 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf7ad62e-1ba6-47a8-a397-3f078d8291d4-combined-ca-bundle\") pod \"bf7ad62e-1ba6-47a8-a397-3f078d8291d4\" (UID: \"bf7ad62e-1ba6-47a8-a397-3f078d8291d4\") "
Jan 31 09:27:48 crc kubenswrapper[4830]: I0131 09:27:48.909026 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf7ad62e-1ba6-47a8-a397-3f078d8291d4-logs" (OuterVolumeSpecName: "logs") pod "bf7ad62e-1ba6-47a8-a397-3f078d8291d4" (UID: "bf7ad62e-1ba6-47a8-a397-3f078d8291d4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:27:48 crc kubenswrapper[4830]: I0131 09:27:48.915061 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf7ad62e-1ba6-47a8-a397-3f078d8291d4-kube-api-access-fhmz7" (OuterVolumeSpecName: "kube-api-access-fhmz7") pod "bf7ad62e-1ba6-47a8-a397-3f078d8291d4" (UID: "bf7ad62e-1ba6-47a8-a397-3f078d8291d4"). InnerVolumeSpecName "kube-api-access-fhmz7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:27:48 crc kubenswrapper[4830]: I0131 09:27:48.954705 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf7ad62e-1ba6-47a8-a397-3f078d8291d4-config-data" (OuterVolumeSpecName: "config-data") pod "bf7ad62e-1ba6-47a8-a397-3f078d8291d4" (UID: "bf7ad62e-1ba6-47a8-a397-3f078d8291d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:27:48 crc kubenswrapper[4830]: I0131 09:27:48.964546 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf7ad62e-1ba6-47a8-a397-3f078d8291d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf7ad62e-1ba6-47a8-a397-3f078d8291d4" (UID: "bf7ad62e-1ba6-47a8-a397-3f078d8291d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:27:48 crc kubenswrapper[4830]: I0131 09:27:48.982459 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.015908 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf7ad62e-1ba6-47a8-a397-3f078d8291d4-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.015961 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhmz7\" (UniqueName: \"kubernetes.io/projected/bf7ad62e-1ba6-47a8-a397-3f078d8291d4-kube-api-access-fhmz7\") on node \"crc\" DevicePath \"\""
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.015981 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf7ad62e-1ba6-47a8-a397-3f078d8291d4-logs\") on node \"crc\" DevicePath \"\""
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.015993 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf7ad62e-1ba6-47a8-a397-3f078d8291d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.068919 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"09cd85b5-2912-444f-89ae-06d177587496","Type":"ContainerDied","Data":"0006041b3c2c439b29ab50c71c99fd019c17e98b3cfef9a71628f3323c534f83"}
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.068995 4830 scope.go:117] "RemoveContainer" containerID="751cc74ec4161c69d0d316b0488fea1b14c595ce785d0f45db9c36f7dbe1b8fd"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.069214 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.072416 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.079840 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.079887 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf7ad62e-1ba6-47a8-a397-3f078d8291d4","Type":"ContainerDied","Data":"db2ccfd019a3f1614af2af67c698a404ca641f8a5a337ba6e9a012e3f3d72c77"}
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.090114 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1e035fc4-d1e4-4716-ab3c-432991bca55e","Type":"ContainerDied","Data":"0f07d675a6a751b01ff3ff542ae40338072f61c5596abd08371f4bd8f4924500"}
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.090621 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.117445 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46qgq\" (UniqueName: \"kubernetes.io/projected/09cd85b5-2912-444f-89ae-06d177587496-kube-api-access-46qgq\") pod \"09cd85b5-2912-444f-89ae-06d177587496\" (UID: \"09cd85b5-2912-444f-89ae-06d177587496\") "
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.117545 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09cd85b5-2912-444f-89ae-06d177587496-config-data\") pod \"09cd85b5-2912-444f-89ae-06d177587496\" (UID: \"09cd85b5-2912-444f-89ae-06d177587496\") "
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.118039 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09cd85b5-2912-444f-89ae-06d177587496-combined-ca-bundle\") pod \"09cd85b5-2912-444f-89ae-06d177587496\" (UID: \"09cd85b5-2912-444f-89ae-06d177587496\") "
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.124141 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cd85b5-2912-444f-89ae-06d177587496-kube-api-access-46qgq" (OuterVolumeSpecName: "kube-api-access-46qgq") pod "09cd85b5-2912-444f-89ae-06d177587496" (UID: "09cd85b5-2912-444f-89ae-06d177587496"). InnerVolumeSpecName "kube-api-access-46qgq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.156696 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cd85b5-2912-444f-89ae-06d177587496-config-data" (OuterVolumeSpecName: "config-data") pod "09cd85b5-2912-444f-89ae-06d177587496" (UID: "09cd85b5-2912-444f-89ae-06d177587496"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.158592 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cd85b5-2912-444f-89ae-06d177587496-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09cd85b5-2912-444f-89ae-06d177587496" (UID: "09cd85b5-2912-444f-89ae-06d177587496"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.222380 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g52ng\" (UniqueName: \"kubernetes.io/projected/81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5-kube-api-access-g52ng\") pod \"81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5\" (UID: \"81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5\") "
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.223165 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5-combined-ca-bundle\") pod \"81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5\" (UID: \"81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5\") "
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.223419 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5-config-data\") pod \"81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5\" (UID: \"81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5\") "
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.224293 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09cd85b5-2912-444f-89ae-06d177587496-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.224413 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09cd85b5-2912-444f-89ae-06d177587496-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.224482 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46qgq\" (UniqueName: \"kubernetes.io/projected/09cd85b5-2912-444f-89ae-06d177587496-kube-api-access-46qgq\") on node \"crc\" DevicePath \"\""
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.227185 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5-kube-api-access-g52ng" (OuterVolumeSpecName: "kube-api-access-g52ng") pod "81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5" (UID: "81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5"). InnerVolumeSpecName "kube-api-access-g52ng". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.272006 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5-config-data" (OuterVolumeSpecName: "config-data") pod "81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5" (UID: "81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.276522 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5" (UID: "81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.329210 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.329281 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.329295 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g52ng\" (UniqueName: \"kubernetes.io/projected/81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5-kube-api-access-g52ng\") on node \"crc\" DevicePath \"\""
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.408502 4830 scope.go:117] "RemoveContainer" containerID="5323de851424e5d94a8c2eeab1d38fae1563ff3fcdcde59f95c513bc6359a6f4"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.412883 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.434266 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.463891 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.469398 4830 scope.go:117] "RemoveContainer" containerID="8934f85d83e771f5120d5ce2dbdc20ee0153da86015ba6227acd3e1364f909d7"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.542212 4830 scope.go:117] "RemoveContainer" containerID="05e42f2149384a6274173a7f5b605663add224965beeb44737ea148514066aff"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.542470 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.568441 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 31 09:27:49 crc kubenswrapper[4830]: E0131 09:27:49.569315 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09cd85b5-2912-444f-89ae-06d177587496" containerName="nova-cell1-novncproxy-novncproxy"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.569343 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="09cd85b5-2912-444f-89ae-06d177587496" containerName="nova-cell1-novncproxy-novncproxy"
Jan 31 09:27:49 crc kubenswrapper[4830]: E0131 09:27:49.569382 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf7ad62e-1ba6-47a8-a397-3f078d8291d4" containerName="nova-metadata-metadata"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.569388 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf7ad62e-1ba6-47a8-a397-3f078d8291d4" containerName="nova-metadata-metadata"
Jan 31 09:27:49 crc kubenswrapper[4830]: E0131 09:27:49.569406 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00eec1e5-054c-4c87-ad69-ce449d1aa577" containerName="extract-utilities"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.569413 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="00eec1e5-054c-4c87-ad69-ce449d1aa577" containerName="extract-utilities"
Jan 31 09:27:49 crc kubenswrapper[4830]: E0131 09:27:49.569433 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f854d4ac-83f5-411d-a3f0-67a0b771b474" containerName="aodh-db-sync"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.569439 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f854d4ac-83f5-411d-a3f0-67a0b771b474" containerName="aodh-db-sync"
Jan 31 09:27:49 crc kubenswrapper[4830]: E0131 09:27:49.569446 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf7ad62e-1ba6-47a8-a397-3f078d8291d4" containerName="nova-metadata-log"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.569454 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf7ad62e-1ba6-47a8-a397-3f078d8291d4" containerName="nova-metadata-log"
Jan 31 09:27:49 crc kubenswrapper[4830]: E0131 09:27:49.569466 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e035fc4-d1e4-4716-ab3c-432991bca55e" containerName="nova-api-log"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.569475 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e035fc4-d1e4-4716-ab3c-432991bca55e" containerName="nova-api-log"
Jan 31 09:27:49 crc kubenswrapper[4830]: E0131 09:27:49.569485 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00eec1e5-054c-4c87-ad69-ce449d1aa577" containerName="registry-server"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.569494 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="00eec1e5-054c-4c87-ad69-ce449d1aa577" containerName="registry-server"
Jan 31 09:27:49 crc kubenswrapper[4830]: E0131 09:27:49.569508 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00eec1e5-054c-4c87-ad69-ce449d1aa577" containerName="extract-content"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.569514 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="00eec1e5-054c-4c87-ad69-ce449d1aa577" containerName="extract-content"
Jan 31 09:27:49 crc kubenswrapper[4830]: E0131 09:27:49.569523 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e035fc4-d1e4-4716-ab3c-432991bca55e" containerName="nova-api-api"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.569529 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e035fc4-d1e4-4716-ab3c-432991bca55e" containerName="nova-api-api"
Jan 31 09:27:49 crc kubenswrapper[4830]: E0131 09:27:49.569540 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5" containerName="nova-scheduler-scheduler"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.569547 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5" containerName="nova-scheduler-scheduler"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.569840 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf7ad62e-1ba6-47a8-a397-3f078d8291d4" containerName="nova-metadata-metadata"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.569863 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="00eec1e5-054c-4c87-ad69-ce449d1aa577" containerName="registry-server"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.569873 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf7ad62e-1ba6-47a8-a397-3f078d8291d4" containerName="nova-metadata-log"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.569886 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="09cd85b5-2912-444f-89ae-06d177587496" containerName="nova-cell1-novncproxy-novncproxy"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.569895 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f854d4ac-83f5-411d-a3f0-67a0b771b474" containerName="aodh-db-sync"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.569906 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e035fc4-d1e4-4716-ab3c-432991bca55e" containerName="nova-api-api"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.569920 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5" containerName="nova-scheduler-scheduler"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.569932 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e035fc4-d1e4-4716-ab3c-432991bca55e" containerName="nova-api-log"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.571900 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.575074 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.575114 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.584506 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.599005 4830 scope.go:117] "RemoveContainer" containerID="fb620f43abe52b50211c6da287613be27f785ed7f330ce7dc9af4b71e8678607"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.612639 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.630698 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.639939 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa9e4ee2-a265-4397-a7ff-42e9ab868237-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aa9e4ee2-a265-4397-a7ff-42e9ab868237\") " pod="openstack/nova-metadata-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.640092 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa9e4ee2-a265-4397-a7ff-42e9ab868237-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aa9e4ee2-a265-4397-a7ff-42e9ab868237\") " pod="openstack/nova-metadata-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.640219 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw2z5\" (UniqueName: \"kubernetes.io/projected/aa9e4ee2-a265-4397-a7ff-42e9ab868237-kube-api-access-sw2z5\") pod \"nova-metadata-0\" (UID: \"aa9e4ee2-a265-4397-a7ff-42e9ab868237\") " pod="openstack/nova-metadata-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.640278 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa9e4ee2-a265-4397-a7ff-42e9ab868237-config-data\") pod \"nova-metadata-0\" (UID: \"aa9e4ee2-a265-4397-a7ff-42e9ab868237\") " pod="openstack/nova-metadata-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.640420 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa9e4ee2-a265-4397-a7ff-42e9ab868237-logs\") pod \"nova-metadata-0\" (UID: \"aa9e4ee2-a265-4397-a7ff-42e9ab868237\") " pod="openstack/nova-metadata-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.645827 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.650307 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.653396 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.672749 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.675232 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.678273 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.678577 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.678762 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.686954 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.703983 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.745683 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc-config-data\") pod \"nova-api-0\" (UID: \"8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc\") " pod="openstack/nova-api-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.745860 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa9e4ee2-a265-4397-a7ff-42e9ab868237-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aa9e4ee2-a265-4397-a7ff-42e9ab868237\") " pod="openstack/nova-metadata-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.745998 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1290216-3656-4402-94a5-44d1fde53083-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1290216-3656-4402-94a5-44d1fde53083\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.746194 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw2z5\" (UniqueName: \"kubernetes.io/projected/aa9e4ee2-a265-4397-a7ff-42e9ab868237-kube-api-access-sw2z5\") pod \"nova-metadata-0\" (UID: \"aa9e4ee2-a265-4397-a7ff-42e9ab868237\") " pod="openstack/nova-metadata-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.746337 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa9e4ee2-a265-4397-a7ff-42e9ab868237-config-data\") pod \"nova-metadata-0\" (UID: \"aa9e4ee2-a265-4397-a7ff-42e9ab868237\") " pod="openstack/nova-metadata-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.746480 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8nps\" (UniqueName: \"kubernetes.io/projected/8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc-kube-api-access-s8nps\") pod \"nova-api-0\" (UID: \"8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc\") " pod="openstack/nova-api-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.746554 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mvvw\" (UniqueName: \"kubernetes.io/projected/d1290216-3656-4402-94a5-44d1fde53083-kube-api-access-9mvvw\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1290216-3656-4402-94a5-44d1fde53083\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.746649 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa9e4ee2-a265-4397-a7ff-42e9ab868237-logs\") pod \"nova-metadata-0\" (UID: \"aa9e4ee2-a265-4397-a7ff-42e9ab868237\") " pod="openstack/nova-metadata-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.747201 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc\") " pod="openstack/nova-api-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.747250 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa9e4ee2-a265-4397-a7ff-42e9ab868237-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aa9e4ee2-a265-4397-a7ff-42e9ab868237\") " pod="openstack/nova-metadata-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.747291 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1290216-3656-4402-94a5-44d1fde53083-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1290216-3656-4402-94a5-44d1fde53083\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.747340 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc-logs\") pod \"nova-api-0\" (UID: \"8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc\") " pod="openstack/nova-api-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.747389 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1290216-3656-4402-94a5-44d1fde53083-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1290216-3656-4402-94a5-44d1fde53083\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.747466 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa9e4ee2-a265-4397-a7ff-42e9ab868237-logs\") pod \"nova-metadata-0\" (UID: \"aa9e4ee2-a265-4397-a7ff-42e9ab868237\") " pod="openstack/nova-metadata-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.747479 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1290216-3656-4402-94a5-44d1fde53083-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1290216-3656-4402-94a5-44d1fde53083\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.751146 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa9e4ee2-a265-4397-a7ff-42e9ab868237-config-data\") pod \"nova-metadata-0\" (UID: \"aa9e4ee2-a265-4397-a7ff-42e9ab868237\") " pod="openstack/nova-metadata-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.752096 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa9e4ee2-a265-4397-a7ff-42e9ab868237-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aa9e4ee2-a265-4397-a7ff-42e9ab868237\") " pod="openstack/nova-metadata-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.753455 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa9e4ee2-a265-4397-a7ff-42e9ab868237-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aa9e4ee2-a265-4397-a7ff-42e9ab868237\") " pod="openstack/nova-metadata-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.770593 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw2z5\" (UniqueName: \"kubernetes.io/projected/aa9e4ee2-a265-4397-a7ff-42e9ab868237-kube-api-access-sw2z5\") pod \"nova-metadata-0\" (UID: \"aa9e4ee2-a265-4397-a7ff-42e9ab868237\") " pod="openstack/nova-metadata-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.850517 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1290216-3656-4402-94a5-44d1fde53083-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1290216-3656-4402-94a5-44d1fde53083\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.850587 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1290216-3656-4402-94a5-44d1fde53083-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1290216-3656-4402-94a5-44d1fde53083\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.850648 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc-config-data\") pod \"nova-api-0\" (UID: \"8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc\") " pod="openstack/nova-api-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.850701 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1290216-3656-4402-94a5-44d1fde53083-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1290216-3656-4402-94a5-44d1fde53083\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.850849 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8nps\" (UniqueName: \"kubernetes.io/projected/8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc-kube-api-access-s8nps\") pod \"nova-api-0\" (UID: \"8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc\") " pod="openstack/nova-api-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.850882 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mvvw\" (UniqueName: \"kubernetes.io/projected/d1290216-3656-4402-94a5-44d1fde53083-kube-api-access-9mvvw\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1290216-3656-4402-94a5-44d1fde53083\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.850942 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc\") " pod="openstack/nova-api-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.850971 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1290216-3656-4402-94a5-44d1fde53083-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1290216-3656-4402-94a5-44d1fde53083\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.851001 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc-logs\") pod \"nova-api-0\" (UID: \"8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc\") " pod="openstack/nova-api-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.851394 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc-logs\") pod \"nova-api-0\" (UID: \"8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc\") " pod="openstack/nova-api-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.854937 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1290216-3656-4402-94a5-44d1fde53083-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1290216-3656-4402-94a5-44d1fde53083\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.856313 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc\") " pod="openstack/nova-api-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.857176 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1290216-3656-4402-94a5-44d1fde53083-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1290216-3656-4402-94a5-44d1fde53083\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.858352 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1290216-3656-4402-94a5-44d1fde53083-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1290216-3656-4402-94a5-44d1fde53083\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.860175 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc-config-data\") pod \"nova-api-0\" (UID: \"8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc\") " pod="openstack/nova-api-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.869949 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1290216-3656-4402-94a5-44d1fde53083-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1290216-3656-4402-94a5-44d1fde53083\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.873693 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8nps\" (UniqueName: \"kubernetes.io/projected/8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc-kube-api-access-s8nps\") pod \"nova-api-0\" (UID: \"8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc\") " pod="openstack/nova-api-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.890824 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mvvw\" (UniqueName: \"kubernetes.io/projected/d1290216-3656-4402-94a5-44d1fde53083-kube-api-access-9mvvw\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1290216-3656-4402-94a5-44d1fde53083\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.906875 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 31 09:27:49 crc kubenswrapper[4830]: I0131 09:27:49.973936 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 31 09:27:50 crc kubenswrapper[4830]: I0131 09:27:50.006641 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 31 09:27:50 crc kubenswrapper[4830]: I0131 09:27:50.184057 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5","Type":"ContainerDied","Data":"8b2f88cce18a06397ce89bc4b1cdfd75501e183d7f30a5a02cba0a31520f30a7"}
Jan 31 09:27:50 crc kubenswrapper[4830]: I0131 09:27:50.184735 4830 scope.go:117] "RemoveContainer" containerID="9f918a294f70d26695851495e2acbb7eb081d0c0742cc06769aac18433e60ae2"
Jan 31 09:27:50 crc kubenswrapper[4830]: I0131 09:27:50.184207 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 31 09:27:50 crc kubenswrapper[4830]: I0131 09:27:50.278759 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cd85b5-2912-444f-89ae-06d177587496" path="/var/lib/kubelet/pods/09cd85b5-2912-444f-89ae-06d177587496/volumes"
Jan 31 09:27:50 crc kubenswrapper[4830]: I0131 09:27:50.279963 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e035fc4-d1e4-4716-ab3c-432991bca55e" path="/var/lib/kubelet/pods/1e035fc4-d1e4-4716-ab3c-432991bca55e/volumes"
Jan 31 09:27:50 crc kubenswrapper[4830]: I0131 09:27:50.281029 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf7ad62e-1ba6-47a8-a397-3f078d8291d4" path="/var/lib/kubelet/pods/bf7ad62e-1ba6-47a8-a397-3f078d8291d4/volumes"
Jan 31 09:27:50 crc kubenswrapper[4830]: I0131 09:27:50.283566 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 31 09:27:50 crc kubenswrapper[4830]: I0131 09:27:50.298690 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 31 09:27:50 crc kubenswrapper[4830]: I0131 09:27:50.320854 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 31 09:27:50 crc kubenswrapper[4830]: I0131 09:27:50.323204 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 31 09:27:50 crc kubenswrapper[4830]: I0131 09:27:50.327638 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 31 09:27:50 crc kubenswrapper[4830]: I0131 09:27:50.334408 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 31 09:27:50 crc kubenswrapper[4830]: I0131 09:27:50.416045 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 31 09:27:50 crc kubenswrapper[4830]: W0131 09:27:50.420830 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa9e4ee2_a265_4397_a7ff_42e9ab868237.slice/crio-90a1fe2b8ece85c6268eb7faaac51353aa1947390daa0048f37c02ff3260c950 WatchSource:0}: Error finding container 90a1fe2b8ece85c6268eb7faaac51353aa1947390daa0048f37c02ff3260c950: Status 404 returned error can't find the container with id 90a1fe2b8ece85c6268eb7faaac51353aa1947390daa0048f37c02ff3260c950
Jan 31 09:27:50 crc kubenswrapper[4830]: I0131 09:27:50.477557 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrkv7\" (UniqueName: \"kubernetes.io/projected/deed2020-a242-40ee-af68-bb3a30f6acf3-kube-api-access-rrkv7\") pod \"nova-scheduler-0\" (UID: \"deed2020-a242-40ee-af68-bb3a30f6acf3\") " pod="openstack/nova-scheduler-0"
Jan 31 09:27:50 crc kubenswrapper[4830]: I0131 09:27:50.477620 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deed2020-a242-40ee-af68-bb3a30f6acf3-config-data\") pod \"nova-scheduler-0\" (UID: \"deed2020-a242-40ee-af68-bb3a30f6acf3\") " pod="openstack/nova-scheduler-0"
Jan 31 09:27:50 crc kubenswrapper[4830]: I0131 09:27:50.477786 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deed2020-a242-40ee-af68-bb3a30f6acf3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"deed2020-a242-40ee-af68-bb3a30f6acf3\") " pod="openstack/nova-scheduler-0"
Jan 31 09:27:50 crc kubenswrapper[4830]: I0131 09:27:50.580495 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrkv7\" (UniqueName: \"kubernetes.io/projected/deed2020-a242-40ee-af68-bb3a30f6acf3-kube-api-access-rrkv7\") pod \"nova-scheduler-0\" (UID: \"deed2020-a242-40ee-af68-bb3a30f6acf3\") " pod="openstack/nova-scheduler-0"
Jan 31 09:27:50 crc kubenswrapper[4830]: I0131 09:27:50.580557 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deed2020-a242-40ee-af68-bb3a30f6acf3-config-data\") pod \"nova-scheduler-0\" (UID: \"deed2020-a242-40ee-af68-bb3a30f6acf3\") " pod="openstack/nova-scheduler-0"
Jan 31 09:27:50 crc kubenswrapper[4830]: I0131 09:27:50.580621 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deed2020-a242-40ee-af68-bb3a30f6acf3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"deed2020-a242-40ee-af68-bb3a30f6acf3\") " pod="openstack/nova-scheduler-0"
Jan 31 09:27:50 crc kubenswrapper[4830]: I0131 09:27:50.590047 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deed2020-a242-40ee-af68-bb3a30f6acf3-config-data\") pod \"nova-scheduler-0\" (UID: \"deed2020-a242-40ee-af68-bb3a30f6acf3\") " pod="openstack/nova-scheduler-0"
Jan 31 09:27:50 crc kubenswrapper[4830]: I0131 09:27:50.590600 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deed2020-a242-40ee-af68-bb3a30f6acf3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"deed2020-a242-40ee-af68-bb3a30f6acf3\") " pod="openstack/nova-scheduler-0"
Jan 31 09:27:50 crc kubenswrapper[4830]: I0131 09:27:50.607405 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrkv7\" (UniqueName: \"kubernetes.io/projected/deed2020-a242-40ee-af68-bb3a30f6acf3-kube-api-access-rrkv7\") pod \"nova-scheduler-0\" (UID: \"deed2020-a242-40ee-af68-bb3a30f6acf3\") " pod="openstack/nova-scheduler-0"
Jan 31 09:27:50 crc kubenswrapper[4830]: W0131 09:27:50.629278 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8dc5a196_8cf1_4387_8c40_7c9b5b1b55fc.slice/crio-946d8dc2dd07d6e050a664c06a0de75e9f2ee2bfdc22dd26e89daae8163a6932 WatchSource:0}: Error finding container 946d8dc2dd07d6e050a664c06a0de75e9f2ee2bfdc22dd26e89daae8163a6932: Status 404 returned error can't find the container with id 946d8dc2dd07d6e050a664c06a0de75e9f2ee2bfdc22dd26e89daae8163a6932
Jan 31 09:27:50 crc kubenswrapper[4830]: I0131 09:27:50.635180 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 31 09:27:50 crc kubenswrapper[4830]: I0131 09:27:50.644033 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 31 09:27:50 crc kubenswrapper[4830]: I0131 09:27:50.650303 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 31 09:27:51 crc kubenswrapper[4830]: I0131 09:27:51.205662 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d1290216-3656-4402-94a5-44d1fde53083","Type":"ContainerStarted","Data":"d41cff3a924f005e4df23f6f629f098a510f2687b3334f432f24b0c7412810d2"}
Jan 31 09:27:51 crc kubenswrapper[4830]: I0131 09:27:51.206495 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d1290216-3656-4402-94a5-44d1fde53083","Type":"ContainerStarted","Data":"37c8c3fea34a2103db4cb76097ccb726e58a332cfff3077ac3dfed941a176ae2"}
Jan 31 09:27:51 crc kubenswrapper[4830]: I0131 09:27:51.211578 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc","Type":"ContainerStarted","Data":"eb6a70f771a74535413152a30b974fbe478b4e49492acadca73167f8f1e8b78e"}
Jan 31 09:27:51 crc kubenswrapper[4830]: I0131 09:27:51.211655 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc","Type":"ContainerStarted","Data":"946d8dc2dd07d6e050a664c06a0de75e9f2ee2bfdc22dd26e89daae8163a6932"}
Jan 31 09:27:51 crc kubenswrapper[4830]: I0131 09:27:51.212781 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 31 09:27:51 crc kubenswrapper[4830]: I0131 09:27:51.216466 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa9e4ee2-a265-4397-a7ff-42e9ab868237","Type":"ContainerStarted","Data":"87f693d26ac57ee0aa982394e5dda671c8d87b55255cd5b2132721a7ad69c513"}
Jan 31 09:27:51 crc kubenswrapper[4830]: I0131 09:27:51.216537 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa9e4ee2-a265-4397-a7ff-42e9ab868237","Type":"ContainerStarted","Data":"90a1fe2b8ece85c6268eb7faaac51353aa1947390daa0048f37c02ff3260c950"}
Jan 31 09:27:51 crc kubenswrapper[4830]: I0131 09:27:51.245877 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.245851509 podStartE2EDuration="2.245851509s" podCreationTimestamp="2026-01-31 09:27:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:27:51.233586811 +0000 UTC m=+1615.726949253" watchObservedRunningTime="2026-01-31 09:27:51.245851509 +0000 UTC m=+1615.739213951"
Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.220467 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-z7qcf" podUID="01a16d5c-bea7-4cab-8c88-206e4c5c901d" containerName="registry-server" probeResult="failure" output=<
Jan 31 09:27:52 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s
Jan 31 09:27:52 crc kubenswrapper[4830]: >
Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.235954 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"deed2020-a242-40ee-af68-bb3a30f6acf3","Type":"ContainerStarted","Data":"bdaf28f5f273d04f1641c96500b95d21a4cb8cafbba934be5cac7966855ca0b0"}
Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.236030 4830
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"deed2020-a242-40ee-af68-bb3a30f6acf3","Type":"ContainerStarted","Data":"9a443ab7bb93be6acc0b28f4cd471c7faf2c5e5f99f003b1e3d17c0d5912a299"} Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.238682 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa9e4ee2-a265-4397-a7ff-42e9ab868237","Type":"ContainerStarted","Data":"017b2c6904fbdcbd8278344456c881534bb24a5ac4c8e0a05b69670b9bebf711"} Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.243236 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc","Type":"ContainerStarted","Data":"fc8daedadd2b8254e191210fb2ee51454a3361d7e82dcf3c135e0fd60fcde1f7"} Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.277438 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.277406896 podStartE2EDuration="2.277406896s" podCreationTimestamp="2026-01-31 09:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:27:52.25645393 +0000 UTC m=+1616.749816372" watchObservedRunningTime="2026-01-31 09:27:52.277406896 +0000 UTC m=+1616.770769328" Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.293188 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5" path="/var/lib/kubelet/pods/81b8f0ff-f3d6-4e28-a3f3-cd17dc8550f5/volumes" Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.300465 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.300439821 podStartE2EDuration="3.300439821s" podCreationTimestamp="2026-01-31 09:27:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:27:52.299685989 +0000 UTC m=+1616.793048421" watchObservedRunningTime="2026-01-31 09:27:52.300439821 +0000 UTC m=+1616.793802263" Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.344351 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.344325069 podStartE2EDuration="3.344325069s" podCreationTimestamp="2026-01-31 09:27:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:27:52.325476093 +0000 UTC m=+1616.818838525" watchObservedRunningTime="2026-01-31 09:27:52.344325069 +0000 UTC m=+1616.837687501" Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.529587 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.532997 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.536466 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-mz4qw" Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.536924 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.537040 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.560821 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.596479 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxv8h\" (UniqueName: \"kubernetes.io/projected/363ba132-8eb0-4c3d-b389-73ac72c26220-kube-api-access-mxv8h\") pod \"aodh-0\" (UID: \"363ba132-8eb0-4c3d-b389-73ac72c26220\") " pod="openstack/aodh-0" Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.596552 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/363ba132-8eb0-4c3d-b389-73ac72c26220-combined-ca-bundle\") pod \"aodh-0\" (UID: \"363ba132-8eb0-4c3d-b389-73ac72c26220\") " pod="openstack/aodh-0" Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.596631 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/363ba132-8eb0-4c3d-b389-73ac72c26220-config-data\") pod \"aodh-0\" (UID: \"363ba132-8eb0-4c3d-b389-73ac72c26220\") " pod="openstack/aodh-0" Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.596676 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/363ba132-8eb0-4c3d-b389-73ac72c26220-scripts\") pod \"aodh-0\" (UID: \"363ba132-8eb0-4c3d-b389-73ac72c26220\") " pod="openstack/aodh-0" Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.699566 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxv8h\" (UniqueName: \"kubernetes.io/projected/363ba132-8eb0-4c3d-b389-73ac72c26220-kube-api-access-mxv8h\") pod \"aodh-0\" (UID: \"363ba132-8eb0-4c3d-b389-73ac72c26220\") " pod="openstack/aodh-0" Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.700043 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/363ba132-8eb0-4c3d-b389-73ac72c26220-combined-ca-bundle\") pod \"aodh-0\" (UID: \"363ba132-8eb0-4c3d-b389-73ac72c26220\") " pod="openstack/aodh-0" Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.700200 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/363ba132-8eb0-4c3d-b389-73ac72c26220-config-data\") pod \"aodh-0\" (UID: \"363ba132-8eb0-4c3d-b389-73ac72c26220\") " pod="openstack/aodh-0" Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.700273 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/363ba132-8eb0-4c3d-b389-73ac72c26220-scripts\") pod \"aodh-0\" (UID: \"363ba132-8eb0-4c3d-b389-73ac72c26220\") " pod="openstack/aodh-0" Jan 31 09:27:52 crc kubenswrapper[4830]: 
I0131 09:27:52.709972 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/363ba132-8eb0-4c3d-b389-73ac72c26220-combined-ca-bundle\") pod \"aodh-0\" (UID: \"363ba132-8eb0-4c3d-b389-73ac72c26220\") " pod="openstack/aodh-0" Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.710171 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/363ba132-8eb0-4c3d-b389-73ac72c26220-config-data\") pod \"aodh-0\" (UID: \"363ba132-8eb0-4c3d-b389-73ac72c26220\") " pod="openstack/aodh-0" Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.725700 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/363ba132-8eb0-4c3d-b389-73ac72c26220-scripts\") pod \"aodh-0\" (UID: \"363ba132-8eb0-4c3d-b389-73ac72c26220\") " pod="openstack/aodh-0" Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.734817 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxv8h\" (UniqueName: \"kubernetes.io/projected/363ba132-8eb0-4c3d-b389-73ac72c26220-kube-api-access-mxv8h\") pod \"aodh-0\" (UID: \"363ba132-8eb0-4c3d-b389-73ac72c26220\") " pod="openstack/aodh-0" Jan 31 09:27:52 crc kubenswrapper[4830]: I0131 09:27:52.861709 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 31 09:27:53 crc kubenswrapper[4830]: I0131 09:27:53.523523 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 31 09:27:54 crc kubenswrapper[4830]: I0131 09:27:54.277625 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"363ba132-8eb0-4c3d-b389-73ac72c26220","Type":"ContainerStarted","Data":"cbe1a4afa716f3735ae80f9976e5951d27661260d820e2ddadd746393efab47c"} Jan 31 09:27:54 crc kubenswrapper[4830]: I0131 09:27:54.907447 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 09:27:54 crc kubenswrapper[4830]: I0131 09:27:54.908193 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 09:27:55 crc kubenswrapper[4830]: I0131 09:27:55.004277 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 31 09:27:55 crc kubenswrapper[4830]: I0131 09:27:55.252039 4830 scope.go:117] "RemoveContainer" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0" Jan 31 09:27:55 crc kubenswrapper[4830]: E0131 09:27:55.252421 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:27:55 crc kubenswrapper[4830]: I0131 09:27:55.293341 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"363ba132-8eb0-4c3d-b389-73ac72c26220","Type":"ContainerStarted","Data":"5ce9301cebb4a1abab1c58d14213878245aa5c5425548d00f93c5ea484bc291f"} Jan 31 09:27:55 crc kubenswrapper[4830]: I0131 09:27:55.645041 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 31 09:27:56 crc 
kubenswrapper[4830]: I0131 09:27:56.657069 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:27:56 crc kubenswrapper[4830]: I0131 09:27:56.657842 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ef00418a-82b1-46ac-b1af-d43bab22cdd7" containerName="ceilometer-central-agent" containerID="cri-o://93b933d398bf69ad4b58b7fdb977b74ffc92753a30cb95e9355a7916153aa722" gracePeriod=30 Jan 31 09:27:56 crc kubenswrapper[4830]: I0131 09:27:56.657889 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ef00418a-82b1-46ac-b1af-d43bab22cdd7" containerName="proxy-httpd" containerID="cri-o://077ae2e5ac08ad7f845a7b26112f7e5918825e4836931f9cd20feeac9df70d0b" gracePeriod=30 Jan 31 09:27:56 crc kubenswrapper[4830]: I0131 09:27:56.658076 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ef00418a-82b1-46ac-b1af-d43bab22cdd7" containerName="sg-core" containerID="cri-o://315ffd30559b1ed532e837143e2d92e855d451508e9d85df5ce2487b259e3029" gracePeriod=30 Jan 31 09:27:56 crc kubenswrapper[4830]: I0131 09:27:56.658163 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ef00418a-82b1-46ac-b1af-d43bab22cdd7" containerName="ceilometer-notification-agent" containerID="cri-o://c08c55d4973b1b44cefb6000659f130d7e3193eab1fd885a862b2dd81afef2ca" gracePeriod=30 Jan 31 09:27:57 crc kubenswrapper[4830]: I0131 09:27:57.330602 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"363ba132-8eb0-4c3d-b389-73ac72c26220","Type":"ContainerStarted","Data":"66dbbe851105235b2394a62f0d13e090891c227cb06e0ba9a59dad497b5f7c82"} Jan 31 09:27:57 crc kubenswrapper[4830]: I0131 09:27:57.335424 4830 generic.go:334] "Generic (PLEG): container finished" podID="ef00418a-82b1-46ac-b1af-d43bab22cdd7" containerID="077ae2e5ac08ad7f845a7b26112f7e5918825e4836931f9cd20feeac9df70d0b" exitCode=0 Jan 31 09:27:57 crc kubenswrapper[4830]: I0131 09:27:57.335468 4830 generic.go:334] "Generic (PLEG): container finished" podID="ef00418a-82b1-46ac-b1af-d43bab22cdd7" containerID="315ffd30559b1ed532e837143e2d92e855d451508e9d85df5ce2487b259e3029" exitCode=2 Jan 31 09:27:57 crc kubenswrapper[4830]: I0131 09:27:57.335479 4830 generic.go:334] "Generic (PLEG): container finished" podID="ef00418a-82b1-46ac-b1af-d43bab22cdd7" containerID="93b933d398bf69ad4b58b7fdb977b74ffc92753a30cb95e9355a7916153aa722" exitCode=0 Jan 31 09:27:57 crc kubenswrapper[4830]: I0131 09:27:57.335509 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef00418a-82b1-46ac-b1af-d43bab22cdd7","Type":"ContainerDied","Data":"077ae2e5ac08ad7f845a7b26112f7e5918825e4836931f9cd20feeac9df70d0b"} Jan 31 09:27:57 crc kubenswrapper[4830]: I0131 09:27:57.335577 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef00418a-82b1-46ac-b1af-d43bab22cdd7","Type":"ContainerDied","Data":"315ffd30559b1ed532e837143e2d92e855d451508e9d85df5ce2487b259e3029"} Jan 31 09:27:57 crc kubenswrapper[4830]: I0131 09:27:57.335608 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef00418a-82b1-46ac-b1af-d43bab22cdd7","Type":"ContainerDied","Data":"93b933d398bf69ad4b58b7fdb977b74ffc92753a30cb95e9355a7916153aa722"} Jan 31 09:27:57 crc kubenswrapper[4830]: I0131 09:27:57.785799 4830 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 31 09:27:59 crc kubenswrapper[4830]: I0131 09:27:59.908050 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 09:27:59 crc kubenswrapper[4830]: I0131 09:27:59.908442 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 09:27:59 crc kubenswrapper[4830]: I0131 09:27:59.974935 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 09:27:59 crc kubenswrapper[4830]: I0131 09:27:59.975412 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 09:28:00 crc kubenswrapper[4830]: I0131 09:28:00.004638 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 31 09:28:00 crc kubenswrapper[4830]: I0131 09:28:00.048027 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 31 09:28:00 crc kubenswrapper[4830]: I0131 09:28:00.444032 4830 generic.go:334] "Generic (PLEG): container finished" podID="ef00418a-82b1-46ac-b1af-d43bab22cdd7" containerID="c08c55d4973b1b44cefb6000659f130d7e3193eab1fd885a862b2dd81afef2ca" exitCode=0 Jan 31 09:28:00 crc kubenswrapper[4830]: I0131 09:28:00.446678 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef00418a-82b1-46ac-b1af-d43bab22cdd7","Type":"ContainerDied","Data":"c08c55d4973b1b44cefb6000659f130d7e3193eab1fd885a862b2dd81afef2ca"} Jan 31 09:28:00 crc kubenswrapper[4830]: I0131 09:28:00.616275 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 31 09:28:00 crc kubenswrapper[4830]: I0131 09:28:00.644470 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 31 09:28:00 crc kubenswrapper[4830]: I0131 09:28:00.717643 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 31 09:28:00 crc kubenswrapper[4830]: I0131 09:28:00.959175 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="aa9e4ee2-a265-4397-a7ff-42e9ab868237" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.251:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 09:28:00 crc kubenswrapper[4830]: I0131 09:28:00.959307 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="aa9e4ee2-a265-4397-a7ff-42e9ab868237" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.251:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 09:28:00 crc kubenswrapper[4830]: I0131 09:28:00.982809 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-7ft4j"] Jan 31 09:28:00 crc kubenswrapper[4830]: I0131 09:28:00.984793 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7ft4j" Jan 31 09:28:00 crc kubenswrapper[4830]: I0131 09:28:00.988705 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 31 09:28:00 crc kubenswrapper[4830]: I0131 09:28:00.989113 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 31 09:28:00 crc kubenswrapper[4830]: I0131 09:28:00.997069 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-7ft4j"] Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.061389 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.252:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.061840 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.252:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.101998 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5adada53-61e1-406d-b9ac-0c004999b351-scripts\") pod \"nova-cell1-cell-mapping-7ft4j\" (UID: \"5adada53-61e1-406d-b9ac-0c004999b351\") " pod="openstack/nova-cell1-cell-mapping-7ft4j" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.102062 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5adada53-61e1-406d-b9ac-0c004999b351-config-data\") pod \"nova-cell1-cell-mapping-7ft4j\" (UID: \"5adada53-61e1-406d-b9ac-0c004999b351\") " pod="openstack/nova-cell1-cell-mapping-7ft4j" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.102114 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5adada53-61e1-406d-b9ac-0c004999b351-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-7ft4j\" (UID: \"5adada53-61e1-406d-b9ac-0c004999b351\") " pod="openstack/nova-cell1-cell-mapping-7ft4j" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.102165 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vk2n\" (UniqueName: \"kubernetes.io/projected/5adada53-61e1-406d-b9ac-0c004999b351-kube-api-access-9vk2n\") pod \"nova-cell1-cell-mapping-7ft4j\" (UID: \"5adada53-61e1-406d-b9ac-0c004999b351\") " pod="openstack/nova-cell1-cell-mapping-7ft4j" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.212033 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5adada53-61e1-406d-b9ac-0c004999b351-scripts\") pod \"nova-cell1-cell-mapping-7ft4j\" (UID: \"5adada53-61e1-406d-b9ac-0c004999b351\") " pod="openstack/nova-cell1-cell-mapping-7ft4j" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.212553 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5adada53-61e1-406d-b9ac-0c004999b351-config-data\") pod \"nova-cell1-cell-mapping-7ft4j\" (UID: \"5adada53-61e1-406d-b9ac-0c004999b351\") " pod="openstack/nova-cell1-cell-mapping-7ft4j" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.212649 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5adada53-61e1-406d-b9ac-0c004999b351-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-7ft4j\" (UID: \"5adada53-61e1-406d-b9ac-0c004999b351\") " pod="openstack/nova-cell1-cell-mapping-7ft4j" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.212746 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vk2n\" (UniqueName: \"kubernetes.io/projected/5adada53-61e1-406d-b9ac-0c004999b351-kube-api-access-9vk2n\") pod \"nova-cell1-cell-mapping-7ft4j\" (UID: \"5adada53-61e1-406d-b9ac-0c004999b351\") " pod="openstack/nova-cell1-cell-mapping-7ft4j" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.249288 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vk2n\" (UniqueName: \"kubernetes.io/projected/5adada53-61e1-406d-b9ac-0c004999b351-kube-api-access-9vk2n\") pod \"nova-cell1-cell-mapping-7ft4j\" (UID: \"5adada53-61e1-406d-b9ac-0c004999b351\") " pod="openstack/nova-cell1-cell-mapping-7ft4j" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.256951 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5adada53-61e1-406d-b9ac-0c004999b351-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-7ft4j\" (UID: \"5adada53-61e1-406d-b9ac-0c004999b351\") " pod="openstack/nova-cell1-cell-mapping-7ft4j" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.259274 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5adada53-61e1-406d-b9ac-0c004999b351-config-data\") pod \"nova-cell1-cell-mapping-7ft4j\" (UID: \"5adada53-61e1-406d-b9ac-0c004999b351\") " pod="openstack/nova-cell1-cell-mapping-7ft4j" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.280743 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5adada53-61e1-406d-b9ac-0c004999b351-scripts\") pod \"nova-cell1-cell-mapping-7ft4j\" (UID: \"5adada53-61e1-406d-b9ac-0c004999b351\") " pod="openstack/nova-cell1-cell-mapping-7ft4j" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.327800 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7ft4j" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.570397 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.571262 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef00418a-82b1-46ac-b1af-d43bab22cdd7","Type":"ContainerDied","Data":"3dd1d03bf2c33bcda87c69ea7edb909a8f8932310d9dfe4bd8346bb3f65d11f8"} Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.571343 4830 scope.go:117] "RemoveContainer" containerID="077ae2e5ac08ad7f845a7b26112f7e5918825e4836931f9cd20feeac9df70d0b" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.605166 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"363ba132-8eb0-4c3d-b389-73ac72c26220","Type":"ContainerStarted","Data":"d5bd2657769aff37f6c820103505904627bff79ece535343980de7eae4c805d0"} Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.661967 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv6s6\" (UniqueName: \"kubernetes.io/projected/ef00418a-82b1-46ac-b1af-d43bab22cdd7-kube-api-access-rv6s6\") pod \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.663502 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef00418a-82b1-46ac-b1af-d43bab22cdd7-config-data\") pod \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.663619 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef00418a-82b1-46ac-b1af-d43bab22cdd7-sg-core-conf-yaml\") pod \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.663745 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef00418a-82b1-46ac-b1af-d43bab22cdd7-scripts\") pod \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.663837 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef00418a-82b1-46ac-b1af-d43bab22cdd7-log-httpd\") pod \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.663930 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef00418a-82b1-46ac-b1af-d43bab22cdd7-run-httpd\") pod \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.664044 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef00418a-82b1-46ac-b1af-d43bab22cdd7-combined-ca-bundle\") pod \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\" (UID: \"ef00418a-82b1-46ac-b1af-d43bab22cdd7\") " Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.670349 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef00418a-82b1-46ac-b1af-d43bab22cdd7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod 
"ef00418a-82b1-46ac-b1af-d43bab22cdd7" (UID: "ef00418a-82b1-46ac-b1af-d43bab22cdd7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.697157 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef00418a-82b1-46ac-b1af-d43bab22cdd7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ef00418a-82b1-46ac-b1af-d43bab22cdd7" (UID: "ef00418a-82b1-46ac-b1af-d43bab22cdd7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.711888 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef00418a-82b1-46ac-b1af-d43bab22cdd7-scripts" (OuterVolumeSpecName: "scripts") pod "ef00418a-82b1-46ac-b1af-d43bab22cdd7" (UID: "ef00418a-82b1-46ac-b1af-d43bab22cdd7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.714106 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef00418a-82b1-46ac-b1af-d43bab22cdd7-kube-api-access-rv6s6" (OuterVolumeSpecName: "kube-api-access-rv6s6") pod "ef00418a-82b1-46ac-b1af-d43bab22cdd7" (UID: "ef00418a-82b1-46ac-b1af-d43bab22cdd7"). InnerVolumeSpecName "kube-api-access-rv6s6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.769531 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv6s6\" (UniqueName: \"kubernetes.io/projected/ef00418a-82b1-46ac-b1af-d43bab22cdd7-kube-api-access-rv6s6\") on node \"crc\" DevicePath \"\"" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.770241 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef00418a-82b1-46ac-b1af-d43bab22cdd7-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.770256 4830 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef00418a-82b1-46ac-b1af-d43bab22cdd7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.770265 4830 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef00418a-82b1-46ac-b1af-d43bab22cdd7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.818009 4830 scope.go:117] "RemoveContainer" containerID="315ffd30559b1ed532e837143e2d92e855d451508e9d85df5ce2487b259e3029" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.844103 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 31 09:28:01 crc kubenswrapper[4830]: I0131 09:28:01.924990 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef00418a-82b1-46ac-b1af-d43bab22cdd7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ef00418a-82b1-46ac-b1af-d43bab22cdd7" (UID: "ef00418a-82b1-46ac-b1af-d43bab22cdd7"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.001416 4830 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef00418a-82b1-46ac-b1af-d43bab22cdd7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.041482 4830 scope.go:117] "RemoveContainer" containerID="c08c55d4973b1b44cefb6000659f130d7e3193eab1fd885a862b2dd81afef2ca" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.130601 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef00418a-82b1-46ac-b1af-d43bab22cdd7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef00418a-82b1-46ac-b1af-d43bab22cdd7" (UID: "ef00418a-82b1-46ac-b1af-d43bab22cdd7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.137968 4830 scope.go:117] "RemoveContainer" containerID="93b933d398bf69ad4b58b7fdb977b74ffc92753a30cb95e9355a7916153aa722" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.241550 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef00418a-82b1-46ac-b1af-d43bab22cdd7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.243796 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef00418a-82b1-46ac-b1af-d43bab22cdd7-config-data" (OuterVolumeSpecName: "config-data") pod "ef00418a-82b1-46ac-b1af-d43bab22cdd7" (UID: "ef00418a-82b1-46ac-b1af-d43bab22cdd7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.254997 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-z7qcf" podUID="01a16d5c-bea7-4cab-8c88-206e4c5c901d" containerName="registry-server" probeResult="failure" output=< Jan 31 09:28:02 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 09:28:02 crc kubenswrapper[4830]: > Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.359220 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef00418a-82b1-46ac-b1af-d43bab22cdd7-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.649747 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.702562 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.733798 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.779715 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:28:02 crc kubenswrapper[4830]: E0131 09:28:02.780551 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef00418a-82b1-46ac-b1af-d43bab22cdd7" containerName="proxy-httpd" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.780574 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef00418a-82b1-46ac-b1af-d43bab22cdd7" containerName="proxy-httpd" Jan 31 09:28:02 crc kubenswrapper[4830]: E0131 09:28:02.780587 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef00418a-82b1-46ac-b1af-d43bab22cdd7" containerName="ceilometer-notification-agent" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.780594 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef00418a-82b1-46ac-b1af-d43bab22cdd7" containerName="ceilometer-notification-agent" Jan 31 09:28:02 crc kubenswrapper[4830]: E0131 09:28:02.780615 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef00418a-82b1-46ac-b1af-d43bab22cdd7" containerName="ceilometer-central-agent" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.780622 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef00418a-82b1-46ac-b1af-d43bab22cdd7" containerName="ceilometer-central-agent" Jan 31 09:28:02 crc kubenswrapper[4830]: E0131 09:28:02.780633 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef00418a-82b1-46ac-b1af-d43bab22cdd7" containerName="sg-core" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.780639 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef00418a-82b1-46ac-b1af-d43bab22cdd7" containerName="sg-core" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.780937 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef00418a-82b1-46ac-b1af-d43bab22cdd7" containerName="proxy-httpd" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.780964 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef00418a-82b1-46ac-b1af-d43bab22cdd7" containerName="sg-core" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.780976 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef00418a-82b1-46ac-b1af-d43bab22cdd7" containerName="ceilometer-notification-agent" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.780991 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef00418a-82b1-46ac-b1af-d43bab22cdd7" containerName="ceilometer-central-agent" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.788414 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.793826 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.794518 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.801908 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-7ft4j"] Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.906054 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5857f9d0-2512-4a0b-bdf9-e236d864e814-run-httpd\") pod \"ceilometer-0\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " pod="openstack/ceilometer-0" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.906115 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5857f9d0-2512-4a0b-bdf9-e236d864e814-scripts\") pod \"ceilometer-0\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " pod="openstack/ceilometer-0" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.906174 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tng26\" (UniqueName: \"kubernetes.io/projected/5857f9d0-2512-4a0b-bdf9-e236d864e814-kube-api-access-tng26\") pod \"ceilometer-0\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " pod="openstack/ceilometer-0" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.906214 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5857f9d0-2512-4a0b-bdf9-e236d864e814-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " pod="openstack/ceilometer-0" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.906334 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5857f9d0-2512-4a0b-bdf9-e236d864e814-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " pod="openstack/ceilometer-0" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.906367 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5857f9d0-2512-4a0b-bdf9-e236d864e814-log-httpd\") pod \"ceilometer-0\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " pod="openstack/ceilometer-0" Jan 31 09:28:02 crc kubenswrapper[4830]: I0131 09:28:02.906401 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5857f9d0-2512-4a0b-bdf9-e236d864e814-config-data\") pod \"ceilometer-0\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " pod="openstack/ceilometer-0" Jan 31 09:28:03 crc kubenswrapper[4830]: I0131 09:28:03.007060 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:28:03 crc kubenswrapper[4830]: I0131 09:28:03.010434 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5857f9d0-2512-4a0b-bdf9-e236d864e814-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " pod="openstack/ceilometer-0" Jan 31 09:28:03 crc kubenswrapper[4830]: I0131 09:28:03.010501 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5857f9d0-2512-4a0b-bdf9-e236d864e814-log-httpd\") pod \"ceilometer-0\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " pod="openstack/ceilometer-0" Jan 31 09:28:03 crc kubenswrapper[4830]: I0131 09:28:03.010550 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5857f9d0-2512-4a0b-bdf9-e236d864e814-config-data\") pod \"ceilometer-0\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " pod="openstack/ceilometer-0" Jan 31 09:28:03 crc kubenswrapper[4830]: I0131 09:28:03.010664 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5857f9d0-2512-4a0b-bdf9-e236d864e814-run-httpd\") pod \"ceilometer-0\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " pod="openstack/ceilometer-0" Jan 31 09:28:03 crc kubenswrapper[4830]: I0131 09:28:03.010703 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5857f9d0-2512-4a0b-bdf9-e236d864e814-scripts\") pod \"ceilometer-0\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " pod="openstack/ceilometer-0" Jan 31 09:28:03 crc kubenswrapper[4830]: I0131 09:28:03.010841 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tng26\" (UniqueName: \"kubernetes.io/projected/5857f9d0-2512-4a0b-bdf9-e236d864e814-kube-api-access-tng26\") pod \"ceilometer-0\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " pod="openstack/ceilometer-0" Jan 31 09:28:03 crc kubenswrapper[4830]: I0131 09:28:03.010909 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5857f9d0-2512-4a0b-bdf9-e236d864e814-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " pod="openstack/ceilometer-0" Jan 31 09:28:03 crc kubenswrapper[4830]: I0131 09:28:03.011940 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5857f9d0-2512-4a0b-bdf9-e236d864e814-run-httpd\") pod \"ceilometer-0\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " pod="openstack/ceilometer-0" Jan 31 09:28:03 crc kubenswrapper[4830]: I0131 09:28:03.012001 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5857f9d0-2512-4a0b-bdf9-e236d864e814-log-httpd\") pod \"ceilometer-0\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " pod="openstack/ceilometer-0" Jan 31 09:28:03 crc kubenswrapper[4830]: I0131 09:28:03.034885 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5857f9d0-2512-4a0b-bdf9-e236d864e814-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " pod="openstack/ceilometer-0" Jan 31 09:28:03 crc kubenswrapper[4830]: I0131 09:28:03.034887 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5857f9d0-2512-4a0b-bdf9-e236d864e814-scripts\") pod \"ceilometer-0\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " pod="openstack/ceilometer-0" Jan 31 09:28:03 crc kubenswrapper[4830]: I0131 09:28:03.037683 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5857f9d0-2512-4a0b-bdf9-e236d864e814-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " pod="openstack/ceilometer-0" Jan 31 09:28:03 crc kubenswrapper[4830]: I0131 09:28:03.044500 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tng26\" (UniqueName: \"kubernetes.io/projected/5857f9d0-2512-4a0b-bdf9-e236d864e814-kube-api-access-tng26\") pod \"ceilometer-0\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " pod="openstack/ceilometer-0" Jan 31 09:28:03 crc kubenswrapper[4830]: I0131 09:28:03.061273 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5857f9d0-2512-4a0b-bdf9-e236d864e814-config-data\") pod \"ceilometer-0\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " pod="openstack/ceilometer-0" Jan 31 09:28:03 crc kubenswrapper[4830]: I0131 09:28:03.306095 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 09:28:03 crc kubenswrapper[4830]: I0131 09:28:03.695375 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7ft4j" event={"ID":"5adada53-61e1-406d-b9ac-0c004999b351","Type":"ContainerStarted","Data":"7641337d358c0b661af22faa289cf47753f5bce205b9d5934eb013a464e16db3"} Jan 31 09:28:03 crc kubenswrapper[4830]: I0131 09:28:03.695443 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7ft4j" event={"ID":"5adada53-61e1-406d-b9ac-0c004999b351","Type":"ContainerStarted","Data":"ec4dcab692d1b6b4df7c91ee12ec0736dc220f2be1e35b9fe2b5bdb0bd7629bf"} Jan 31 09:28:03 crc kubenswrapper[4830]: I0131 09:28:03.732761 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-7ft4j" podStartSLOduration=3.732706275 podStartE2EDuration="3.732706275s" podCreationTimestamp="2026-01-31 09:28:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:28:03.714673489 +0000 UTC m=+1628.208035931" watchObservedRunningTime="2026-01-31 09:28:03.732706275 +0000 UTC m=+1628.226068717" Jan 31 09:28:04 crc kubenswrapper[4830]: W0131 09:28:04.122514 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5857f9d0_2512_4a0b_bdf9_e236d864e814.slice/crio-a5176660c2c027f6ffb9631512c27edb99aae1fb47331f2cccf64041b62370cc WatchSource:0}: Error finding container a5176660c2c027f6ffb9631512c27edb99aae1fb47331f2cccf64041b62370cc: Status 404 returned error can't find the container with id a5176660c2c027f6ffb9631512c27edb99aae1fb47331f2cccf64041b62370cc Jan 31 09:28:04 crc kubenswrapper[4830]: I0131 09:28:04.128142 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 09:28:04 crc kubenswrapper[4830]: I0131 09:28:04.132837 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:28:04 crc kubenswrapper[4830]: I0131 09:28:04.270672 4830 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="ef00418a-82b1-46ac-b1af-d43bab22cdd7" path="/var/lib/kubelet/pods/ef00418a-82b1-46ac-b1af-d43bab22cdd7/volumes" Jan 31 09:28:04 crc kubenswrapper[4830]: I0131 09:28:04.718546 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5857f9d0-2512-4a0b-bdf9-e236d864e814","Type":"ContainerStarted","Data":"a5176660c2c027f6ffb9631512c27edb99aae1fb47331f2cccf64041b62370cc"} Jan 31 09:28:04 crc kubenswrapper[4830]: I0131 09:28:04.772574 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:28:08 crc kubenswrapper[4830]: I0131 09:28:08.795924 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5857f9d0-2512-4a0b-bdf9-e236d864e814","Type":"ContainerStarted","Data":"9fce759799afb7fc35f253cbb28f800b653f9d711262b2ccdc49247cc45d536c"} Jan 31 09:28:08 crc kubenswrapper[4830]: I0131 09:28:08.802630 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"363ba132-8eb0-4c3d-b389-73ac72c26220","Type":"ContainerStarted","Data":"7f5b9097442abbf4fcd2ce132a5c0536e9536fa76adc4c85a44884c98d076da1"} Jan 31 09:28:08 crc kubenswrapper[4830]: I0131 09:28:08.802842 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="363ba132-8eb0-4c3d-b389-73ac72c26220" containerName="aodh-api" containerID="cri-o://5ce9301cebb4a1abab1c58d14213878245aa5c5425548d00f93c5ea484bc291f" gracePeriod=30 Jan 31 09:28:08 crc kubenswrapper[4830]: I0131 09:28:08.802908 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="363ba132-8eb0-4c3d-b389-73ac72c26220" containerName="aodh-listener" containerID="cri-o://7f5b9097442abbf4fcd2ce132a5c0536e9536fa76adc4c85a44884c98d076da1" gracePeriod=30 Jan 31 09:28:08 crc kubenswrapper[4830]: I0131 09:28:08.802958 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="363ba132-8eb0-4c3d-b389-73ac72c26220" containerName="aodh-evaluator" containerID="cri-o://66dbbe851105235b2394a62f0d13e090891c227cb06e0ba9a59dad497b5f7c82" gracePeriod=30 Jan 31 09:28:08 crc kubenswrapper[4830]: I0131 09:28:08.803007 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="363ba132-8eb0-4c3d-b389-73ac72c26220" containerName="aodh-notifier" containerID="cri-o://d5bd2657769aff37f6c820103505904627bff79ece535343980de7eae4c805d0" gracePeriod=30 Jan 31 09:28:08 crc kubenswrapper[4830]: I0131 09:28:08.836674 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.196656135 podStartE2EDuration="16.836646542s" podCreationTimestamp="2026-01-31 09:27:52 +0000 UTC" firstStartedPulling="2026-01-31 09:27:53.531764158 +0000 UTC m=+1618.025126600" lastFinishedPulling="2026-01-31 09:28:07.171754565 +0000 UTC m=+1631.665117007" observedRunningTime="2026-01-31 09:28:08.824328689 +0000 UTC m=+1633.317691121" watchObservedRunningTime="2026-01-31 09:28:08.836646542 +0000 UTC m=+1633.330008974" Jan 31 09:28:09 crc kubenswrapper[4830]: I0131 09:28:09.267119 4830 scope.go:117] "RemoveContainer" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0" Jan 31 09:28:09 crc kubenswrapper[4830]: E0131 09:28:09.267546 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:28:09 crc kubenswrapper[4830]: I0131 09:28:09.823429 4830 generic.go:334] "Generic (PLEG): container finished" podID="363ba132-8eb0-4c3d-b389-73ac72c26220" containerID="66dbbe851105235b2394a62f0d13e090891c227cb06e0ba9a59dad497b5f7c82" exitCode=0
Jan 31 09:28:09 crc kubenswrapper[4830]: I0131 09:28:09.824589 4830 generic.go:334] "Generic (PLEG): container finished" podID="363ba132-8eb0-4c3d-b389-73ac72c26220" containerID="5ce9301cebb4a1abab1c58d14213878245aa5c5425548d00f93c5ea484bc291f" exitCode=0
Jan 31 09:28:09 crc kubenswrapper[4830]: I0131 09:28:09.824190 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"363ba132-8eb0-4c3d-b389-73ac72c26220","Type":"ContainerDied","Data":"66dbbe851105235b2394a62f0d13e090891c227cb06e0ba9a59dad497b5f7c82"}
Jan 31 09:28:09 crc kubenswrapper[4830]: I0131 09:28:09.828437 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"363ba132-8eb0-4c3d-b389-73ac72c26220","Type":"ContainerDied","Data":"5ce9301cebb4a1abab1c58d14213878245aa5c5425548d00f93c5ea484bc291f"}
Jan 31 09:28:09 crc kubenswrapper[4830]: I0131 09:28:09.917068 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 31 09:28:09 crc kubenswrapper[4830]: I0131 09:28:09.925432 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 31 09:28:09 crc kubenswrapper[4830]: I0131 09:28:09.934343 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 31 09:28:09 crc kubenswrapper[4830]: I0131 09:28:09.993274 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 31 09:28:09 crc kubenswrapper[4830]: I0131 09:28:09.994814 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 31 09:28:10 crc kubenswrapper[4830]: I0131 09:28:10.007233 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 31 09:28:10 crc kubenswrapper[4830]: I0131 09:28:10.046276 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 31 09:28:10 crc kubenswrapper[4830]: I0131 09:28:10.846150 4830 generic.go:334] "Generic (PLEG): container finished" podID="363ba132-8eb0-4c3d-b389-73ac72c26220" containerID="d5bd2657769aff37f6c820103505904627bff79ece535343980de7eae4c805d0" exitCode=0
Jan 31 09:28:10 crc kubenswrapper[4830]: I0131 09:28:10.846272 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"363ba132-8eb0-4c3d-b389-73ac72c26220","Type":"ContainerDied","Data":"d5bd2657769aff37f6c820103505904627bff79ece535343980de7eae4c805d0"}
Jan 31 09:28:10 crc kubenswrapper[4830]: I0131 09:28:10.851054 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5857f9d0-2512-4a0b-bdf9-e236d864e814","Type":"ContainerStarted","Data":"2f6403aa6c93a3710ea73f76bd0e339077242676de2969aed20e7ba76b4ab985"}
Jan 31 09:28:10 crc kubenswrapper[4830]: I0131 09:28:10.852216 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 31 09:28:10 crc kubenswrapper[4830]: I0131 09:28:10.859544 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 31 09:28:10 crc kubenswrapper[4830]: I0131 09:28:10.863938 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 31 09:28:11 crc kubenswrapper[4830]: I0131 09:28:11.218356 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-ftd5m"]
Jan 31 09:28:11 crc kubenswrapper[4830]: I0131 09:28:11.231006 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m"
Jan 31 09:28:11 crc kubenswrapper[4830]: I0131 09:28:11.248704 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-ftd5m"]
Jan 31 09:28:11 crc kubenswrapper[4830]: I0131 09:28:11.285494 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-ovsdbserver-sb\") pod \"dnsmasq-dns-79b5d74c8c-ftd5m\" (UID: \"455ee04e-f0d7-431d-8127-c66beff070e7\") " pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m"
Jan 31 09:28:11 crc kubenswrapper[4830]: I0131 09:28:11.285586 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-dns-svc\") pod \"dnsmasq-dns-79b5d74c8c-ftd5m\" (UID: \"455ee04e-f0d7-431d-8127-c66beff070e7\") " pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m"
Jan 31 09:28:11 crc kubenswrapper[4830]: I0131 09:28:11.285756 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-dns-swift-storage-0\") pod \"dnsmasq-dns-79b5d74c8c-ftd5m\" (UID: \"455ee04e-f0d7-431d-8127-c66beff070e7\") " pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m"
Jan 31 09:28:11 crc kubenswrapper[4830]: I0131 09:28:11.285794 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-config\") pod \"dnsmasq-dns-79b5d74c8c-ftd5m\" (UID: \"455ee04e-f0d7-431d-8127-c66beff070e7\") " pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m"
Jan 31 09:28:11 crc kubenswrapper[4830]: I0131 09:28:11.285925 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6chl\" (UniqueName: \"kubernetes.io/projected/455ee04e-f0d7-431d-8127-c66beff070e7-kube-api-access-b6chl\") pod \"dnsmasq-dns-79b5d74c8c-ftd5m\" (UID: \"455ee04e-f0d7-431d-8127-c66beff070e7\") " pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m"
Jan 31 09:28:11 crc kubenswrapper[4830]: I0131 09:28:11.286088 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-ovsdbserver-nb\") pod \"dnsmasq-dns-79b5d74c8c-ftd5m\" (UID: \"455ee04e-f0d7-431d-8127-c66beff070e7\") " pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m"
Jan 31 09:28:11 crc kubenswrapper[4830]: I0131 09:28:11.313045 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z7qcf"
Jan 31 09:28:11 crc kubenswrapper[4830]: I0131 09:28:11.389715 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6chl\" (UniqueName: \"kubernetes.io/projected/455ee04e-f0d7-431d-8127-c66beff070e7-kube-api-access-b6chl\") pod \"dnsmasq-dns-79b5d74c8c-ftd5m\" (UID: \"455ee04e-f0d7-431d-8127-c66beff070e7\") " pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m"
Jan 31 09:28:11 crc kubenswrapper[4830]: I0131 09:28:11.389987 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-ovsdbserver-nb\") pod \"dnsmasq-dns-79b5d74c8c-ftd5m\" (UID: \"455ee04e-f0d7-431d-8127-c66beff070e7\") " pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m"
Jan 31 09:28:11 crc kubenswrapper[4830]: I0131 09:28:11.390087 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-ovsdbserver-sb\") pod \"dnsmasq-dns-79b5d74c8c-ftd5m\" (UID: \"455ee04e-f0d7-431d-8127-c66beff070e7\") " pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m"
Jan 31 09:28:11 crc kubenswrapper[4830]: I0131 09:28:11.390147 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-dns-svc\") pod \"dnsmasq-dns-79b5d74c8c-ftd5m\" (UID: \"455ee04e-f0d7-431d-8127-c66beff070e7\") " pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m"
Jan 31 09:28:11 crc kubenswrapper[4830]: I0131 09:28:11.390386 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-dns-swift-storage-0\") pod \"dnsmasq-dns-79b5d74c8c-ftd5m\" (UID: \"455ee04e-f0d7-431d-8127-c66beff070e7\") " pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m"
Jan 31 09:28:11 crc kubenswrapper[4830]: I0131 09:28:11.390443 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-config\") pod \"dnsmasq-dns-79b5d74c8c-ftd5m\" (UID: \"455ee04e-f0d7-431d-8127-c66beff070e7\") " pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m"
Jan 31 09:28:11 crc kubenswrapper[4830]: I0131 09:28:11.391466 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-config\") pod \"dnsmasq-dns-79b5d74c8c-ftd5m\" (UID: \"455ee04e-f0d7-431d-8127-c66beff070e7\") " pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m"
Jan 31 09:28:11 crc kubenswrapper[4830]: I0131 09:28:11.391827 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-ovsdbserver-sb\") pod \"dnsmasq-dns-79b5d74c8c-ftd5m\" (UID: \"455ee04e-f0d7-431d-8127-c66beff070e7\") " pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m"
Jan 31 09:28:11 crc kubenswrapper[4830]: I0131 09:28:11.392572 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-dns-svc\") pod \"dnsmasq-dns-79b5d74c8c-ftd5m\" (UID: \"455ee04e-f0d7-431d-8127-c66beff070e7\") " pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m"
Jan 31 09:28:11 crc kubenswrapper[4830]: I0131 09:28:11.410939 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-dns-swift-storage-0\") pod \"dnsmasq-dns-79b5d74c8c-ftd5m\" (UID: \"455ee04e-f0d7-431d-8127-c66beff070e7\") " pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m"
Jan 31 09:28:11 crc kubenswrapper[4830]: I0131 09:28:11.410939 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-ovsdbserver-nb\") pod \"dnsmasq-dns-79b5d74c8c-ftd5m\" (UID: \"455ee04e-f0d7-431d-8127-c66beff070e7\") " pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m"
Jan 31 09:28:11 crc kubenswrapper[4830]: I0131 09:28:11.456990 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6chl\" (UniqueName: \"kubernetes.io/projected/455ee04e-f0d7-431d-8127-c66beff070e7-kube-api-access-b6chl\") pod \"dnsmasq-dns-79b5d74c8c-ftd5m\" (UID: \"455ee04e-f0d7-431d-8127-c66beff070e7\") " pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m"
Jan 31 09:28:11 crc kubenswrapper[4830]: I0131 09:28:11.590578 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m"
Jan 31 09:28:11 crc kubenswrapper[4830]: I0131 09:28:11.622254 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z7qcf"
Jan 31 09:28:11 crc kubenswrapper[4830]: I0131 09:28:11.775758 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z7qcf"]
Jan 31 09:28:12 crc kubenswrapper[4830]: I0131 09:28:12.537942 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-ftd5m"]
Jan 31 09:28:12 crc kubenswrapper[4830]: I0131 09:28:12.891949 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m" event={"ID":"455ee04e-f0d7-431d-8127-c66beff070e7","Type":"ContainerStarted","Data":"4c5839680f7a531830cf42ebe050cb9810070d19415d284f652b554a4d1c077c"}
Jan 31 09:28:12 crc kubenswrapper[4830]: I0131 09:28:12.896710 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5857f9d0-2512-4a0b-bdf9-e236d864e814","Type":"ContainerStarted","Data":"19eb435786fcb9f0783ed5526263f7badc634eb78acbe7fc09f33de7d4f9c636"}
Jan 31 09:28:12 crc kubenswrapper[4830]: I0131 09:28:12.898906 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z7qcf" podUID="01a16d5c-bea7-4cab-8c88-206e4c5c901d" containerName="registry-server" containerID="cri-o://2187c5f1d23270780880c4f43561c556517a362b28d2de743139a8c9eccf5048" gracePeriod=2
Jan 31 09:28:13 crc kubenswrapper[4830]: I0131 09:28:13.738624 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z7qcf"
Jan 31 09:28:13 crc kubenswrapper[4830]: I0131 09:28:13.744261 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01a16d5c-bea7-4cab-8c88-206e4c5c901d-utilities\") pod \"01a16d5c-bea7-4cab-8c88-206e4c5c901d\" (UID: \"01a16d5c-bea7-4cab-8c88-206e4c5c901d\") "
Jan 31 09:28:13 crc kubenswrapper[4830]: I0131 09:28:13.744538 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01a16d5c-bea7-4cab-8c88-206e4c5c901d-catalog-content\") pod \"01a16d5c-bea7-4cab-8c88-206e4c5c901d\" (UID: \"01a16d5c-bea7-4cab-8c88-206e4c5c901d\") "
Jan 31 09:28:13 crc kubenswrapper[4830]: I0131 09:28:13.744627 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9k8j\" (UniqueName: \"kubernetes.io/projected/01a16d5c-bea7-4cab-8c88-206e4c5c901d-kube-api-access-h9k8j\") pod \"01a16d5c-bea7-4cab-8c88-206e4c5c901d\" (UID: \"01a16d5c-bea7-4cab-8c88-206e4c5c901d\") "
Jan 31 09:28:13 crc kubenswrapper[4830]: I0131 09:28:13.745561 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01a16d5c-bea7-4cab-8c88-206e4c5c901d-utilities" (OuterVolumeSpecName: "utilities") pod "01a16d5c-bea7-4cab-8c88-206e4c5c901d" (UID: "01a16d5c-bea7-4cab-8c88-206e4c5c901d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:28:13 crc kubenswrapper[4830]: I0131 09:28:13.746373 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01a16d5c-bea7-4cab-8c88-206e4c5c901d-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 09:28:13 crc kubenswrapper[4830]: I0131 09:28:13.768087 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01a16d5c-bea7-4cab-8c88-206e4c5c901d-kube-api-access-h9k8j" (OuterVolumeSpecName: "kube-api-access-h9k8j") pod "01a16d5c-bea7-4cab-8c88-206e4c5c901d" (UID: "01a16d5c-bea7-4cab-8c88-206e4c5c901d"). InnerVolumeSpecName "kube-api-access-h9k8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:28:13 crc kubenswrapper[4830]: I0131 09:28:13.826189 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01a16d5c-bea7-4cab-8c88-206e4c5c901d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01a16d5c-bea7-4cab-8c88-206e4c5c901d" (UID: "01a16d5c-bea7-4cab-8c88-206e4c5c901d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:28:13 crc kubenswrapper[4830]: I0131 09:28:13.848427 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01a16d5c-bea7-4cab-8c88-206e4c5c901d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 09:28:13 crc kubenswrapper[4830]: I0131 09:28:13.848477 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9k8j\" (UniqueName: \"kubernetes.io/projected/01a16d5c-bea7-4cab-8c88-206e4c5c901d-kube-api-access-h9k8j\") on node \"crc\" DevicePath \"\""
Jan 31 09:28:13 crc kubenswrapper[4830]: I0131 09:28:13.923178 4830 generic.go:334] "Generic (PLEG): container finished" podID="01a16d5c-bea7-4cab-8c88-206e4c5c901d" containerID="2187c5f1d23270780880c4f43561c556517a362b28d2de743139a8c9eccf5048" exitCode=0
Jan 31 09:28:13 crc kubenswrapper[4830]: I0131 09:28:13.923271 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7qcf" event={"ID":"01a16d5c-bea7-4cab-8c88-206e4c5c901d","Type":"ContainerDied","Data":"2187c5f1d23270780880c4f43561c556517a362b28d2de743139a8c9eccf5048"}
Jan 31 09:28:13 crc kubenswrapper[4830]: I0131 09:28:13.923323 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z7qcf"
Jan 31 09:28:13 crc kubenswrapper[4830]: I0131 09:28:13.923366 4830 scope.go:117] "RemoveContainer" containerID="2187c5f1d23270780880c4f43561c556517a362b28d2de743139a8c9eccf5048"
Jan 31 09:28:13 crc kubenswrapper[4830]: I0131 09:28:13.923334 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7qcf" event={"ID":"01a16d5c-bea7-4cab-8c88-206e4c5c901d","Type":"ContainerDied","Data":"c421d717381a092b193bbdd5a54e791946f7df44748267e7ccb27f58fd7a9072"}
Jan 31 09:28:13 crc kubenswrapper[4830]: I0131 09:28:13.927686 4830 generic.go:334] "Generic (PLEG): container finished" podID="5adada53-61e1-406d-b9ac-0c004999b351" containerID="7641337d358c0b661af22faa289cf47753f5bce205b9d5934eb013a464e16db3" exitCode=0
Jan 31 09:28:13 crc kubenswrapper[4830]: I0131 09:28:13.927797 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7ft4j" event={"ID":"5adada53-61e1-406d-b9ac-0c004999b351","Type":"ContainerDied","Data":"7641337d358c0b661af22faa289cf47753f5bce205b9d5934eb013a464e16db3"}
Jan 31 09:28:13 crc kubenswrapper[4830]: I0131 09:28:13.934702 4830 generic.go:334] "Generic (PLEG): container finished" podID="455ee04e-f0d7-431d-8127-c66beff070e7" containerID="c82d00e226cf94442d0cd376d30070a03ec583990fada5821f8ef972eebd126d" exitCode=0
Jan 31 09:28:13 crc kubenswrapper[4830]: I0131 09:28:13.934788 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m" event={"ID":"455ee04e-f0d7-431d-8127-c66beff070e7","Type":"ContainerDied","Data":"c82d00e226cf94442d0cd376d30070a03ec583990fada5821f8ef972eebd126d"}
Jan 31 09:28:14 crc kubenswrapper[4830]: I0131 09:28:14.028297 4830 scope.go:117] "RemoveContainer" containerID="9038e9d98b25ae3230e069f5b241cb6a8f96e63cfae411b12e4c5bb53b61169f"
Jan 31 09:28:14 crc kubenswrapper[4830]: I0131 09:28:14.052547 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z7qcf"]
Jan 31 09:28:14 crc kubenswrapper[4830]: I0131 09:28:14.072889 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z7qcf"]
Jan 31 09:28:14 crc kubenswrapper[4830]: I0131 09:28:14.177924 4830 scope.go:117] "RemoveContainer" containerID="d8b80451603f88d95edc41b647953eb26bdf22202f7a70ced037f6c1976b7e46"
Jan 31 09:28:14 crc kubenswrapper[4830]: I0131 09:28:14.236684 4830 scope.go:117] "RemoveContainer" containerID="2187c5f1d23270780880c4f43561c556517a362b28d2de743139a8c9eccf5048"
Jan 31 09:28:14 crc kubenswrapper[4830]: E0131 09:28:14.239450 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2187c5f1d23270780880c4f43561c556517a362b28d2de743139a8c9eccf5048\": container with ID starting with 2187c5f1d23270780880c4f43561c556517a362b28d2de743139a8c9eccf5048 not found: ID does not exist" containerID="2187c5f1d23270780880c4f43561c556517a362b28d2de743139a8c9eccf5048"
Jan 31 09:28:14 crc kubenswrapper[4830]: I0131 09:28:14.239518 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2187c5f1d23270780880c4f43561c556517a362b28d2de743139a8c9eccf5048"} err="failed to get container status \"2187c5f1d23270780880c4f43561c556517a362b28d2de743139a8c9eccf5048\": rpc error: code = NotFound desc = could not find container \"2187c5f1d23270780880c4f43561c556517a362b28d2de743139a8c9eccf5048\": container with ID starting with 2187c5f1d23270780880c4f43561c556517a362b28d2de743139a8c9eccf5048 not found: ID does not exist"
Jan 31 09:28:14 crc kubenswrapper[4830]: I0131 09:28:14.239562 4830 scope.go:117] "RemoveContainer" containerID="9038e9d98b25ae3230e069f5b241cb6a8f96e63cfae411b12e4c5bb53b61169f"
Jan 31 09:28:14 crc kubenswrapper[4830]: E0131 09:28:14.239954 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9038e9d98b25ae3230e069f5b241cb6a8f96e63cfae411b12e4c5bb53b61169f\": container with ID starting with 9038e9d98b25ae3230e069f5b241cb6a8f96e63cfae411b12e4c5bb53b61169f not found: ID does not exist" containerID="9038e9d98b25ae3230e069f5b241cb6a8f96e63cfae411b12e4c5bb53b61169f"
Jan 31 09:28:14 crc kubenswrapper[4830]: I0131 09:28:14.239980 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9038e9d98b25ae3230e069f5b241cb6a8f96e63cfae411b12e4c5bb53b61169f"} err="failed to get container status \"9038e9d98b25ae3230e069f5b241cb6a8f96e63cfae411b12e4c5bb53b61169f\": rpc error: code = NotFound desc = could not find container \"9038e9d98b25ae3230e069f5b241cb6a8f96e63cfae411b12e4c5bb53b61169f\": container with ID starting with 9038e9d98b25ae3230e069f5b241cb6a8f96e63cfae411b12e4c5bb53b61169f not found: ID does not exist"
Jan 31 09:28:14 crc kubenswrapper[4830]: I0131 09:28:14.240019 4830 scope.go:117] "RemoveContainer" containerID="d8b80451603f88d95edc41b647953eb26bdf22202f7a70ced037f6c1976b7e46"
Jan 31 09:28:14 crc kubenswrapper[4830]: E0131 09:28:14.244054 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8b80451603f88d95edc41b647953eb26bdf22202f7a70ced037f6c1976b7e46\": container with ID starting with d8b80451603f88d95edc41b647953eb26bdf22202f7a70ced037f6c1976b7e46 not found: ID does not exist" containerID="d8b80451603f88d95edc41b647953eb26bdf22202f7a70ced037f6c1976b7e46"
Jan 31 09:28:14 crc kubenswrapper[4830]: I0131 09:28:14.244108 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8b80451603f88d95edc41b647953eb26bdf22202f7a70ced037f6c1976b7e46"} err="failed to get container status \"d8b80451603f88d95edc41b647953eb26bdf22202f7a70ced037f6c1976b7e46\": rpc error: code = NotFound desc = could not find container \"d8b80451603f88d95edc41b647953eb26bdf22202f7a70ced037f6c1976b7e46\": container with ID starting with d8b80451603f88d95edc41b647953eb26bdf22202f7a70ced037f6c1976b7e46 not found: ID does not exist"
Jan 31 09:28:14 crc kubenswrapper[4830]: I0131 09:28:14.269918 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01a16d5c-bea7-4cab-8c88-206e4c5c901d" path="/var/lib/kubelet/pods/01a16d5c-bea7-4cab-8c88-206e4c5c901d/volumes"
Jan 31 09:28:14 crc kubenswrapper[4830]: I0131 09:28:14.652959 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 31 09:28:14 crc kubenswrapper[4830]: I0131 09:28:14.653322 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc" containerName="nova-api-log" containerID="cri-o://eb6a70f771a74535413152a30b974fbe478b4e49492acadca73167f8f1e8b78e" gracePeriod=30
Jan 31 09:28:14 crc kubenswrapper[4830]: I0131 09:28:14.654187 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc" containerName="nova-api-api" containerID="cri-o://fc8daedadd2b8254e191210fb2ee51454a3361d7e82dcf3c135e0fd60fcde1f7" gracePeriod=30
Jan 31 09:28:14 crc kubenswrapper[4830]: I0131 09:28:14.990199 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m" event={"ID":"455ee04e-f0d7-431d-8127-c66beff070e7","Type":"ContainerStarted","Data":"f4b2dc73f28a2cbb1eb45d61dd7c88ea9cc144a52ff072abd4fe43468db98a87"}
Jan 31 09:28:14 crc kubenswrapper[4830]: I0131 09:28:14.992504 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m"
Jan 31 09:28:15 crc kubenswrapper[4830]: I0131 09:28:15.039125 4830 generic.go:334] "Generic (PLEG): container finished" podID="8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc" containerID="eb6a70f771a74535413152a30b974fbe478b4e49492acadca73167f8f1e8b78e" exitCode=143
Jan 31 09:28:15 crc kubenswrapper[4830]: I0131 09:28:15.039456 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc","Type":"ContainerDied","Data":"eb6a70f771a74535413152a30b974fbe478b4e49492acadca73167f8f1e8b78e"}
Jan 31 09:28:15 crc kubenswrapper[4830]: I0131 09:28:15.099057 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m" podStartSLOduration=4.099022941 podStartE2EDuration="4.099022941s" podCreationTimestamp="2026-01-31 09:28:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:28:15.043950185 +0000 UTC m=+1639.537312637" watchObservedRunningTime="2026-01-31 09:28:15.099022941 +0000 UTC m=+1639.592385383"
Jan 31 09:28:15 crc kubenswrapper[4830]: I0131 09:28:15.736346 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7ft4j"
Jan 31 09:28:15 crc kubenswrapper[4830]: I0131 09:28:15.826134 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5adada53-61e1-406d-b9ac-0c004999b351-config-data\") pod \"5adada53-61e1-406d-b9ac-0c004999b351\" (UID: \"5adada53-61e1-406d-b9ac-0c004999b351\") "
Jan 31 09:28:15 crc kubenswrapper[4830]: I0131 09:28:15.826194 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5adada53-61e1-406d-b9ac-0c004999b351-scripts\") pod \"5adada53-61e1-406d-b9ac-0c004999b351\" (UID: \"5adada53-61e1-406d-b9ac-0c004999b351\") "
Jan 31 09:28:15 crc kubenswrapper[4830]: I0131 09:28:15.826517 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5adada53-61e1-406d-b9ac-0c004999b351-combined-ca-bundle\") pod \"5adada53-61e1-406d-b9ac-0c004999b351\" (UID: \"5adada53-61e1-406d-b9ac-0c004999b351\") "
Jan 31 09:28:15 crc kubenswrapper[4830]: I0131 09:28:15.826612 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vk2n\" (UniqueName: \"kubernetes.io/projected/5adada53-61e1-406d-b9ac-0c004999b351-kube-api-access-9vk2n\") pod \"5adada53-61e1-406d-b9ac-0c004999b351\" (UID: \"5adada53-61e1-406d-b9ac-0c004999b351\") "
Jan 31 09:28:15 crc kubenswrapper[4830]: I0131 09:28:15.835309 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5adada53-61e1-406d-b9ac-0c004999b351-kube-api-access-9vk2n" (OuterVolumeSpecName: "kube-api-access-9vk2n") pod "5adada53-61e1-406d-b9ac-0c004999b351" (UID: "5adada53-61e1-406d-b9ac-0c004999b351"). InnerVolumeSpecName "kube-api-access-9vk2n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:28:15 crc kubenswrapper[4830]: I0131 09:28:15.857281 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5adada53-61e1-406d-b9ac-0c004999b351-scripts" (OuterVolumeSpecName: "scripts") pod "5adada53-61e1-406d-b9ac-0c004999b351" (UID: "5adada53-61e1-406d-b9ac-0c004999b351"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:28:15 crc kubenswrapper[4830]: I0131 09:28:15.864859 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5adada53-61e1-406d-b9ac-0c004999b351-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5adada53-61e1-406d-b9ac-0c004999b351" (UID: "5adada53-61e1-406d-b9ac-0c004999b351"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:28:15 crc kubenswrapper[4830]: I0131 09:28:15.867903 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5adada53-61e1-406d-b9ac-0c004999b351-config-data" (OuterVolumeSpecName: "config-data") pod "5adada53-61e1-406d-b9ac-0c004999b351" (UID: "5adada53-61e1-406d-b9ac-0c004999b351"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:28:15 crc kubenswrapper[4830]: I0131 09:28:15.930525 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5adada53-61e1-406d-b9ac-0c004999b351-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 09:28:15 crc kubenswrapper[4830]: I0131 09:28:15.930579 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vk2n\" (UniqueName: \"kubernetes.io/projected/5adada53-61e1-406d-b9ac-0c004999b351-kube-api-access-9vk2n\") on node \"crc\" DevicePath \"\""
Jan 31 09:28:15 crc kubenswrapper[4830]: I0131 09:28:15.930597 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5adada53-61e1-406d-b9ac-0c004999b351-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 09:28:15 crc kubenswrapper[4830]: I0131 09:28:15.930612 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5adada53-61e1-406d-b9ac-0c004999b351-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 09:28:16 crc kubenswrapper[4830]: I0131 09:28:16.084081 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7ft4j" event={"ID":"5adada53-61e1-406d-b9ac-0c004999b351","Type":"ContainerDied","Data":"ec4dcab692d1b6b4df7c91ee12ec0736dc220f2be1e35b9fe2b5bdb0bd7629bf"}
Jan 31 09:28:16 crc kubenswrapper[4830]: I0131 09:28:16.084142 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec4dcab692d1b6b4df7c91ee12ec0736dc220f2be1e35b9fe2b5bdb0bd7629bf"
Jan 31 09:28:16 crc kubenswrapper[4830]: I0131 09:28:16.084279 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7ft4j"
Jan 31 09:28:16 crc kubenswrapper[4830]: I0131 09:28:16.096313 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5857f9d0-2512-4a0b-bdf9-e236d864e814","Type":"ContainerStarted","Data":"5491c5f866d8022b80367ec9c8b9f4400d43f75d83479c50d03d437b37abbf15"}
Jan 31 09:28:16 crc kubenswrapper[4830]: I0131 09:28:16.098128 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5857f9d0-2512-4a0b-bdf9-e236d864e814" containerName="sg-core" containerID="cri-o://19eb435786fcb9f0783ed5526263f7badc634eb78acbe7fc09f33de7d4f9c636" gracePeriod=30
Jan 31 09:28:16 crc kubenswrapper[4830]: I0131 09:28:16.099710 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5857f9d0-2512-4a0b-bdf9-e236d864e814" containerName="ceilometer-notification-agent" containerID="cri-o://2f6403aa6c93a3710ea73f76bd0e339077242676de2969aed20e7ba76b4ab985" gracePeriod=30
Jan 31 09:28:16 crc kubenswrapper[4830]: I0131 09:28:16.099875 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5857f9d0-2512-4a0b-bdf9-e236d864e814" containerName="proxy-httpd" containerID="cri-o://5491c5f866d8022b80367ec9c8b9f4400d43f75d83479c50d03d437b37abbf15" gracePeriod=30
Jan 31 09:28:16 crc kubenswrapper[4830]: I0131 09:28:16.101423 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5857f9d0-2512-4a0b-bdf9-e236d864e814" containerName="ceilometer-central-agent" containerID="cri-o://9fce759799afb7fc35f253cbb28f800b653f9d711262b2ccdc49247cc45d536c" gracePeriod=30
Jan 31 09:28:16 crc kubenswrapper[4830]: I0131 09:28:16.149560 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.509147005 podStartE2EDuration="14.149535545s" podCreationTimestamp="2026-01-31 09:28:02 +0000 UTC" firstStartedPulling="2026-01-31 09:28:04.127818233 +0000 UTC m=+1628.621180675" lastFinishedPulling="2026-01-31 09:28:14.768206773 +0000 UTC m=+1639.261569215" observedRunningTime="2026-01-31 09:28:16.133690582 +0000 UTC m=+1640.627053034" watchObservedRunningTime="2026-01-31 09:28:16.149535545 +0000 UTC m=+1640.642897987"
Jan 31 09:28:16 crc kubenswrapper[4830]: I0131 09:28:16.325497 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 31 09:28:16 crc kubenswrapper[4830]: I0131 09:28:16.325865 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="deed2020-a242-40ee-af68-bb3a30f6acf3" containerName="nova-scheduler-scheduler" containerID="cri-o://bdaf28f5f273d04f1641c96500b95d21a4cb8cafbba934be5cac7966855ca0b0" gracePeriod=30
Jan 31 09:28:16 crc kubenswrapper[4830]: I0131 09:28:16.351210 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 31 09:28:16 crc kubenswrapper[4830]: I0131 09:28:16.351514 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="aa9e4ee2-a265-4397-a7ff-42e9ab868237" containerName="nova-metadata-log" containerID="cri-o://87f693d26ac57ee0aa982394e5dda671c8d87b55255cd5b2132721a7ad69c513" gracePeriod=30
Jan 31 09:28:16 crc kubenswrapper[4830]: I0131 09:28:16.352531 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="aa9e4ee2-a265-4397-a7ff-42e9ab868237" containerName="nova-metadata-metadata" containerID="cri-o://017b2c6904fbdcbd8278344456c881534bb24a5ac4c8e0a05b69670b9bebf711" gracePeriod=30
Jan 31 09:28:17 crc kubenswrapper[4830]: I0131 09:28:17.110480 4830 generic.go:334] "Generic (PLEG): container finished" podID="aa9e4ee2-a265-4397-a7ff-42e9ab868237" containerID="87f693d26ac57ee0aa982394e5dda671c8d87b55255cd5b2132721a7ad69c513" exitCode=143
Jan 31 09:28:17 crc kubenswrapper[4830]: I0131 09:28:17.110560 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa9e4ee2-a265-4397-a7ff-42e9ab868237","Type":"ContainerDied","Data":"87f693d26ac57ee0aa982394e5dda671c8d87b55255cd5b2132721a7ad69c513"}
Jan 31 09:28:17 crc kubenswrapper[4830]: I0131 09:28:17.115211 4830 generic.go:334] "Generic (PLEG): container finished" podID="5857f9d0-2512-4a0b-bdf9-e236d864e814" containerID="5491c5f866d8022b80367ec9c8b9f4400d43f75d83479c50d03d437b37abbf15" exitCode=0
Jan 31 09:28:17 crc kubenswrapper[4830]: I0131 09:28:17.115255 4830 generic.go:334] "Generic (PLEG): container finished" podID="5857f9d0-2512-4a0b-bdf9-e236d864e814" containerID="19eb435786fcb9f0783ed5526263f7badc634eb78acbe7fc09f33de7d4f9c636" exitCode=2
Jan 31 09:28:17 crc kubenswrapper[4830]: I0131 09:28:17.115270 4830 generic.go:334] "Generic (PLEG): container finished" podID="5857f9d0-2512-4a0b-bdf9-e236d864e814" containerID="2f6403aa6c93a3710ea73f76bd0e339077242676de2969aed20e7ba76b4ab985" exitCode=0
Jan 31 09:28:17 crc kubenswrapper[4830]: I0131 09:28:17.115285 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5857f9d0-2512-4a0b-bdf9-e236d864e814","Type":"ContainerDied","Data":"5491c5f866d8022b80367ec9c8b9f4400d43f75d83479c50d03d437b37abbf15"}
Jan 31 09:28:17 crc kubenswrapper[4830]: I0131 09:28:17.115345 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5857f9d0-2512-4a0b-bdf9-e236d864e814","Type":"ContainerDied","Data":"19eb435786fcb9f0783ed5526263f7badc634eb78acbe7fc09f33de7d4f9c636"}
Jan 31 09:28:17 crc kubenswrapper[4830]: I0131 09:28:17.115357 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5857f9d0-2512-4a0b-bdf9-e236d864e814","Type":"ContainerDied","Data":"2f6403aa6c93a3710ea73f76bd0e339077242676de2969aed20e7ba76b4ab985"}
Jan 31 09:28:18 crc kubenswrapper[4830]: I0131 09:28:18.957327 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.101613 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc-config-data\") pod \"8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc\" (UID: \"8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc\") "
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.102411 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8nps\" (UniqueName: \"kubernetes.io/projected/8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc-kube-api-access-s8nps\") pod \"8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc\" (UID: \"8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc\") "
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.102459 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc-logs\") pod \"8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc\" (UID: \"8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc\") "
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.102598 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc-combined-ca-bundle\") pod \"8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc\" (UID: \"8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc\") "
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.108180 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc-logs" (OuterVolumeSpecName: "logs") pod "8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc" (UID: "8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.122355 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc-kube-api-access-s8nps" (OuterVolumeSpecName: "kube-api-access-s8nps") pod "8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc" (UID: "8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc"). InnerVolumeSpecName "kube-api-access-s8nps". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.178043 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc-config-data" (OuterVolumeSpecName: "config-data") pod "8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc" (UID: "8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.183432 4830 generic.go:334] "Generic (PLEG): container finished" podID="8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc" containerID="fc8daedadd2b8254e191210fb2ee51454a3361d7e82dcf3c135e0fd60fcde1f7" exitCode=0
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.183496 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc","Type":"ContainerDied","Data":"fc8daedadd2b8254e191210fb2ee51454a3361d7e82dcf3c135e0fd60fcde1f7"}
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.183536 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc","Type":"ContainerDied","Data":"946d8dc2dd07d6e050a664c06a0de75e9f2ee2bfdc22dd26e89daae8163a6932"}
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.183560 4830 scope.go:117] "RemoveContainer" containerID="fc8daedadd2b8254e191210fb2ee51454a3361d7e82dcf3c135e0fd60fcde1f7"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.189218 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc" (UID: "8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.192624 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.211354 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.211381 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.211417 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8nps\" (UniqueName: \"kubernetes.io/projected/8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc-kube-api-access-s8nps\") on node \"crc\" DevicePath \"\""
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.211429 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc-logs\") on node \"crc\" DevicePath \"\""
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.369164 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.396664 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.418063 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 31 09:28:19 crc kubenswrapper[4830]: E0131 09:28:19.419683 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a16d5c-bea7-4cab-8c88-206e4c5c901d" containerName="registry-server"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.419773 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a16d5c-bea7-4cab-8c88-206e4c5c901d" containerName="registry-server"
Jan 31 09:28:19 crc kubenswrapper[4830]: E0131 09:28:19.419833 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5adada53-61e1-406d-b9ac-0c004999b351" containerName="nova-manage"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.419844 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="5adada53-61e1-406d-b9ac-0c004999b351" containerName="nova-manage"
Jan 31 09:28:19 crc kubenswrapper[4830]: E0131 09:28:19.419867 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a16d5c-bea7-4cab-8c88-206e4c5c901d" containerName="extract-content"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.419874 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a16d5c-bea7-4cab-8c88-206e4c5c901d" containerName="extract-content"
Jan 31 09:28:19 crc kubenswrapper[4830]: E0131 09:28:19.419891 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a16d5c-bea7-4cab-8c88-206e4c5c901d" containerName="extract-utilities"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.419900 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a16d5c-bea7-4cab-8c88-206e4c5c901d" containerName="extract-utilities"
Jan 31 09:28:19 crc kubenswrapper[4830]: E0131 09:28:19.419953 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc" containerName="nova-api-log"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.419960 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc" containerName="nova-api-log"
Jan 31 09:28:19 crc kubenswrapper[4830]: E0131 09:28:19.419979 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc" containerName="nova-api-api"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.419985 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc" containerName="nova-api-api"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.420528 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="01a16d5c-bea7-4cab-8c88-206e4c5c901d" containerName="registry-server"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.420582 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="5adada53-61e1-406d-b9ac-0c004999b351" containerName="nova-manage"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.420611 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc" containerName="nova-api-api"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.420637 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc" containerName="nova-api-log"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.422984 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.449172 4830 scope.go:117] "RemoveContainer" containerID="eb6a70f771a74535413152a30b974fbe478b4e49492acadca73167f8f1e8b78e"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.450258 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.450515 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.450891 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.470464 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.475984 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b60a826-4072-45c8-91c8-469a728a68ae-public-tls-certs\") pod \"nova-api-0\" (UID: \"9b60a826-4072-45c8-91c8-469a728a68ae\") " pod="openstack/nova-api-0"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.476072 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gklxx\" (UniqueName: \"kubernetes.io/projected/9b60a826-4072-45c8-91c8-469a728a68ae-kube-api-access-gklxx\") pod \"nova-api-0\" (UID: \"9b60a826-4072-45c8-91c8-469a728a68ae\") " pod="openstack/nova-api-0"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.477058 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b60a826-4072-45c8-91c8-469a728a68ae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b60a826-4072-45c8-91c8-469a728a68ae\") " pod="openstack/nova-api-0"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.477146 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b60a826-4072-45c8-91c8-469a728a68ae-config-data\") pod \"nova-api-0\" (UID: \"9b60a826-4072-45c8-91c8-469a728a68ae\") " pod="openstack/nova-api-0"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.477298 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b60a826-4072-45c8-91c8-469a728a68ae-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9b60a826-4072-45c8-91c8-469a728a68ae\") " pod="openstack/nova-api-0"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.477504 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b60a826-4072-45c8-91c8-469a728a68ae-logs\") pod \"nova-api-0\" (UID: \"9b60a826-4072-45c8-91c8-469a728a68ae\") " pod="openstack/nova-api-0"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.550059 4830 scope.go:117] "RemoveContainer" containerID="fc8daedadd2b8254e191210fb2ee51454a3361d7e82dcf3c135e0fd60fcde1f7"
Jan 31 09:28:19 crc kubenswrapper[4830]: E0131 09:28:19.553937 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc8daedadd2b8254e191210fb2ee51454a3361d7e82dcf3c135e0fd60fcde1f7\": container with ID starting with fc8daedadd2b8254e191210fb2ee51454a3361d7e82dcf3c135e0fd60fcde1f7 not found: ID does not exist" containerID="fc8daedadd2b8254e191210fb2ee51454a3361d7e82dcf3c135e0fd60fcde1f7"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.554014 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc8daedadd2b8254e191210fb2ee51454a3361d7e82dcf3c135e0fd60fcde1f7"} err="failed to get container status \"fc8daedadd2b8254e191210fb2ee51454a3361d7e82dcf3c135e0fd60fcde1f7\": rpc error: code = NotFound desc = could not find container \"fc8daedadd2b8254e191210fb2ee51454a3361d7e82dcf3c135e0fd60fcde1f7\": container with ID starting with fc8daedadd2b8254e191210fb2ee51454a3361d7e82dcf3c135e0fd60fcde1f7 not found: ID does not exist"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.554053 4830 scope.go:117] "RemoveContainer" containerID="eb6a70f771a74535413152a30b974fbe478b4e49492acadca73167f8f1e8b78e"
Jan 31 09:28:19 crc kubenswrapper[4830]: E0131 09:28:19.559507 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb6a70f771a74535413152a30b974fbe478b4e49492acadca73167f8f1e8b78e\": container with ID starting with eb6a70f771a74535413152a30b974fbe478b4e49492acadca73167f8f1e8b78e not found: ID does not exist" containerID="eb6a70f771a74535413152a30b974fbe478b4e49492acadca73167f8f1e8b78e"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.559596 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb6a70f771a74535413152a30b974fbe478b4e49492acadca73167f8f1e8b78e"} err="failed to get container status \"eb6a70f771a74535413152a30b974fbe478b4e49492acadca73167f8f1e8b78e\": rpc error: code = NotFound desc = could not find container \"eb6a70f771a74535413152a30b974fbe478b4e49492acadca73167f8f1e8b78e\": container with ID starting with eb6a70f771a74535413152a30b974fbe478b4e49492acadca73167f8f1e8b78e not found: ID does not exist"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.589811 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b60a826-4072-45c8-91c8-469a728a68ae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b60a826-4072-45c8-91c8-469a728a68ae\") " pod="openstack/nova-api-0"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.589890 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b60a826-4072-45c8-91c8-469a728a68ae-config-data\") pod \"nova-api-0\" (UID: \"9b60a826-4072-45c8-91c8-469a728a68ae\") " pod="openstack/nova-api-0"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.589948 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b60a826-4072-45c8-91c8-469a728a68ae-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9b60a826-4072-45c8-91c8-469a728a68ae\") " pod="openstack/nova-api-0"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.590026 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b60a826-4072-45c8-91c8-469a728a68ae-logs\") pod \"nova-api-0\" (UID: \"9b60a826-4072-45c8-91c8-469a728a68ae\") " pod="openstack/nova-api-0"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.590237 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b60a826-4072-45c8-91c8-469a728a68ae-public-tls-certs\") pod \"nova-api-0\" (UID: \"9b60a826-4072-45c8-91c8-469a728a68ae\") " pod="openstack/nova-api-0"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.590301 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gklxx\" (UniqueName: \"kubernetes.io/projected/9b60a826-4072-45c8-91c8-469a728a68ae-kube-api-access-gklxx\") pod \"nova-api-0\" (UID: \"9b60a826-4072-45c8-91c8-469a728a68ae\") " pod="openstack/nova-api-0"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.597322 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b60a826-4072-45c8-91c8-469a728a68ae-logs\") pod \"nova-api-0\" (UID: \"9b60a826-4072-45c8-91c8-469a728a68ae\") " pod="openstack/nova-api-0"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.597480 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b60a826-4072-45c8-91c8-469a728a68ae-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9b60a826-4072-45c8-91c8-469a728a68ae\") " pod="openstack/nova-api-0"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.597978 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b60a826-4072-45c8-91c8-469a728a68ae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b60a826-4072-45c8-91c8-469a728a68ae\") " pod="openstack/nova-api-0"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.599328 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b60a826-4072-45c8-91c8-469a728a68ae-config-data\") pod \"nova-api-0\" (UID: \"9b60a826-4072-45c8-91c8-469a728a68ae\") " pod="openstack/nova-api-0"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.604477 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b60a826-4072-45c8-91c8-469a728a68ae-public-tls-certs\") pod \"nova-api-0\" (UID: \"9b60a826-4072-45c8-91c8-469a728a68ae\") " pod="openstack/nova-api-0"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.612863 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gklxx\" (UniqueName: \"kubernetes.io/projected/9b60a826-4072-45c8-91c8-469a728a68ae-kube-api-access-gklxx\") pod \"nova-api-0\" (UID: \"9b60a826-4072-45c8-91c8-469a728a68ae\") " pod="openstack/nova-api-0"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.797640 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.908665 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="aa9e4ee2-a265-4397-a7ff-42e9ab868237" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.251:8775/\": dial tcp 10.217.0.251:8775: connect: connection refused"
Jan 31 09:28:19 crc kubenswrapper[4830]: I0131 09:28:19.908695 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="aa9e4ee2-a265-4397-a7ff-42e9ab868237" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.251:8775/\": dial tcp 10.217.0.251:8775: connect: connection refused"
Jan 31 09:28:20 crc kubenswrapper[4830]: I0131 09:28:20.208454 4830 generic.go:334] "Generic (PLEG): container finished" podID="aa9e4ee2-a265-4397-a7ff-42e9ab868237" containerID="017b2c6904fbdcbd8278344456c881534bb24a5ac4c8e0a05b69670b9bebf711" exitCode=0
Jan 31 09:28:20 crc kubenswrapper[4830]: I0131 09:28:20.208533 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa9e4ee2-a265-4397-a7ff-42e9ab868237","Type":"ContainerDied","Data":"017b2c6904fbdcbd8278344456c881534bb24a5ac4c8e0a05b69670b9bebf711"}
Jan 31 09:28:20 crc kubenswrapper[4830]: I0131 09:28:20.274530 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc" path="/var/lib/kubelet/pods/8dc5a196-8cf1-4387-8c40-7c9b5b1b55fc/volumes"
Jan 31 09:28:20 crc kubenswrapper[4830]: I0131 09:28:20.343580 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 31 09:28:20 crc kubenswrapper[4830]: E0131 09:28:20.665140 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bdaf28f5f273d04f1641c96500b95d21a4cb8cafbba934be5cac7966855ca0b0 is running failed: container process not found" containerID="bdaf28f5f273d04f1641c96500b95d21a4cb8cafbba934be5cac7966855ca0b0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 31 09:28:20 crc kubenswrapper[4830]: E0131 09:28:20.668887 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bdaf28f5f273d04f1641c96500b95d21a4cb8cafbba934be5cac7966855ca0b0 is running failed: container process not found" containerID="bdaf28f5f273d04f1641c96500b95d21a4cb8cafbba934be5cac7966855ca0b0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 31 09:28:20 crc kubenswrapper[4830]: E0131 09:28:20.670138 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bdaf28f5f273d04f1641c96500b95d21a4cb8cafbba934be5cac7966855ca0b0 is running failed: container process not found" containerID="bdaf28f5f273d04f1641c96500b95d21a4cb8cafbba934be5cac7966855ca0b0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 31 09:28:20 crc kubenswrapper[4830]: E0131 09:28:20.670189 4830 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bdaf28f5f273d04f1641c96500b95d21a4cb8cafbba934be5cac7966855ca0b0 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="deed2020-a242-40ee-af68-bb3a30f6acf3" containerName="nova-scheduler-scheduler"
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.004656 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.012196 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.038106 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa9e4ee2-a265-4397-a7ff-42e9ab868237-config-data\") pod \"aa9e4ee2-a265-4397-a7ff-42e9ab868237\" (UID: \"aa9e4ee2-a265-4397-a7ff-42e9ab868237\") "
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.038363 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa9e4ee2-a265-4397-a7ff-42e9ab868237-nova-metadata-tls-certs\") pod \"aa9e4ee2-a265-4397-a7ff-42e9ab868237\" (UID: \"aa9e4ee2-a265-4397-a7ff-42e9ab868237\") "
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.038582 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa9e4ee2-a265-4397-a7ff-42e9ab868237-logs\") pod \"aa9e4ee2-a265-4397-a7ff-42e9ab868237\" (UID: \"aa9e4ee2-a265-4397-a7ff-42e9ab868237\") "
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.038656 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw2z5\" (UniqueName: \"kubernetes.io/projected/aa9e4ee2-a265-4397-a7ff-42e9ab868237-kube-api-access-sw2z5\") pod \"aa9e4ee2-a265-4397-a7ff-42e9ab868237\" (UID: \"aa9e4ee2-a265-4397-a7ff-42e9ab868237\") "
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.038740 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa9e4ee2-a265-4397-a7ff-42e9ab868237-combined-ca-bundle\") pod \"aa9e4ee2-a265-4397-a7ff-42e9ab868237\" (UID: \"aa9e4ee2-a265-4397-a7ff-42e9ab868237\") "
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.052071 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa9e4ee2-a265-4397-a7ff-42e9ab868237-logs" (OuterVolumeSpecName: "logs") pod "aa9e4ee2-a265-4397-a7ff-42e9ab868237" (UID: "aa9e4ee2-a265-4397-a7ff-42e9ab868237"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.062376 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa9e4ee2-a265-4397-a7ff-42e9ab868237-kube-api-access-sw2z5" (OuterVolumeSpecName: "kube-api-access-sw2z5") pod "aa9e4ee2-a265-4397-a7ff-42e9ab868237" (UID: "aa9e4ee2-a265-4397-a7ff-42e9ab868237"). InnerVolumeSpecName "kube-api-access-sw2z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.141175 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrkv7\" (UniqueName: \"kubernetes.io/projected/deed2020-a242-40ee-af68-bb3a30f6acf3-kube-api-access-rrkv7\") pod \"deed2020-a242-40ee-af68-bb3a30f6acf3\" (UID: \"deed2020-a242-40ee-af68-bb3a30f6acf3\") "
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.141478 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deed2020-a242-40ee-af68-bb3a30f6acf3-config-data\") pod \"deed2020-a242-40ee-af68-bb3a30f6acf3\" (UID: \"deed2020-a242-40ee-af68-bb3a30f6acf3\") "
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.141567 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deed2020-a242-40ee-af68-bb3a30f6acf3-combined-ca-bundle\") pod \"deed2020-a242-40ee-af68-bb3a30f6acf3\" (UID: \"deed2020-a242-40ee-af68-bb3a30f6acf3\") "
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.146318 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa9e4ee2-a265-4397-a7ff-42e9ab868237-logs\") on node \"crc\" DevicePath \"\""
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.146359 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw2z5\" (UniqueName: \"kubernetes.io/projected/aa9e4ee2-a265-4397-a7ff-42e9ab868237-kube-api-access-sw2z5\") on node \"crc\" DevicePath \"\""
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.147158 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa9e4ee2-a265-4397-a7ff-42e9ab868237-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa9e4ee2-a265-4397-a7ff-42e9ab868237" (UID: "aa9e4ee2-a265-4397-a7ff-42e9ab868237"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.151281 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deed2020-a242-40ee-af68-bb3a30f6acf3-kube-api-access-rrkv7" (OuterVolumeSpecName: "kube-api-access-rrkv7") pod "deed2020-a242-40ee-af68-bb3a30f6acf3" (UID: "deed2020-a242-40ee-af68-bb3a30f6acf3"). InnerVolumeSpecName "kube-api-access-rrkv7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.153865 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa9e4ee2-a265-4397-a7ff-42e9ab868237-config-data" (OuterVolumeSpecName: "config-data") pod "aa9e4ee2-a265-4397-a7ff-42e9ab868237" (UID: "aa9e4ee2-a265-4397-a7ff-42e9ab868237"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.221811 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deed2020-a242-40ee-af68-bb3a30f6acf3-config-data" (OuterVolumeSpecName: "config-data") pod "deed2020-a242-40ee-af68-bb3a30f6acf3" (UID: "deed2020-a242-40ee-af68-bb3a30f6acf3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.229335 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa9e4ee2-a265-4397-a7ff-42e9ab868237-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "aa9e4ee2-a265-4397-a7ff-42e9ab868237" (UID: "aa9e4ee2-a265-4397-a7ff-42e9ab868237"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.249745 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b60a826-4072-45c8-91c8-469a728a68ae","Type":"ContainerStarted","Data":"1a75145129b18855d2ef4d7f3fc1085e631db7bb1918d417e76ef95a0d3e47ed"}
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.249814 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b60a826-4072-45c8-91c8-469a728a68ae","Type":"ContainerStarted","Data":"56ec6659822e1dee3c84785e2667b13ac6814f90b58f4b6953e9d3f8bb200531"}
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.250434 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa9e4ee2-a265-4397-a7ff-42e9ab868237-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.250463 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrkv7\" (UniqueName: \"kubernetes.io/projected/deed2020-a242-40ee-af68-bb3a30f6acf3-kube-api-access-rrkv7\") on node \"crc\" DevicePath \"\""
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.250476 4830 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa9e4ee2-a265-4397-a7ff-42e9ab868237-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.250488 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deed2020-a242-40ee-af68-bb3a30f6acf3-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.250498 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa9e4ee2-a265-4397-a7ff-42e9ab868237-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.271157 4830 generic.go:334] "Generic (PLEG): container finished" podID="deed2020-a242-40ee-af68-bb3a30f6acf3" containerID="bdaf28f5f273d04f1641c96500b95d21a4cb8cafbba934be5cac7966855ca0b0" exitCode=0
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.271299 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"deed2020-a242-40ee-af68-bb3a30f6acf3","Type":"ContainerDied","Data":"bdaf28f5f273d04f1641c96500b95d21a4cb8cafbba934be5cac7966855ca0b0"}
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.271363 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"deed2020-a242-40ee-af68-bb3a30f6acf3","Type":"ContainerDied","Data":"9a443ab7bb93be6acc0b28f4cd471c7faf2c5e5f99f003b1e3d17c0d5912a299"}
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.271390 4830 scope.go:117] "RemoveContainer" containerID="bdaf28f5f273d04f1641c96500b95d21a4cb8cafbba934be5cac7966855ca0b0"
Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 
09:28:21.271603 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.279680 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deed2020-a242-40ee-af68-bb3a30f6acf3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "deed2020-a242-40ee-af68-bb3a30f6acf3" (UID: "deed2020-a242-40ee-af68-bb3a30f6acf3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.307601 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa9e4ee2-a265-4397-a7ff-42e9ab868237","Type":"ContainerDied","Data":"90a1fe2b8ece85c6268eb7faaac51353aa1947390daa0048f37c02ff3260c950"} Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.307757 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.349113 4830 scope.go:117] "RemoveContainer" containerID="bdaf28f5f273d04f1641c96500b95d21a4cb8cafbba934be5cac7966855ca0b0" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.355199 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deed2020-a242-40ee-af68-bb3a30f6acf3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:28:21 crc kubenswrapper[4830]: E0131 09:28:21.355442 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdaf28f5f273d04f1641c96500b95d21a4cb8cafbba934be5cac7966855ca0b0\": container with ID starting with bdaf28f5f273d04f1641c96500b95d21a4cb8cafbba934be5cac7966855ca0b0 not found: ID does not exist" containerID="bdaf28f5f273d04f1641c96500b95d21a4cb8cafbba934be5cac7966855ca0b0" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.355485 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdaf28f5f273d04f1641c96500b95d21a4cb8cafbba934be5cac7966855ca0b0"} err="failed to get container status \"bdaf28f5f273d04f1641c96500b95d21a4cb8cafbba934be5cac7966855ca0b0\": rpc error: code = NotFound desc = could not find container \"bdaf28f5f273d04f1641c96500b95d21a4cb8cafbba934be5cac7966855ca0b0\": container with ID starting with bdaf28f5f273d04f1641c96500b95d21a4cb8cafbba934be5cac7966855ca0b0 not found: ID does not exist" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.355519 4830 scope.go:117] "RemoveContainer" containerID="017b2c6904fbdcbd8278344456c881534bb24a5ac4c8e0a05b69670b9bebf711" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.390804 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.424044 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.439703 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 09:28:21 crc kubenswrapper[4830]: E0131 09:28:21.440346 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deed2020-a242-40ee-af68-bb3a30f6acf3" containerName="nova-scheduler-scheduler" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.440361 4830 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="deed2020-a242-40ee-af68-bb3a30f6acf3" containerName="nova-scheduler-scheduler" Jan 31 09:28:21 crc kubenswrapper[4830]: E0131 09:28:21.440372 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa9e4ee2-a265-4397-a7ff-42e9ab868237" containerName="nova-metadata-metadata" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.440378 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa9e4ee2-a265-4397-a7ff-42e9ab868237" containerName="nova-metadata-metadata" Jan 31 09:28:21 crc kubenswrapper[4830]: E0131 09:28:21.440437 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa9e4ee2-a265-4397-a7ff-42e9ab868237" containerName="nova-metadata-log" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.440445 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa9e4ee2-a265-4397-a7ff-42e9ab868237" containerName="nova-metadata-log" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.440662 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="deed2020-a242-40ee-af68-bb3a30f6acf3" containerName="nova-scheduler-scheduler" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.440680 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa9e4ee2-a265-4397-a7ff-42e9ab868237" containerName="nova-metadata-metadata" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.440695 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa9e4ee2-a265-4397-a7ff-42e9ab868237" containerName="nova-metadata-log" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.442188 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.446815 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.447509 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.459054 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.495355 4830 scope.go:117] "RemoveContainer" containerID="87f693d26ac57ee0aa982394e5dda671c8d87b55255cd5b2132721a7ad69c513" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.561903 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efaf79e1-d68e-4987-a73f-42a782fb9f6a-config-data\") pod \"nova-metadata-0\" (UID: \"efaf79e1-d68e-4987-a73f-42a782fb9f6a\") " pod="openstack/nova-metadata-0" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.562009 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkttg\" (UniqueName: \"kubernetes.io/projected/efaf79e1-d68e-4987-a73f-42a782fb9f6a-kube-api-access-wkttg\") pod \"nova-metadata-0\" (UID: \"efaf79e1-d68e-4987-a73f-42a782fb9f6a\") " pod="openstack/nova-metadata-0" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.562192 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efaf79e1-d68e-4987-a73f-42a782fb9f6a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"efaf79e1-d68e-4987-a73f-42a782fb9f6a\") " pod="openstack/nova-metadata-0" Jan 31 09:28:21 crc 
kubenswrapper[4830]: I0131 09:28:21.562261 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efaf79e1-d68e-4987-a73f-42a782fb9f6a-logs\") pod \"nova-metadata-0\" (UID: \"efaf79e1-d68e-4987-a73f-42a782fb9f6a\") " pod="openstack/nova-metadata-0" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.562304 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/efaf79e1-d68e-4987-a73f-42a782fb9f6a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"efaf79e1-d68e-4987-a73f-42a782fb9f6a\") " pod="openstack/nova-metadata-0" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.592973 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.689278 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efaf79e1-d68e-4987-a73f-42a782fb9f6a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"efaf79e1-d68e-4987-a73f-42a782fb9f6a\") " pod="openstack/nova-metadata-0" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.689481 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efaf79e1-d68e-4987-a73f-42a782fb9f6a-logs\") pod \"nova-metadata-0\" (UID: \"efaf79e1-d68e-4987-a73f-42a782fb9f6a\") " pod="openstack/nova-metadata-0" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.689573 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/efaf79e1-d68e-4987-a73f-42a782fb9f6a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"efaf79e1-d68e-4987-a73f-42a782fb9f6a\") " pod="openstack/nova-metadata-0" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.690862 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efaf79e1-d68e-4987-a73f-42a782fb9f6a-logs\") pod \"nova-metadata-0\" (UID: \"efaf79e1-d68e-4987-a73f-42a782fb9f6a\") " pod="openstack/nova-metadata-0" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.698798 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-vwp6p"] Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.699128 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" podUID="084e69fe-072f-4659-a28c-f0000f8c16fe" containerName="dnsmasq-dns" containerID="cri-o://91ad512ef1df892b20ea0fe4ddccb641038a02dec8474a6237285250f14d6cdd" gracePeriod=10 Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.700283 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efaf79e1-d68e-4987-a73f-42a782fb9f6a-config-data\") pod \"nova-metadata-0\" (UID: \"efaf79e1-d68e-4987-a73f-42a782fb9f6a\") " pod="openstack/nova-metadata-0" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.700430 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkttg\" (UniqueName: \"kubernetes.io/projected/efaf79e1-d68e-4987-a73f-42a782fb9f6a-kube-api-access-wkttg\") pod \"nova-metadata-0\" (UID: 
\"efaf79e1-d68e-4987-a73f-42a782fb9f6a\") " pod="openstack/nova-metadata-0" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.706195 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efaf79e1-d68e-4987-a73f-42a782fb9f6a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"efaf79e1-d68e-4987-a73f-42a782fb9f6a\") " pod="openstack/nova-metadata-0" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.707798 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/efaf79e1-d68e-4987-a73f-42a782fb9f6a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"efaf79e1-d68e-4987-a73f-42a782fb9f6a\") " pod="openstack/nova-metadata-0" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.722782 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efaf79e1-d68e-4987-a73f-42a782fb9f6a-config-data\") pod \"nova-metadata-0\" (UID: \"efaf79e1-d68e-4987-a73f-42a782fb9f6a\") " pod="openstack/nova-metadata-0" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.748568 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.759847 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkttg\" (UniqueName: \"kubernetes.io/projected/efaf79e1-d68e-4987-a73f-42a782fb9f6a-kube-api-access-wkttg\") pod \"nova-metadata-0\" (UID: \"efaf79e1-d68e-4987-a73f-42a782fb9f6a\") " pod="openstack/nova-metadata-0" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.768309 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.778883 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.837655 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.839513 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.859422 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 31 09:28:21 crc kubenswrapper[4830]: I0131 09:28:21.918117 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 09:28:22 crc kubenswrapper[4830]: I0131 09:28:22.029189 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlhsj\" (UniqueName: \"kubernetes.io/projected/65751981-c5c6-41a5-bf04-3ff6bee55188-kube-api-access-xlhsj\") pod \"nova-scheduler-0\" (UID: \"65751981-c5c6-41a5-bf04-3ff6bee55188\") " pod="openstack/nova-scheduler-0" Jan 31 09:28:22 crc kubenswrapper[4830]: I0131 09:28:22.029392 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65751981-c5c6-41a5-bf04-3ff6bee55188-config-data\") pod \"nova-scheduler-0\" (UID: \"65751981-c5c6-41a5-bf04-3ff6bee55188\") " pod="openstack/nova-scheduler-0" Jan 31 09:28:22 crc kubenswrapper[4830]: I0131 09:28:22.029618 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65751981-c5c6-41a5-bf04-3ff6bee55188-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"65751981-c5c6-41a5-bf04-3ff6bee55188\") " pod="openstack/nova-scheduler-0" Jan 31 09:28:22 crc kubenswrapper[4830]: I0131 09:28:22.155654 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlhsj\" (UniqueName: \"kubernetes.io/projected/65751981-c5c6-41a5-bf04-3ff6bee55188-kube-api-access-xlhsj\") pod \"nova-scheduler-0\" (UID: \"65751981-c5c6-41a5-bf04-3ff6bee55188\") " pod="openstack/nova-scheduler-0" Jan 31 09:28:22 crc kubenswrapper[4830]: I0131 09:28:22.155913 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65751981-c5c6-41a5-bf04-3ff6bee55188-config-data\") pod \"nova-scheduler-0\" (UID: \"65751981-c5c6-41a5-bf04-3ff6bee55188\") " pod="openstack/nova-scheduler-0" Jan 31 09:28:22 crc kubenswrapper[4830]: I0131 09:28:22.156224 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65751981-c5c6-41a5-bf04-3ff6bee55188-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"65751981-c5c6-41a5-bf04-3ff6bee55188\") " pod="openstack/nova-scheduler-0" Jan 31 09:28:22 crc kubenswrapper[4830]: I0131 09:28:22.214582 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65751981-c5c6-41a5-bf04-3ff6bee55188-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"65751981-c5c6-41a5-bf04-3ff6bee55188\") " pod="openstack/nova-scheduler-0" Jan 31 09:28:22 crc kubenswrapper[4830]: I0131 09:28:22.234922 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65751981-c5c6-41a5-bf04-3ff6bee55188-config-data\") pod \"nova-scheduler-0\" (UID: \"65751981-c5c6-41a5-bf04-3ff6bee55188\") " pod="openstack/nova-scheduler-0" Jan 31 09:28:22 crc kubenswrapper[4830]: I0131 09:28:22.284509 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlhsj\" (UniqueName: 
\"kubernetes.io/projected/65751981-c5c6-41a5-bf04-3ff6bee55188-kube-api-access-xlhsj\") pod \"nova-scheduler-0\" (UID: \"65751981-c5c6-41a5-bf04-3ff6bee55188\") " pod="openstack/nova-scheduler-0" Jan 31 09:28:22 crc kubenswrapper[4830]: I0131 09:28:22.292009 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa9e4ee2-a265-4397-a7ff-42e9ab868237" path="/var/lib/kubelet/pods/aa9e4ee2-a265-4397-a7ff-42e9ab868237/volumes" Jan 31 09:28:22 crc kubenswrapper[4830]: I0131 09:28:22.302203 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deed2020-a242-40ee-af68-bb3a30f6acf3" path="/var/lib/kubelet/pods/deed2020-a242-40ee-af68-bb3a30f6acf3/volumes" Jan 31 09:28:22 crc kubenswrapper[4830]: I0131 09:28:22.368306 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 09:28:22 crc kubenswrapper[4830]: I0131 09:28:22.435230 4830 generic.go:334] "Generic (PLEG): container finished" podID="084e69fe-072f-4659-a28c-f0000f8c16fe" containerID="91ad512ef1df892b20ea0fe4ddccb641038a02dec8474a6237285250f14d6cdd" exitCode=0 Jan 31 09:28:22 crc kubenswrapper[4830]: I0131 09:28:22.435832 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" event={"ID":"084e69fe-072f-4659-a28c-f0000f8c16fe","Type":"ContainerDied","Data":"91ad512ef1df892b20ea0fe4ddccb641038a02dec8474a6237285250f14d6cdd"} Jan 31 09:28:22 crc kubenswrapper[4830]: I0131 09:28:22.805078 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.139971 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.160986 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.213688 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-ovsdbserver-nb\") pod \"084e69fe-072f-4659-a28c-f0000f8c16fe\" (UID: \"084e69fe-072f-4659-a28c-f0000f8c16fe\") " Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.214790 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tfth\" (UniqueName: \"kubernetes.io/projected/084e69fe-072f-4659-a28c-f0000f8c16fe-kube-api-access-8tfth\") pod \"084e69fe-072f-4659-a28c-f0000f8c16fe\" (UID: \"084e69fe-072f-4659-a28c-f0000f8c16fe\") " Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.215040 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-dns-svc\") pod \"084e69fe-072f-4659-a28c-f0000f8c16fe\" (UID: \"084e69fe-072f-4659-a28c-f0000f8c16fe\") " Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.215212 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-config\") pod \"084e69fe-072f-4659-a28c-f0000f8c16fe\" (UID: \"084e69fe-072f-4659-a28c-f0000f8c16fe\") " Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.215577 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-ovsdbserver-sb\") pod \"084e69fe-072f-4659-a28c-f0000f8c16fe\" (UID: \"084e69fe-072f-4659-a28c-f0000f8c16fe\") " Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.215925 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-dns-swift-storage-0\") pod \"084e69fe-072f-4659-a28c-f0000f8c16fe\" (UID: \"084e69fe-072f-4659-a28c-f0000f8c16fe\") " Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.222470 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/084e69fe-072f-4659-a28c-f0000f8c16fe-kube-api-access-8tfth" (OuterVolumeSpecName: "kube-api-access-8tfth") pod "084e69fe-072f-4659-a28c-f0000f8c16fe" (UID: "084e69fe-072f-4659-a28c-f0000f8c16fe"). InnerVolumeSpecName "kube-api-access-8tfth". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.223685 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tfth\" (UniqueName: \"kubernetes.io/projected/084e69fe-072f-4659-a28c-f0000f8c16fe-kube-api-access-8tfth\") on node \"crc\" DevicePath \"\"" Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.353287 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "084e69fe-072f-4659-a28c-f0000f8c16fe" (UID: "084e69fe-072f-4659-a28c-f0000f8c16fe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.424547 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "084e69fe-072f-4659-a28c-f0000f8c16fe" (UID: "084e69fe-072f-4659-a28c-f0000f8c16fe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.438998 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.439043 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.446972 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "084e69fe-072f-4659-a28c-f0000f8c16fe" (UID: "084e69fe-072f-4659-a28c-f0000f8c16fe"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.452232 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"efaf79e1-d68e-4987-a73f-42a782fb9f6a","Type":"ContainerStarted","Data":"ddab17b7d6a229c1a26d358fbfca46230a181626ffa05565b9f4722f64c68495"} Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.458522 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" event={"ID":"084e69fe-072f-4659-a28c-f0000f8c16fe","Type":"ContainerDied","Data":"8ed5339e48754bc5bb00e3c42fbcdf4a42994c8dbb486269a9450af25cb94150"} Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.458554 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "084e69fe-072f-4659-a28c-f0000f8c16fe" (UID: "084e69fe-072f-4659-a28c-f0000f8c16fe"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.458604 4830 scope.go:117] "RemoveContainer" containerID="91ad512ef1df892b20ea0fe4ddccb641038a02dec8474a6237285250f14d6cdd" Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.458706 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbc4d444f-vwp6p" Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.465581 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b60a826-4072-45c8-91c8-469a728a68ae","Type":"ContainerStarted","Data":"c3e1e2560eddceb1c188697cea67215d80e9af757e52c4144ec68acac236fdc9"} Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.468847 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-config" (OuterVolumeSpecName: "config") pod "084e69fe-072f-4659-a28c-f0000f8c16fe" (UID: "084e69fe-072f-4659-a28c-f0000f8c16fe"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.470008 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"65751981-c5c6-41a5-bf04-3ff6bee55188","Type":"ContainerStarted","Data":"b9801b9fa00e5f6dfe729ed5ee008189e2208b34ad2936eb57b1d3958cba561f"} Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.492820 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.492792867 podStartE2EDuration="4.492792867s" podCreationTimestamp="2026-01-31 09:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:28:23.489888234 +0000 UTC m=+1647.983250686" watchObservedRunningTime="2026-01-31 09:28:23.492792867 +0000 UTC m=+1647.986155309" Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.512053 4830 scope.go:117] "RemoveContainer" containerID="ef2ae4c1e7da16890c368e1398a228726fd3a2b9b647052381591b4bbe509814" Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.545607 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.545650 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.545661 4830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/084e69fe-072f-4659-a28c-f0000f8c16fe-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.842122 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-vwp6p"] Jan 31 09:28:23 crc kubenswrapper[4830]: I0131 09:28:23.865602 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-vwp6p"] Jan 31 09:28:24 crc kubenswrapper[4830]: I0131 09:28:24.255910 4830 scope.go:117] "RemoveContainer" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0" Jan 31 09:28:24 crc kubenswrapper[4830]: E0131 09:28:24.256524 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:28:24 crc kubenswrapper[4830]: I0131 09:28:24.277145 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="084e69fe-072f-4659-a28c-f0000f8c16fe" path="/var/lib/kubelet/pods/084e69fe-072f-4659-a28c-f0000f8c16fe/volumes" Jan 31 09:28:24 crc kubenswrapper[4830]: I0131 09:28:24.501455 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"efaf79e1-d68e-4987-a73f-42a782fb9f6a","Type":"ContainerStarted","Data":"a5324742daeb8679f7493a4050d3d60a5d2f1e8aad34fadba72acf86a629c194"} Jan 31 09:28:24 crc kubenswrapper[4830]: I0131 09:28:24.501513 4830 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-metadata-0" event={"ID":"efaf79e1-d68e-4987-a73f-42a782fb9f6a","Type":"ContainerStarted","Data":"d33f81413c7bbd9f9468324240abf29537409b07c5e2b597f4e35d3970db43aa"} Jan 31 09:28:24 crc kubenswrapper[4830]: I0131 09:28:24.513307 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"65751981-c5c6-41a5-bf04-3ff6bee55188","Type":"ContainerStarted","Data":"3453d57292217f7447e28a2081769eb5f87273f929ca91ed8bba1c24a10c9d1d"} Jan 31 09:28:24 crc kubenswrapper[4830]: I0131 09:28:24.547242 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.547216892 podStartE2EDuration="3.547216892s" podCreationTimestamp="2026-01-31 09:28:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:28:24.533962923 +0000 UTC m=+1649.027325385" watchObservedRunningTime="2026-01-31 09:28:24.547216892 +0000 UTC m=+1649.040579334" Jan 31 09:28:24 crc kubenswrapper[4830]: I0131 09:28:24.560404 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.560352268 podStartE2EDuration="3.560352268s" podCreationTimestamp="2026-01-31 09:28:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:28:24.555956582 +0000 UTC m=+1649.049319024" watchObservedRunningTime="2026-01-31 09:28:24.560352268 +0000 UTC m=+1649.053714710" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.344364 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.407268 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5857f9d0-2512-4a0b-bdf9-e236d864e814-combined-ca-bundle\") pod \"5857f9d0-2512-4a0b-bdf9-e236d864e814\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.407512 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5857f9d0-2512-4a0b-bdf9-e236d864e814-config-data\") pod \"5857f9d0-2512-4a0b-bdf9-e236d864e814\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.407571 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5857f9d0-2512-4a0b-bdf9-e236d864e814-scripts\") pod \"5857f9d0-2512-4a0b-bdf9-e236d864e814\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.407844 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5857f9d0-2512-4a0b-bdf9-e236d864e814-sg-core-conf-yaml\") pod \"5857f9d0-2512-4a0b-bdf9-e236d864e814\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.407948 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5857f9d0-2512-4a0b-bdf9-e236d864e814-run-httpd\") pod \"5857f9d0-2512-4a0b-bdf9-e236d864e814\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " Jan 31 09:28:25 crc 
kubenswrapper[4830]: I0131 09:28:25.408018 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5857f9d0-2512-4a0b-bdf9-e236d864e814-log-httpd\") pod \"5857f9d0-2512-4a0b-bdf9-e236d864e814\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.408632 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5857f9d0-2512-4a0b-bdf9-e236d864e814-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5857f9d0-2512-4a0b-bdf9-e236d864e814" (UID: "5857f9d0-2512-4a0b-bdf9-e236d864e814"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.410298 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5857f9d0-2512-4a0b-bdf9-e236d864e814-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5857f9d0-2512-4a0b-bdf9-e236d864e814" (UID: "5857f9d0-2512-4a0b-bdf9-e236d864e814"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.410394 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tng26\" (UniqueName: \"kubernetes.io/projected/5857f9d0-2512-4a0b-bdf9-e236d864e814-kube-api-access-tng26\") pod \"5857f9d0-2512-4a0b-bdf9-e236d864e814\" (UID: \"5857f9d0-2512-4a0b-bdf9-e236d864e814\") " Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.412800 4830 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5857f9d0-2512-4a0b-bdf9-e236d864e814-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.412838 4830 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5857f9d0-2512-4a0b-bdf9-e236d864e814-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.443289 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5857f9d0-2512-4a0b-bdf9-e236d864e814-kube-api-access-tng26" (OuterVolumeSpecName: "kube-api-access-tng26") pod "5857f9d0-2512-4a0b-bdf9-e236d864e814" (UID: "5857f9d0-2512-4a0b-bdf9-e236d864e814"). InnerVolumeSpecName "kube-api-access-tng26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.445266 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5857f9d0-2512-4a0b-bdf9-e236d864e814-scripts" (OuterVolumeSpecName: "scripts") pod "5857f9d0-2512-4a0b-bdf9-e236d864e814" (UID: "5857f9d0-2512-4a0b-bdf9-e236d864e814"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.471996 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5857f9d0-2512-4a0b-bdf9-e236d864e814-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5857f9d0-2512-4a0b-bdf9-e236d864e814" (UID: "5857f9d0-2512-4a0b-bdf9-e236d864e814"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.528566 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tng26\" (UniqueName: \"kubernetes.io/projected/5857f9d0-2512-4a0b-bdf9-e236d864e814-kube-api-access-tng26\") on node \"crc\" DevicePath \"\"" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.528604 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5857f9d0-2512-4a0b-bdf9-e236d864e814-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.528615 4830 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5857f9d0-2512-4a0b-bdf9-e236d864e814-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.553667 4830 generic.go:334] "Generic (PLEG): container finished" podID="5857f9d0-2512-4a0b-bdf9-e236d864e814" containerID="9fce759799afb7fc35f253cbb28f800b653f9d711262b2ccdc49247cc45d536c" exitCode=0 Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.553806 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5857f9d0-2512-4a0b-bdf9-e236d864e814","Type":"ContainerDied","Data":"9fce759799afb7fc35f253cbb28f800b653f9d711262b2ccdc49247cc45d536c"} Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.553880 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5857f9d0-2512-4a0b-bdf9-e236d864e814","Type":"ContainerDied","Data":"a5176660c2c027f6ffb9631512c27edb99aae1fb47331f2cccf64041b62370cc"} Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.553902 4830 scope.go:117] "RemoveContainer" containerID="5491c5f866d8022b80367ec9c8b9f4400d43f75d83479c50d03d437b37abbf15" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.553821 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.595601 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5857f9d0-2512-4a0b-bdf9-e236d864e814-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5857f9d0-2512-4a0b-bdf9-e236d864e814" (UID: "5857f9d0-2512-4a0b-bdf9-e236d864e814"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.606410 4830 scope.go:117] "RemoveContainer" containerID="19eb435786fcb9f0783ed5526263f7badc634eb78acbe7fc09f33de7d4f9c636" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.632304 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5857f9d0-2512-4a0b-bdf9-e236d864e814-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.636193 4830 scope.go:117] "RemoveContainer" containerID="2f6403aa6c93a3710ea73f76bd0e339077242676de2969aed20e7ba76b4ab985" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.678574 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5857f9d0-2512-4a0b-bdf9-e236d864e814-config-data" (OuterVolumeSpecName: "config-data") pod "5857f9d0-2512-4a0b-bdf9-e236d864e814" (UID: "5857f9d0-2512-4a0b-bdf9-e236d864e814"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.735015 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5857f9d0-2512-4a0b-bdf9-e236d864e814-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.769683 4830 scope.go:117] "RemoveContainer" containerID="9fce759799afb7fc35f253cbb28f800b653f9d711262b2ccdc49247cc45d536c" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.796271 4830 scope.go:117] "RemoveContainer" containerID="5491c5f866d8022b80367ec9c8b9f4400d43f75d83479c50d03d437b37abbf15" Jan 31 09:28:25 crc kubenswrapper[4830]: E0131 09:28:25.797341 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5491c5f866d8022b80367ec9c8b9f4400d43f75d83479c50d03d437b37abbf15\": container with ID starting with 5491c5f866d8022b80367ec9c8b9f4400d43f75d83479c50d03d437b37abbf15 not found: ID does not exist" containerID="5491c5f866d8022b80367ec9c8b9f4400d43f75d83479c50d03d437b37abbf15" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.797381 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5491c5f866d8022b80367ec9c8b9f4400d43f75d83479c50d03d437b37abbf15"} err="failed to get container status \"5491c5f866d8022b80367ec9c8b9f4400d43f75d83479c50d03d437b37abbf15\": rpc error: code = NotFound desc = could not find container \"5491c5f866d8022b80367ec9c8b9f4400d43f75d83479c50d03d437b37abbf15\": container with ID starting with 5491c5f866d8022b80367ec9c8b9f4400d43f75d83479c50d03d437b37abbf15 not found: ID does not exist" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.797411 4830 scope.go:117] "RemoveContainer" containerID="19eb435786fcb9f0783ed5526263f7badc634eb78acbe7fc09f33de7d4f9c636" Jan 31 09:28:25 crc kubenswrapper[4830]: E0131 09:28:25.797666 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19eb435786fcb9f0783ed5526263f7badc634eb78acbe7fc09f33de7d4f9c636\": container with ID starting with 19eb435786fcb9f0783ed5526263f7badc634eb78acbe7fc09f33de7d4f9c636 not found: ID does not exist" containerID="19eb435786fcb9f0783ed5526263f7badc634eb78acbe7fc09f33de7d4f9c636" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.797700 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19eb435786fcb9f0783ed5526263f7badc634eb78acbe7fc09f33de7d4f9c636"} err="failed to get container status \"19eb435786fcb9f0783ed5526263f7badc634eb78acbe7fc09f33de7d4f9c636\": rpc error: code = NotFound desc = could not find container \"19eb435786fcb9f0783ed5526263f7badc634eb78acbe7fc09f33de7d4f9c636\": container with ID starting with 19eb435786fcb9f0783ed5526263f7badc634eb78acbe7fc09f33de7d4f9c636 not found: ID does not exist" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.797718 4830 scope.go:117] "RemoveContainer" containerID="2f6403aa6c93a3710ea73f76bd0e339077242676de2969aed20e7ba76b4ab985" Jan 31 09:28:25 crc kubenswrapper[4830]: E0131 09:28:25.797980 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f6403aa6c93a3710ea73f76bd0e339077242676de2969aed20e7ba76b4ab985\": container with ID starting with 2f6403aa6c93a3710ea73f76bd0e339077242676de2969aed20e7ba76b4ab985 not found: ID does not exist" 
containerID="2f6403aa6c93a3710ea73f76bd0e339077242676de2969aed20e7ba76b4ab985" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.798005 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f6403aa6c93a3710ea73f76bd0e339077242676de2969aed20e7ba76b4ab985"} err="failed to get container status \"2f6403aa6c93a3710ea73f76bd0e339077242676de2969aed20e7ba76b4ab985\": rpc error: code = NotFound desc = could not find container \"2f6403aa6c93a3710ea73f76bd0e339077242676de2969aed20e7ba76b4ab985\": container with ID starting with 2f6403aa6c93a3710ea73f76bd0e339077242676de2969aed20e7ba76b4ab985 not found: ID does not exist" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.798019 4830 scope.go:117] "RemoveContainer" containerID="9fce759799afb7fc35f253cbb28f800b653f9d711262b2ccdc49247cc45d536c" Jan 31 09:28:25 crc kubenswrapper[4830]: E0131 09:28:25.798237 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fce759799afb7fc35f253cbb28f800b653f9d711262b2ccdc49247cc45d536c\": container with ID starting with 9fce759799afb7fc35f253cbb28f800b653f9d711262b2ccdc49247cc45d536c not found: ID does not exist" containerID="9fce759799afb7fc35f253cbb28f800b653f9d711262b2ccdc49247cc45d536c" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.798273 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fce759799afb7fc35f253cbb28f800b653f9d711262b2ccdc49247cc45d536c"} err="failed to get container status \"9fce759799afb7fc35f253cbb28f800b653f9d711262b2ccdc49247cc45d536c\": rpc error: code = NotFound desc = could not find container \"9fce759799afb7fc35f253cbb28f800b653f9d711262b2ccdc49247cc45d536c\": container with ID starting with 9fce759799afb7fc35f253cbb28f800b653f9d711262b2ccdc49247cc45d536c not found: ID does not exist" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.898236 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.912485 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.944323 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:28:25 crc kubenswrapper[4830]: E0131 09:28:25.945076 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5857f9d0-2512-4a0b-bdf9-e236d864e814" containerName="sg-core" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.945106 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="5857f9d0-2512-4a0b-bdf9-e236d864e814" containerName="sg-core" Jan 31 09:28:25 crc kubenswrapper[4830]: E0131 09:28:25.945133 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5857f9d0-2512-4a0b-bdf9-e236d864e814" containerName="ceilometer-central-agent" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.945161 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="5857f9d0-2512-4a0b-bdf9-e236d864e814" containerName="ceilometer-central-agent" Jan 31 09:28:25 crc kubenswrapper[4830]: E0131 09:28:25.945188 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5857f9d0-2512-4a0b-bdf9-e236d864e814" containerName="proxy-httpd" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.945196 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="5857f9d0-2512-4a0b-bdf9-e236d864e814" containerName="proxy-httpd" Jan 31 09:28:25 crc 
kubenswrapper[4830]: E0131 09:28:25.945213 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5857f9d0-2512-4a0b-bdf9-e236d864e814" containerName="ceilometer-notification-agent" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.945219 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="5857f9d0-2512-4a0b-bdf9-e236d864e814" containerName="ceilometer-notification-agent" Jan 31 09:28:25 crc kubenswrapper[4830]: E0131 09:28:25.945235 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="084e69fe-072f-4659-a28c-f0000f8c16fe" containerName="init" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.945242 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="084e69fe-072f-4659-a28c-f0000f8c16fe" containerName="init" Jan 31 09:28:25 crc kubenswrapper[4830]: E0131 09:28:25.945269 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="084e69fe-072f-4659-a28c-f0000f8c16fe" containerName="dnsmasq-dns" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.945275 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="084e69fe-072f-4659-a28c-f0000f8c16fe" containerName="dnsmasq-dns" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.945505 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="084e69fe-072f-4659-a28c-f0000f8c16fe" containerName="dnsmasq-dns" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.945516 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="5857f9d0-2512-4a0b-bdf9-e236d864e814" containerName="proxy-httpd" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.945527 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="5857f9d0-2512-4a0b-bdf9-e236d864e814" containerName="ceilometer-notification-agent" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.945536 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="5857f9d0-2512-4a0b-bdf9-e236d864e814" containerName="ceilometer-central-agent" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.945547 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="5857f9d0-2512-4a0b-bdf9-e236d864e814" containerName="sg-core" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.948243 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.950699 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.950962 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 09:28:25 crc kubenswrapper[4830]: I0131 09:28:25.980325 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:28:26 crc kubenswrapper[4830]: I0131 09:28:26.044005 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ssnt\" (UniqueName: \"kubernetes.io/projected/505aadad-2257-4f3e-b18b-65c745756366-kube-api-access-2ssnt\") pod \"ceilometer-0\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " pod="openstack/ceilometer-0" Jan 31 09:28:26 crc kubenswrapper[4830]: I0131 09:28:26.044553 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/505aadad-2257-4f3e-b18b-65c745756366-scripts\") pod \"ceilometer-0\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " pod="openstack/ceilometer-0" Jan 31 09:28:26 crc kubenswrapper[4830]: I0131 09:28:26.044700 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/505aadad-2257-4f3e-b18b-65c745756366-run-httpd\") pod \"ceilometer-0\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " pod="openstack/ceilometer-0" Jan 31 09:28:26 crc kubenswrapper[4830]: I0131 09:28:26.044758 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505aadad-2257-4f3e-b18b-65c745756366-config-data\") pod \"ceilometer-0\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " pod="openstack/ceilometer-0" Jan 31 09:28:26 crc kubenswrapper[4830]: I0131 09:28:26.044858 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/505aadad-2257-4f3e-b18b-65c745756366-log-httpd\") pod \"ceilometer-0\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " pod="openstack/ceilometer-0" Jan 31 09:28:26 crc kubenswrapper[4830]: I0131 09:28:26.045111 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/505aadad-2257-4f3e-b18b-65c745756366-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " pod="openstack/ceilometer-0" Jan 31 09:28:26 crc kubenswrapper[4830]: I0131 09:28:26.045335 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505aadad-2257-4f3e-b18b-65c745756366-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " pod="openstack/ceilometer-0" Jan 31 09:28:26 crc kubenswrapper[4830]: I0131 09:28:26.149012 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/505aadad-2257-4f3e-b18b-65c745756366-run-httpd\") pod \"ceilometer-0\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " pod="openstack/ceilometer-0" Jan 31 09:28:26 crc kubenswrapper[4830]: I0131 09:28:26.149111 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505aadad-2257-4f3e-b18b-65c745756366-config-data\") pod \"ceilometer-0\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " pod="openstack/ceilometer-0" Jan 31 09:28:26 crc kubenswrapper[4830]: I0131 09:28:26.149166 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/505aadad-2257-4f3e-b18b-65c745756366-log-httpd\") pod \"ceilometer-0\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " pod="openstack/ceilometer-0" Jan 31 09:28:26 crc kubenswrapper[4830]: I0131 09:28:26.149274 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/505aadad-2257-4f3e-b18b-65c745756366-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " pod="openstack/ceilometer-0" Jan 31 09:28:26 crc kubenswrapper[4830]: I0131 09:28:26.149359 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505aadad-2257-4f3e-b18b-65c745756366-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " pod="openstack/ceilometer-0" Jan 31 09:28:26 crc kubenswrapper[4830]: I0131 09:28:26.149541 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ssnt\" (UniqueName: \"kubernetes.io/projected/505aadad-2257-4f3e-b18b-65c745756366-kube-api-access-2ssnt\") pod \"ceilometer-0\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " pod="openstack/ceilometer-0" Jan 31 09:28:26 crc kubenswrapper[4830]: I0131 09:28:26.149687 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/505aadad-2257-4f3e-b18b-65c745756366-scripts\") pod \"ceilometer-0\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " pod="openstack/ceilometer-0" Jan 31 09:28:26 crc kubenswrapper[4830]: I0131 09:28:26.158628 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/505aadad-2257-4f3e-b18b-65c745756366-scripts\") pod \"ceilometer-0\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " pod="openstack/ceilometer-0" Jan 31 09:28:26 crc kubenswrapper[4830]: I0131 09:28:26.159021 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/505aadad-2257-4f3e-b18b-65c745756366-run-httpd\") pod \"ceilometer-0\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " pod="openstack/ceilometer-0" Jan 31 09:28:26 crc kubenswrapper[4830]: I0131 09:28:26.168881 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/505aadad-2257-4f3e-b18b-65c745756366-log-httpd\") pod \"ceilometer-0\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " pod="openstack/ceilometer-0" Jan 31 09:28:26 crc kubenswrapper[4830]: I0131 09:28:26.173067 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505aadad-2257-4f3e-b18b-65c745756366-config-data\") pod \"ceilometer-0\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " pod="openstack/ceilometer-0" Jan 31 09:28:26 crc kubenswrapper[4830]: I0131 09:28:26.177427 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505aadad-2257-4f3e-b18b-65c745756366-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " pod="openstack/ceilometer-0" Jan 31 09:28:26 crc kubenswrapper[4830]: I0131 09:28:26.178465 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/505aadad-2257-4f3e-b18b-65c745756366-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " pod="openstack/ceilometer-0" Jan 31 09:28:26 crc kubenswrapper[4830]: I0131 09:28:26.224142 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ssnt\" (UniqueName: \"kubernetes.io/projected/505aadad-2257-4f3e-b18b-65c745756366-kube-api-access-2ssnt\") pod \"ceilometer-0\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " pod="openstack/ceilometer-0" Jan 31 09:28:26 crc kubenswrapper[4830]: I0131 09:28:26.307302 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 09:28:26 crc kubenswrapper[4830]: I0131 09:28:26.389388 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5857f9d0-2512-4a0b-bdf9-e236d864e814" path="/var/lib/kubelet/pods/5857f9d0-2512-4a0b-bdf9-e236d864e814/volumes" Jan 31 09:28:26 crc kubenswrapper[4830]: I0131 09:28:26.780715 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 09:28:26 crc kubenswrapper[4830]: I0131 09:28:26.781294 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 09:28:26 crc kubenswrapper[4830]: W0131 09:28:26.935127 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod505aadad_2257_4f3e_b18b_65c745756366.slice/crio-371e31a63618c57297a991dfb9e365a1edf022347f2d6a8e0c1ac2491b4570c8 WatchSource:0}: Error finding container 371e31a63618c57297a991dfb9e365a1edf022347f2d6a8e0c1ac2491b4570c8: Status 404 returned error can't find the container with id 371e31a63618c57297a991dfb9e365a1edf022347f2d6a8e0c1ac2491b4570c8 Jan 31 09:28:26 crc kubenswrapper[4830]: I0131 09:28:26.937381 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:28:27 crc kubenswrapper[4830]: I0131 09:28:27.370095 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 31 09:28:27 crc kubenswrapper[4830]: I0131 09:28:27.598314 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"505aadad-2257-4f3e-b18b-65c745756366","Type":"ContainerStarted","Data":"371e31a63618c57297a991dfb9e365a1edf022347f2d6a8e0c1ac2491b4570c8"} Jan 31 09:28:28 crc kubenswrapper[4830]: I0131 09:28:28.613395 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"505aadad-2257-4f3e-b18b-65c745756366","Type":"ContainerStarted","Data":"66638ac456c0d9d507163fef278353e8b1077e7e07db2543b0bf46d1da32b2cb"} Jan 31 09:28:29 crc kubenswrapper[4830]: I0131 09:28:29.637441 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"505aadad-2257-4f3e-b18b-65c745756366","Type":"ContainerStarted","Data":"5bbf8a672093cd86de6e6200898c0f74d4ea93751cc6a9aeb30b688acb107b4b"} Jan 31 09:28:29 crc kubenswrapper[4830]: I0131 09:28:29.798378 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-api-0" Jan 31 09:28:29 crc kubenswrapper[4830]: I0131 09:28:29.798771 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 09:28:30 crc kubenswrapper[4830]: I0131 09:28:30.817983 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9b60a826-4072-45c8-91c8-469a728a68ae" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.3:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 09:28:30 crc kubenswrapper[4830]: I0131 09:28:30.818046 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9b60a826-4072-45c8-91c8-469a728a68ae" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.3:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 09:28:31 crc kubenswrapper[4830]: I0131 09:28:31.667803 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"505aadad-2257-4f3e-b18b-65c745756366","Type":"ContainerStarted","Data":"d892504adf000057837fec5666c41d08864aa82f8ebe684dd065617e721dd9cb"} Jan 31 09:28:31 crc kubenswrapper[4830]: I0131 09:28:31.779885 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 09:28:31 crc kubenswrapper[4830]: I0131 09:28:31.780232 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 09:28:32 crc kubenswrapper[4830]: I0131 09:28:32.369917 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 31 09:28:32 crc kubenswrapper[4830]: I0131 09:28:32.407325 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 31 09:28:32 crc kubenswrapper[4830]: I0131 09:28:32.716011 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 31 09:28:32 crc kubenswrapper[4830]: I0131 09:28:32.795009 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="efaf79e1-d68e-4987-a73f-42a782fb9f6a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.4:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 09:28:32 crc kubenswrapper[4830]: I0131 09:28:32.795046 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="efaf79e1-d68e-4987-a73f-42a782fb9f6a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.4:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 09:28:33 crc kubenswrapper[4830]: I0131 09:28:33.698840 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"505aadad-2257-4f3e-b18b-65c745756366","Type":"ContainerStarted","Data":"e0c909c35e589f45bb26bba2b3a06395be72ebde81d325156b16cfd172580d15"} Jan 31 09:28:33 crc kubenswrapper[4830]: I0131 09:28:33.733594 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.633038564 podStartE2EDuration="8.733561331s" podCreationTimestamp="2026-01-31 09:28:25 +0000 UTC" firstStartedPulling="2026-01-31 09:28:26.93787925 +0000 UTC m=+1651.431241692" lastFinishedPulling="2026-01-31 09:28:33.038402017 +0000 
UTC m=+1657.531764459" observedRunningTime="2026-01-31 09:28:33.728756954 +0000 UTC m=+1658.222119426" watchObservedRunningTime="2026-01-31 09:28:33.733561331 +0000 UTC m=+1658.226923773" Jan 31 09:28:34 crc kubenswrapper[4830]: I0131 09:28:34.710827 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 09:28:36 crc kubenswrapper[4830]: I0131 09:28:36.276880 4830 scope.go:117] "RemoveContainer" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0" Jan 31 09:28:36 crc kubenswrapper[4830]: E0131 09:28:36.277782 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.360956 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.422401 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/363ba132-8eb0-4c3d-b389-73ac72c26220-scripts\") pod \"363ba132-8eb0-4c3d-b389-73ac72c26220\" (UID: \"363ba132-8eb0-4c3d-b389-73ac72c26220\") " Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.422503 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/363ba132-8eb0-4c3d-b389-73ac72c26220-config-data\") pod \"363ba132-8eb0-4c3d-b389-73ac72c26220\" (UID: \"363ba132-8eb0-4c3d-b389-73ac72c26220\") " Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.422593 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxv8h\" (UniqueName: \"kubernetes.io/projected/363ba132-8eb0-4c3d-b389-73ac72c26220-kube-api-access-mxv8h\") pod \"363ba132-8eb0-4c3d-b389-73ac72c26220\" (UID: \"363ba132-8eb0-4c3d-b389-73ac72c26220\") " Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.423105 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/363ba132-8eb0-4c3d-b389-73ac72c26220-combined-ca-bundle\") pod \"363ba132-8eb0-4c3d-b389-73ac72c26220\" (UID: \"363ba132-8eb0-4c3d-b389-73ac72c26220\") " Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.451799 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/363ba132-8eb0-4c3d-b389-73ac72c26220-scripts" (OuterVolumeSpecName: "scripts") pod "363ba132-8eb0-4c3d-b389-73ac72c26220" (UID: "363ba132-8eb0-4c3d-b389-73ac72c26220"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.464317 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/363ba132-8eb0-4c3d-b389-73ac72c26220-kube-api-access-mxv8h" (OuterVolumeSpecName: "kube-api-access-mxv8h") pod "363ba132-8eb0-4c3d-b389-73ac72c26220" (UID: "363ba132-8eb0-4c3d-b389-73ac72c26220"). InnerVolumeSpecName "kube-api-access-mxv8h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.533106 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/363ba132-8eb0-4c3d-b389-73ac72c26220-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.533559 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxv8h\" (UniqueName: \"kubernetes.io/projected/363ba132-8eb0-4c3d-b389-73ac72c26220-kube-api-access-mxv8h\") on node \"crc\" DevicePath \"\"" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.614820 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/363ba132-8eb0-4c3d-b389-73ac72c26220-config-data" (OuterVolumeSpecName: "config-data") pod "363ba132-8eb0-4c3d-b389-73ac72c26220" (UID: "363ba132-8eb0-4c3d-b389-73ac72c26220"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.626978 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/363ba132-8eb0-4c3d-b389-73ac72c26220-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "363ba132-8eb0-4c3d-b389-73ac72c26220" (UID: "363ba132-8eb0-4c3d-b389-73ac72c26220"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.638207 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/363ba132-8eb0-4c3d-b389-73ac72c26220-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.638271 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/363ba132-8eb0-4c3d-b389-73ac72c26220-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.790879 4830 generic.go:334] "Generic (PLEG): container finished" podID="363ba132-8eb0-4c3d-b389-73ac72c26220" containerID="7f5b9097442abbf4fcd2ce132a5c0536e9536fa76adc4c85a44884c98d076da1" exitCode=137 Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.790940 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"363ba132-8eb0-4c3d-b389-73ac72c26220","Type":"ContainerDied","Data":"7f5b9097442abbf4fcd2ce132a5c0536e9536fa76adc4c85a44884c98d076da1"} Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.790976 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"363ba132-8eb0-4c3d-b389-73ac72c26220","Type":"ContainerDied","Data":"cbe1a4afa716f3735ae80f9976e5951d27661260d820e2ddadd746393efab47c"} Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.790997 4830 scope.go:117] "RemoveContainer" containerID="7f5b9097442abbf4fcd2ce132a5c0536e9536fa76adc4c85a44884c98d076da1" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.791358 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.810106 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.810925 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.811295 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.817164 4830 scope.go:117] "RemoveContainer" containerID="d5bd2657769aff37f6c820103505904627bff79ece535343980de7eae4c805d0" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.829646 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.849956 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.856508 4830 scope.go:117] "RemoveContainer" containerID="66dbbe851105235b2394a62f0d13e090891c227cb06e0ba9a59dad497b5f7c82" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.868212 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.929820 4830 scope.go:117] "RemoveContainer" containerID="5ce9301cebb4a1abab1c58d14213878245aa5c5425548d00f93c5ea484bc291f" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.947530 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 31 09:28:39 crc kubenswrapper[4830]: E0131 09:28:39.949947 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="363ba132-8eb0-4c3d-b389-73ac72c26220" containerName="aodh-listener" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.950016 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="363ba132-8eb0-4c3d-b389-73ac72c26220" containerName="aodh-listener" Jan 31 09:28:39 crc kubenswrapper[4830]: E0131 09:28:39.950050 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="363ba132-8eb0-4c3d-b389-73ac72c26220" containerName="aodh-evaluator" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.950059 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="363ba132-8eb0-4c3d-b389-73ac72c26220" containerName="aodh-evaluator" Jan 31 09:28:39 crc kubenswrapper[4830]: E0131 09:28:39.950133 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="363ba132-8eb0-4c3d-b389-73ac72c26220" containerName="aodh-notifier" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.950143 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="363ba132-8eb0-4c3d-b389-73ac72c26220" containerName="aodh-notifier" Jan 31 09:28:39 crc kubenswrapper[4830]: E0131 09:28:39.950210 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="363ba132-8eb0-4c3d-b389-73ac72c26220" containerName="aodh-api" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.950221 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="363ba132-8eb0-4c3d-b389-73ac72c26220" containerName="aodh-api" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.952531 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="363ba132-8eb0-4c3d-b389-73ac72c26220" containerName="aodh-notifier" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.952668 4830 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="363ba132-8eb0-4c3d-b389-73ac72c26220" containerName="aodh-api" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.952694 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="363ba132-8eb0-4c3d-b389-73ac72c26220" containerName="aodh-evaluator" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.952737 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="363ba132-8eb0-4c3d-b389-73ac72c26220" containerName="aodh-listener" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.964300 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.973374 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.973703 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.973926 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.974611 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-mz4qw" Jan 31 09:28:39 crc kubenswrapper[4830]: I0131 09:28:39.974751 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.009969 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.017660 4830 scope.go:117] "RemoveContainer" containerID="7f5b9097442abbf4fcd2ce132a5c0536e9536fa76adc4c85a44884c98d076da1" Jan 31 09:28:40 crc kubenswrapper[4830]: E0131 09:28:40.018444 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f5b9097442abbf4fcd2ce132a5c0536e9536fa76adc4c85a44884c98d076da1\": container with ID starting with 7f5b9097442abbf4fcd2ce132a5c0536e9536fa76adc4c85a44884c98d076da1 not found: ID does not exist" containerID="7f5b9097442abbf4fcd2ce132a5c0536e9536fa76adc4c85a44884c98d076da1" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.018676 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f5b9097442abbf4fcd2ce132a5c0536e9536fa76adc4c85a44884c98d076da1"} err="failed to get container status \"7f5b9097442abbf4fcd2ce132a5c0536e9536fa76adc4c85a44884c98d076da1\": rpc error: code = NotFound desc = could not find container \"7f5b9097442abbf4fcd2ce132a5c0536e9536fa76adc4c85a44884c98d076da1\": container with ID starting with 7f5b9097442abbf4fcd2ce132a5c0536e9536fa76adc4c85a44884c98d076da1 not found: ID does not exist" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.019153 4830 scope.go:117] "RemoveContainer" containerID="d5bd2657769aff37f6c820103505904627bff79ece535343980de7eae4c805d0" Jan 31 09:28:40 crc kubenswrapper[4830]: E0131 09:28:40.019842 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5bd2657769aff37f6c820103505904627bff79ece535343980de7eae4c805d0\": container with ID starting with d5bd2657769aff37f6c820103505904627bff79ece535343980de7eae4c805d0 not found: ID does not exist" containerID="d5bd2657769aff37f6c820103505904627bff79ece535343980de7eae4c805d0" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.019971 
4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5bd2657769aff37f6c820103505904627bff79ece535343980de7eae4c805d0"} err="failed to get container status \"d5bd2657769aff37f6c820103505904627bff79ece535343980de7eae4c805d0\": rpc error: code = NotFound desc = could not find container \"d5bd2657769aff37f6c820103505904627bff79ece535343980de7eae4c805d0\": container with ID starting with d5bd2657769aff37f6c820103505904627bff79ece535343980de7eae4c805d0 not found: ID does not exist" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.020075 4830 scope.go:117] "RemoveContainer" containerID="66dbbe851105235b2394a62f0d13e090891c227cb06e0ba9a59dad497b5f7c82" Jan 31 09:28:40 crc kubenswrapper[4830]: E0131 09:28:40.020706 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66dbbe851105235b2394a62f0d13e090891c227cb06e0ba9a59dad497b5f7c82\": container with ID starting with 66dbbe851105235b2394a62f0d13e090891c227cb06e0ba9a59dad497b5f7c82 not found: ID does not exist" containerID="66dbbe851105235b2394a62f0d13e090891c227cb06e0ba9a59dad497b5f7c82" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.020875 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66dbbe851105235b2394a62f0d13e090891c227cb06e0ba9a59dad497b5f7c82"} err="failed to get container status \"66dbbe851105235b2394a62f0d13e090891c227cb06e0ba9a59dad497b5f7c82\": rpc error: code = NotFound desc = could not find container \"66dbbe851105235b2394a62f0d13e090891c227cb06e0ba9a59dad497b5f7c82\": container with ID starting with 66dbbe851105235b2394a62f0d13e090891c227cb06e0ba9a59dad497b5f7c82 not found: ID does not exist" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.021023 4830 scope.go:117] "RemoveContainer" containerID="5ce9301cebb4a1abab1c58d14213878245aa5c5425548d00f93c5ea484bc291f" Jan 31 09:28:40 crc kubenswrapper[4830]: E0131 09:28:40.021501 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ce9301cebb4a1abab1c58d14213878245aa5c5425548d00f93c5ea484bc291f\": container with ID starting with 5ce9301cebb4a1abab1c58d14213878245aa5c5425548d00f93c5ea484bc291f not found: ID does not exist" containerID="5ce9301cebb4a1abab1c58d14213878245aa5c5425548d00f93c5ea484bc291f" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.021565 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ce9301cebb4a1abab1c58d14213878245aa5c5425548d00f93c5ea484bc291f"} err="failed to get container status \"5ce9301cebb4a1abab1c58d14213878245aa5c5425548d00f93c5ea484bc291f\": rpc error: code = NotFound desc = could not find container \"5ce9301cebb4a1abab1c58d14213878245aa5c5425548d00f93c5ea484bc291f\": container with ID starting with 5ce9301cebb4a1abab1c58d14213878245aa5c5425548d00f93c5ea484bc291f not found: ID does not exist" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.156908 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-scripts\") pod \"aodh-0\" (UID: \"04ec026b-cc18-426d-a922-7c1c73939a4a\") " pod="openstack/aodh-0" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.157186 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-combined-ca-bundle\") pod \"aodh-0\" (UID: \"04ec026b-cc18-426d-a922-7c1c73939a4a\") " pod="openstack/aodh-0" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.157264 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-config-data\") pod \"aodh-0\" (UID: \"04ec026b-cc18-426d-a922-7c1c73939a4a\") " pod="openstack/aodh-0" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.157519 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ttfx\" (UniqueName: \"kubernetes.io/projected/04ec026b-cc18-426d-a922-7c1c73939a4a-kube-api-access-4ttfx\") pod \"aodh-0\" (UID: \"04ec026b-cc18-426d-a922-7c1c73939a4a\") " pod="openstack/aodh-0" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.157708 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-internal-tls-certs\") pod \"aodh-0\" (UID: \"04ec026b-cc18-426d-a922-7c1c73939a4a\") " pod="openstack/aodh-0" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.157779 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-public-tls-certs\") pod \"aodh-0\" (UID: \"04ec026b-cc18-426d-a922-7c1c73939a4a\") " pod="openstack/aodh-0" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.260667 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-internal-tls-certs\") pod \"aodh-0\" (UID: \"04ec026b-cc18-426d-a922-7c1c73939a4a\") " pod="openstack/aodh-0" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.260759 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-public-tls-certs\") pod \"aodh-0\" (UID: \"04ec026b-cc18-426d-a922-7c1c73939a4a\") " pod="openstack/aodh-0" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.260844 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-scripts\") pod \"aodh-0\" (UID: \"04ec026b-cc18-426d-a922-7c1c73939a4a\") " pod="openstack/aodh-0" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.261143 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-combined-ca-bundle\") pod \"aodh-0\" (UID: \"04ec026b-cc18-426d-a922-7c1c73939a4a\") " pod="openstack/aodh-0" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.261269 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-config-data\") pod \"aodh-0\" (UID: \"04ec026b-cc18-426d-a922-7c1c73939a4a\") " pod="openstack/aodh-0" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.261504 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ttfx\" (UniqueName: 
\"kubernetes.io/projected/04ec026b-cc18-426d-a922-7c1c73939a4a-kube-api-access-4ttfx\") pod \"aodh-0\" (UID: \"04ec026b-cc18-426d-a922-7c1c73939a4a\") " pod="openstack/aodh-0" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.270519 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="363ba132-8eb0-4c3d-b389-73ac72c26220" path="/var/lib/kubelet/pods/363ba132-8eb0-4c3d-b389-73ac72c26220/volumes" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.276427 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-combined-ca-bundle\") pod \"aodh-0\" (UID: \"04ec026b-cc18-426d-a922-7c1c73939a4a\") " pod="openstack/aodh-0" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.277029 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-scripts\") pod \"aodh-0\" (UID: \"04ec026b-cc18-426d-a922-7c1c73939a4a\") " pod="openstack/aodh-0" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.277458 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-config-data\") pod \"aodh-0\" (UID: \"04ec026b-cc18-426d-a922-7c1c73939a4a\") " pod="openstack/aodh-0" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.280254 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-public-tls-certs\") pod \"aodh-0\" (UID: \"04ec026b-cc18-426d-a922-7c1c73939a4a\") " pod="openstack/aodh-0" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.285418 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-internal-tls-certs\") pod \"aodh-0\" (UID: \"04ec026b-cc18-426d-a922-7c1c73939a4a\") " pod="openstack/aodh-0" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.287346 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ttfx\" (UniqueName: \"kubernetes.io/projected/04ec026b-cc18-426d-a922-7c1c73939a4a-kube-api-access-4ttfx\") pod \"aodh-0\" (UID: \"04ec026b-cc18-426d-a922-7c1c73939a4a\") " pod="openstack/aodh-0" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.327147 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.806300 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 09:28:40 crc kubenswrapper[4830]: I0131 09:28:40.927385 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 31 09:28:41 crc kubenswrapper[4830]: I0131 09:28:41.056908 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 09:28:41 crc kubenswrapper[4830]: I0131 09:28:41.795869 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 09:28:41 crc kubenswrapper[4830]: I0131 09:28:41.796519 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 09:28:41 crc kubenswrapper[4830]: I0131 09:28:41.803685 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 09:28:41 crc kubenswrapper[4830]: I0131 09:28:41.806769 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 09:28:41 crc kubenswrapper[4830]: I0131 09:28:41.821596 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"04ec026b-cc18-426d-a922-7c1c73939a4a","Type":"ContainerStarted","Data":"50071e6b5d96cd1993d9e05ebdae810927d7cc3669f619276451a68af26fc2ac"} Jan 31 09:28:41 crc kubenswrapper[4830]: I0131 09:28:41.821704 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"04ec026b-cc18-426d-a922-7c1c73939a4a","Type":"ContainerStarted","Data":"64823d46b88ca3668f9bb2f62a82cf4a433add1937021307d25ae889d4277cec"} Jan 31 09:28:42 crc kubenswrapper[4830]: I0131 09:28:42.856941 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"04ec026b-cc18-426d-a922-7c1c73939a4a","Type":"ContainerStarted","Data":"974e75bfcedc8ce61859794732b9b1f995c1eade9243c45f9113a7de8aa4a053"} Jan 31 09:28:43 crc kubenswrapper[4830]: I0131 09:28:43.871624 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"04ec026b-cc18-426d-a922-7c1c73939a4a","Type":"ContainerStarted","Data":"66d4968000c3864f1f017f991c368c64d34c57e56b3554b8ec09929e5e851568"} Jan 31 09:28:43 crc kubenswrapper[4830]: I0131 09:28:43.872560 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"04ec026b-cc18-426d-a922-7c1c73939a4a","Type":"ContainerStarted","Data":"3c966326a9cedf072323789da4c1bc61ed1746f57f26f7f7c829c7a2f89d0118"} Jan 31 09:28:43 crc kubenswrapper[4830]: I0131 09:28:43.917999 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.350303161 podStartE2EDuration="4.917972323s" podCreationTimestamp="2026-01-31 09:28:39 +0000 UTC" firstStartedPulling="2026-01-31 09:28:40.920301024 +0000 UTC m=+1665.413663466" lastFinishedPulling="2026-01-31 09:28:43.487970186 +0000 UTC m=+1667.981332628" observedRunningTime="2026-01-31 09:28:43.901869872 +0000 UTC m=+1668.395232314" watchObservedRunningTime="2026-01-31 09:28:43.917972323 +0000 UTC m=+1668.411334755" Jan 31 09:28:47 crc kubenswrapper[4830]: I0131 09:28:47.251983 4830 scope.go:117] "RemoveContainer" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0" Jan 31 09:28:47 crc kubenswrapper[4830]: E0131 09:28:47.253347 4830 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:28:56 crc kubenswrapper[4830]: I0131 09:28:56.315140 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 31 09:29:00 crc kubenswrapper[4830]: I0131 09:29:00.252588 4830 scope.go:117] "RemoveContainer" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0" Jan 31 09:29:00 crc kubenswrapper[4830]: E0131 09:29:00.253530 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:29:00 crc kubenswrapper[4830]: I0131 09:29:00.673819 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 09:29:00 crc kubenswrapper[4830]: I0131 09:29:00.674080 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="5359b6c7-375f-4424-bb43-f4b2a4d40329" containerName="kube-state-metrics" containerID="cri-o://8010b0b10105b7b4db334cd3b6743ebe9db3e797280e0df78b98d7bd4145477d" gracePeriod=30 Jan 31 09:29:00 crc kubenswrapper[4830]: I0131 09:29:00.754502 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 31 09:29:00 crc kubenswrapper[4830]: I0131 09:29:00.754877 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="d1ca860e-5493-40e2-bc10-ded100de4569" containerName="mysqld-exporter" containerID="cri-o://ad2c590706b8dbc973c2b43163b0440fd6e3529f84e4d4ce4eb07edc4d7484f2" gracePeriod=30 Jan 31 09:29:01 crc kubenswrapper[4830]: I0131 09:29:01.119561 4830 generic.go:334] "Generic (PLEG): container finished" podID="d1ca860e-5493-40e2-bc10-ded100de4569" containerID="ad2c590706b8dbc973c2b43163b0440fd6e3529f84e4d4ce4eb07edc4d7484f2" exitCode=2 Jan 31 09:29:01 crc kubenswrapper[4830]: I0131 09:29:01.119665 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"d1ca860e-5493-40e2-bc10-ded100de4569","Type":"ContainerDied","Data":"ad2c590706b8dbc973c2b43163b0440fd6e3529f84e4d4ce4eb07edc4d7484f2"} Jan 31 09:29:01 crc kubenswrapper[4830]: I0131 09:29:01.124770 4830 generic.go:334] "Generic (PLEG): container finished" podID="5359b6c7-375f-4424-bb43-f4b2a4d40329" containerID="8010b0b10105b7b4db334cd3b6743ebe9db3e797280e0df78b98d7bd4145477d" exitCode=2 Jan 31 09:29:01 crc kubenswrapper[4830]: I0131 09:29:01.124867 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5359b6c7-375f-4424-bb43-f4b2a4d40329","Type":"ContainerDied","Data":"8010b0b10105b7b4db334cd3b6743ebe9db3e797280e0df78b98d7bd4145477d"} Jan 31 09:29:01 crc kubenswrapper[4830]: I0131 09:29:01.735997 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 09:29:01 crc kubenswrapper[4830]: I0131 09:29:01.851155 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx6cz\" (UniqueName: \"kubernetes.io/projected/5359b6c7-375f-4424-bb43-f4b2a4d40329-kube-api-access-rx6cz\") pod \"5359b6c7-375f-4424-bb43-f4b2a4d40329\" (UID: \"5359b6c7-375f-4424-bb43-f4b2a4d40329\") " Jan 31 09:29:01 crc kubenswrapper[4830]: I0131 09:29:01.879047 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5359b6c7-375f-4424-bb43-f4b2a4d40329-kube-api-access-rx6cz" (OuterVolumeSpecName: "kube-api-access-rx6cz") pod "5359b6c7-375f-4424-bb43-f4b2a4d40329" (UID: "5359b6c7-375f-4424-bb43-f4b2a4d40329"). InnerVolumeSpecName "kube-api-access-rx6cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:29:01 crc kubenswrapper[4830]: I0131 09:29:01.952274 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 31 09:29:01 crc kubenswrapper[4830]: I0131 09:29:01.960792 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rx6cz\" (UniqueName: \"kubernetes.io/projected/5359b6c7-375f-4424-bb43-f4b2a4d40329-kube-api-access-rx6cz\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.062005 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ca860e-5493-40e2-bc10-ded100de4569-combined-ca-bundle\") pod \"d1ca860e-5493-40e2-bc10-ded100de4569\" (UID: \"d1ca860e-5493-40e2-bc10-ded100de4569\") " Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.062186 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2thbl\" (UniqueName: \"kubernetes.io/projected/d1ca860e-5493-40e2-bc10-ded100de4569-kube-api-access-2thbl\") pod \"d1ca860e-5493-40e2-bc10-ded100de4569\" (UID: \"d1ca860e-5493-40e2-bc10-ded100de4569\") " Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.063911 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ca860e-5493-40e2-bc10-ded100de4569-config-data\") pod \"d1ca860e-5493-40e2-bc10-ded100de4569\" (UID: \"d1ca860e-5493-40e2-bc10-ded100de4569\") " Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.067256 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1ca860e-5493-40e2-bc10-ded100de4569-kube-api-access-2thbl" (OuterVolumeSpecName: "kube-api-access-2thbl") pod "d1ca860e-5493-40e2-bc10-ded100de4569" (UID: "d1ca860e-5493-40e2-bc10-ded100de4569"). InnerVolumeSpecName "kube-api-access-2thbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.121127 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ca860e-5493-40e2-bc10-ded100de4569-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1ca860e-5493-40e2-bc10-ded100de4569" (UID: "d1ca860e-5493-40e2-bc10-ded100de4569"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.158144 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5359b6c7-375f-4424-bb43-f4b2a4d40329","Type":"ContainerDied","Data":"37639203ab4b8d83607b483fc8dabad84364def21225bc6ef913ae771aaddddd"} Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.158229 4830 scope.go:117] "RemoveContainer" containerID="8010b0b10105b7b4db334cd3b6743ebe9db3e797280e0df78b98d7bd4145477d" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.158501 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.166874 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"d1ca860e-5493-40e2-bc10-ded100de4569","Type":"ContainerDied","Data":"653d1ef297897642a2b36c89b67d4f6c55eb948472451e2be6750b1dec0c1c07"} Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.167028 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.171999 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ca860e-5493-40e2-bc10-ded100de4569-config-data" (OuterVolumeSpecName: "config-data") pod "d1ca860e-5493-40e2-bc10-ded100de4569" (UID: "d1ca860e-5493-40e2-bc10-ded100de4569"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.172692 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ca860e-5493-40e2-bc10-ded100de4569-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.172712 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2thbl\" (UniqueName: \"kubernetes.io/projected/d1ca860e-5493-40e2-bc10-ded100de4569-kube-api-access-2thbl\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.172746 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ca860e-5493-40e2-bc10-ded100de4569-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.351353 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.359000 4830 scope.go:117] "RemoveContainer" containerID="ad2c590706b8dbc973c2b43163b0440fd6e3529f84e4d4ce4eb07edc4d7484f2" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.373809 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.466463 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 09:29:02 crc kubenswrapper[4830]: E0131 09:29:02.467450 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1ca860e-5493-40e2-bc10-ded100de4569" containerName="mysqld-exporter" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.467485 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1ca860e-5493-40e2-bc10-ded100de4569" containerName="mysqld-exporter" Jan 31 09:29:02 crc kubenswrapper[4830]: E0131 09:29:02.467513 4830 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5359b6c7-375f-4424-bb43-f4b2a4d40329" containerName="kube-state-metrics" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.467522 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="5359b6c7-375f-4424-bb43-f4b2a4d40329" containerName="kube-state-metrics" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.467846 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1ca860e-5493-40e2-bc10-ded100de4569" containerName="mysqld-exporter" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.467874 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="5359b6c7-375f-4424-bb43-f4b2a4d40329" containerName="kube-state-metrics" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.469748 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.474611 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.474814 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.529339 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.570264 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.585355 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/adf0d571-b5dc-4d7c-9e8d-8813354a5128-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"adf0d571-b5dc-4d7c-9e8d-8813354a5128\") " pod="openstack/kube-state-metrics-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.585507 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfxh2\" (UniqueName: \"kubernetes.io/projected/adf0d571-b5dc-4d7c-9e8d-8813354a5128-kube-api-access-lfxh2\") pod \"kube-state-metrics-0\" (UID: \"adf0d571-b5dc-4d7c-9e8d-8813354a5128\") " pod="openstack/kube-state-metrics-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.585571 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf0d571-b5dc-4d7c-9e8d-8813354a5128-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"adf0d571-b5dc-4d7c-9e8d-8813354a5128\") " pod="openstack/kube-state-metrics-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.585928 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/adf0d571-b5dc-4d7c-9e8d-8813354a5128-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"adf0d571-b5dc-4d7c-9e8d-8813354a5128\") " pod="openstack/kube-state-metrics-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.586013 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.607124 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 31 09:29:02 crc 
kubenswrapper[4830]: I0131 09:29:02.609672 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.614882 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.615170 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.623320 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.689103 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f08189f-4613-4e22-b135-ef80b5bad065-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"5f08189f-4613-4e22-b135-ef80b5bad065\") " pod="openstack/mysqld-exporter-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.689170 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8czf\" (UniqueName: \"kubernetes.io/projected/5f08189f-4613-4e22-b135-ef80b5bad065-kube-api-access-t8czf\") pod \"mysqld-exporter-0\" (UID: \"5f08189f-4613-4e22-b135-ef80b5bad065\") " pod="openstack/mysqld-exporter-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.689265 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/adf0d571-b5dc-4d7c-9e8d-8813354a5128-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"adf0d571-b5dc-4d7c-9e8d-8813354a5128\") " pod="openstack/kube-state-metrics-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.689356 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfxh2\" (UniqueName: \"kubernetes.io/projected/adf0d571-b5dc-4d7c-9e8d-8813354a5128-kube-api-access-lfxh2\") pod \"kube-state-metrics-0\" (UID: \"adf0d571-b5dc-4d7c-9e8d-8813354a5128\") " pod="openstack/kube-state-metrics-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.689398 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf0d571-b5dc-4d7c-9e8d-8813354a5128-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"adf0d571-b5dc-4d7c-9e8d-8813354a5128\") " pod="openstack/kube-state-metrics-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.689521 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/adf0d571-b5dc-4d7c-9e8d-8813354a5128-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"adf0d571-b5dc-4d7c-9e8d-8813354a5128\") " pod="openstack/kube-state-metrics-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.689563 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f08189f-4613-4e22-b135-ef80b5bad065-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"5f08189f-4613-4e22-b135-ef80b5bad065\") " pod="openstack/mysqld-exporter-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.689637 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f08189f-4613-4e22-b135-ef80b5bad065-config-data\") pod \"mysqld-exporter-0\" (UID: \"5f08189f-4613-4e22-b135-ef80b5bad065\") " pod="openstack/mysqld-exporter-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.701739 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/adf0d571-b5dc-4d7c-9e8d-8813354a5128-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"adf0d571-b5dc-4d7c-9e8d-8813354a5128\") " pod="openstack/kube-state-metrics-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.701756 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf0d571-b5dc-4d7c-9e8d-8813354a5128-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"adf0d571-b5dc-4d7c-9e8d-8813354a5128\") " pod="openstack/kube-state-metrics-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.702361 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/adf0d571-b5dc-4d7c-9e8d-8813354a5128-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"adf0d571-b5dc-4d7c-9e8d-8813354a5128\") " pod="openstack/kube-state-metrics-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.709838 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfxh2\" (UniqueName: \"kubernetes.io/projected/adf0d571-b5dc-4d7c-9e8d-8813354a5128-kube-api-access-lfxh2\") pod \"kube-state-metrics-0\" (UID: \"adf0d571-b5dc-4d7c-9e8d-8813354a5128\") " pod="openstack/kube-state-metrics-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.792528 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f08189f-4613-4e22-b135-ef80b5bad065-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"5f08189f-4613-4e22-b135-ef80b5bad065\") " pod="openstack/mysqld-exporter-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.792635 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f08189f-4613-4e22-b135-ef80b5bad065-config-data\") pod \"mysqld-exporter-0\" (UID: \"5f08189f-4613-4e22-b135-ef80b5bad065\") " pod="openstack/mysqld-exporter-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.792872 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f08189f-4613-4e22-b135-ef80b5bad065-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"5f08189f-4613-4e22-b135-ef80b5bad065\") " pod="openstack/mysqld-exporter-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.792921 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8czf\" (UniqueName: \"kubernetes.io/projected/5f08189f-4613-4e22-b135-ef80b5bad065-kube-api-access-t8czf\") pod \"mysqld-exporter-0\" (UID: \"5f08189f-4613-4e22-b135-ef80b5bad065\") " pod="openstack/mysqld-exporter-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.797517 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f08189f-4613-4e22-b135-ef80b5bad065-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" 
(UID: \"5f08189f-4613-4e22-b135-ef80b5bad065\") " pod="openstack/mysqld-exporter-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.797602 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f08189f-4613-4e22-b135-ef80b5bad065-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"5f08189f-4613-4e22-b135-ef80b5bad065\") " pod="openstack/mysqld-exporter-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.797651 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f08189f-4613-4e22-b135-ef80b5bad065-config-data\") pod \"mysqld-exporter-0\" (UID: \"5f08189f-4613-4e22-b135-ef80b5bad065\") " pod="openstack/mysqld-exporter-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.817064 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8czf\" (UniqueName: \"kubernetes.io/projected/5f08189f-4613-4e22-b135-ef80b5bad065-kube-api-access-t8czf\") pod \"mysqld-exporter-0\" (UID: \"5f08189f-4613-4e22-b135-ef80b5bad065\") " pod="openstack/mysqld-exporter-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.820026 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 09:29:02 crc kubenswrapper[4830]: I0131 09:29:02.944301 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 31 09:29:03 crc kubenswrapper[4830]: I0131 09:29:03.451472 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 09:29:03 crc kubenswrapper[4830]: I0131 09:29:03.604647 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 31 09:29:04 crc kubenswrapper[4830]: I0131 09:29:04.031577 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:29:04 crc kubenswrapper[4830]: I0131 09:29:04.031949 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="505aadad-2257-4f3e-b18b-65c745756366" containerName="ceilometer-central-agent" containerID="cri-o://66638ac456c0d9d507163fef278353e8b1077e7e07db2543b0bf46d1da32b2cb" gracePeriod=30 Jan 31 09:29:04 crc kubenswrapper[4830]: I0131 09:29:04.032041 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="505aadad-2257-4f3e-b18b-65c745756366" containerName="proxy-httpd" containerID="cri-o://e0c909c35e589f45bb26bba2b3a06395be72ebde81d325156b16cfd172580d15" gracePeriod=30 Jan 31 09:29:04 crc kubenswrapper[4830]: I0131 09:29:04.032142 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="505aadad-2257-4f3e-b18b-65c745756366" containerName="sg-core" containerID="cri-o://d892504adf000057837fec5666c41d08864aa82f8ebe684dd065617e721dd9cb" gracePeriod=30 Jan 31 09:29:04 crc kubenswrapper[4830]: I0131 09:29:04.032111 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="505aadad-2257-4f3e-b18b-65c745756366" containerName="ceilometer-notification-agent" containerID="cri-o://5bbf8a672093cd86de6e6200898c0f74d4ea93751cc6a9aeb30b688acb107b4b" gracePeriod=30 Jan 31 09:29:04 crc kubenswrapper[4830]: I0131 09:29:04.221007 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"adf0d571-b5dc-4d7c-9e8d-8813354a5128","Type":"ContainerStarted","Data":"37fecaa69f603f30553baf34d3333d5b4a7353da268bcc0f68e6d0b337f33ffd"} Jan 31 09:29:04 crc kubenswrapper[4830]: I0131 09:29:04.222565 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"5f08189f-4613-4e22-b135-ef80b5bad065","Type":"ContainerStarted","Data":"f2d7f35abda83681214f37c60a5c2e16d0ef2df56e5950df5282169951cab312"} Jan 31 09:29:04 crc kubenswrapper[4830]: I0131 09:29:04.226363 4830 generic.go:334] "Generic (PLEG): container finished" podID="505aadad-2257-4f3e-b18b-65c745756366" containerID="d892504adf000057837fec5666c41d08864aa82f8ebe684dd065617e721dd9cb" exitCode=2 Jan 31 09:29:04 crc kubenswrapper[4830]: I0131 09:29:04.226412 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"505aadad-2257-4f3e-b18b-65c745756366","Type":"ContainerDied","Data":"d892504adf000057837fec5666c41d08864aa82f8ebe684dd065617e721dd9cb"} Jan 31 09:29:04 crc kubenswrapper[4830]: I0131 09:29:04.282460 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5359b6c7-375f-4424-bb43-f4b2a4d40329" path="/var/lib/kubelet/pods/5359b6c7-375f-4424-bb43-f4b2a4d40329/volumes" Jan 31 09:29:04 crc kubenswrapper[4830]: I0131 09:29:04.283197 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1ca860e-5493-40e2-bc10-ded100de4569" path="/var/lib/kubelet/pods/d1ca860e-5493-40e2-bc10-ded100de4569/volumes" Jan 31 09:29:05 crc kubenswrapper[4830]: I0131 09:29:05.243998 4830 generic.go:334] "Generic (PLEG): container finished" podID="505aadad-2257-4f3e-b18b-65c745756366" containerID="e0c909c35e589f45bb26bba2b3a06395be72ebde81d325156b16cfd172580d15" exitCode=0 Jan 31 09:29:05 crc kubenswrapper[4830]: I0131 09:29:05.245347 4830 generic.go:334] "Generic (PLEG): container finished" podID="505aadad-2257-4f3e-b18b-65c745756366" containerID="66638ac456c0d9d507163fef278353e8b1077e7e07db2543b0bf46d1da32b2cb" exitCode=0 Jan 31 09:29:05 crc kubenswrapper[4830]: I0131 09:29:05.245374 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"505aadad-2257-4f3e-b18b-65c745756366","Type":"ContainerDied","Data":"e0c909c35e589f45bb26bba2b3a06395be72ebde81d325156b16cfd172580d15"} Jan 31 09:29:05 crc kubenswrapper[4830]: I0131 09:29:05.245543 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"505aadad-2257-4f3e-b18b-65c745756366","Type":"ContainerDied","Data":"66638ac456c0d9d507163fef278353e8b1077e7e07db2543b0bf46d1da32b2cb"} Jan 31 09:29:05 crc kubenswrapper[4830]: I0131 09:29:05.888231 4830 scope.go:117] "RemoveContainer" containerID="dde1aa6a2cee6935983e3261958c75380b14a6f833287161d3a37a4e5640bb1f" Jan 31 09:29:06 crc kubenswrapper[4830]: I0131 09:29:06.152496 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="5359b6c7-375f-4424-bb43-f4b2a4d40329" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.135:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 09:29:06 crc kubenswrapper[4830]: I0131 09:29:06.183487 4830 scope.go:117] "RemoveContainer" containerID="67eb3a84cc34c3e1d7b3d5410cd4c3f7e9c2645411c53ddc5435603ce6326921" Jan 31 09:29:06 crc kubenswrapper[4830]: I0131 09:29:06.667773 4830 scope.go:117] "RemoveContainer" containerID="7eea604f8f4ad7fac69b2968ddf80bf85cc10c1cdc141647f80b606a0132cf56" Jan 31 09:29:08 crc 
kubenswrapper[4830]: I0131 09:29:08.324851 4830 generic.go:334] "Generic (PLEG): container finished" podID="505aadad-2257-4f3e-b18b-65c745756366" containerID="5bbf8a672093cd86de6e6200898c0f74d4ea93751cc6a9aeb30b688acb107b4b" exitCode=0 Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.325289 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"505aadad-2257-4f3e-b18b-65c745756366","Type":"ContainerDied","Data":"5bbf8a672093cd86de6e6200898c0f74d4ea93751cc6a9aeb30b688acb107b4b"} Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.618257 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-hh79w"] Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.632572 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.645479 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-hh79w"] Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.681743 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-cblrm"] Jan 31 09:29:08 crc kubenswrapper[4830]: E0131 09:29:08.682717 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="505aadad-2257-4f3e-b18b-65c745756366" containerName="ceilometer-central-agent" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.682760 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="505aadad-2257-4f3e-b18b-65c745756366" containerName="ceilometer-central-agent" Jan 31 09:29:08 crc kubenswrapper[4830]: E0131 09:29:08.682781 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="505aadad-2257-4f3e-b18b-65c745756366" containerName="proxy-httpd" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.682789 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="505aadad-2257-4f3e-b18b-65c745756366" containerName="proxy-httpd" Jan 31 09:29:08 crc kubenswrapper[4830]: E0131 09:29:08.682805 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="505aadad-2257-4f3e-b18b-65c745756366" containerName="ceilometer-notification-agent" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.682812 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="505aadad-2257-4f3e-b18b-65c745756366" containerName="ceilometer-notification-agent" Jan 31 09:29:08 crc kubenswrapper[4830]: E0131 09:29:08.682824 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="505aadad-2257-4f3e-b18b-65c745756366" containerName="sg-core" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.682831 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="505aadad-2257-4f3e-b18b-65c745756366" containerName="sg-core" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.683144 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="505aadad-2257-4f3e-b18b-65c745756366" containerName="proxy-httpd" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.683182 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="505aadad-2257-4f3e-b18b-65c745756366" containerName="sg-core" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.683225 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="505aadad-2257-4f3e-b18b-65c745756366" containerName="ceilometer-notification-agent" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.683250 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="505aadad-2257-4f3e-b18b-65c745756366" containerName="ceilometer-central-agent"
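
Alongside the API-side REMOVE of the old pod, the resource managers purge their per-container bookkeeping for the deleted ceilometer-0 UID; note that cpu_manager logs these routine RemoveStaleState removals at error level (E0131) while state_mem and memory_manager log at info. A toy version of the pruning, with invented types rather than the kubelet's state store:

    package main

    import "fmt"

    // Toy per-container CPU-assignment store, pruned the way RemoveStaleState
    // prunes entries whose pod UID no longer exists on the node.
    type assignments map[string]map[string]string // podUID -> container -> cpuset

    func removeStale(a assignments, alive map[string]bool) {
            for pod, ctrs := range a {
                    if alive[pod] {
                            continue // pod still present: keep its state
                    }
                    for name := range ctrs {
                            fmt.Printf("RemoveStaleState: removing container %s/%s\n", pod, name)
                            delete(ctrs, name)
                    }
                    delete(a, pod)
            }
    }

    func main() {
            a := assignments{"505aadad": {"sg-core": "0-1", "proxy-httpd": "2-3"}}
            removeStale(a, map[string]bool{}) // old pod UID is gone
            fmt.Println("remaining pods:", len(a))
    }
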
podUID="505aadad-2257-4f3e-b18b-65c745756366" containerName="ceilometer-central-agent" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.684426 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-cblrm" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.720565 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-cblrm"] Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.723378 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505aadad-2257-4f3e-b18b-65c745756366-config-data\") pod \"505aadad-2257-4f3e-b18b-65c745756366\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.724237 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/505aadad-2257-4f3e-b18b-65c745756366-scripts\") pod \"505aadad-2257-4f3e-b18b-65c745756366\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.724380 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/505aadad-2257-4f3e-b18b-65c745756366-sg-core-conf-yaml\") pod \"505aadad-2257-4f3e-b18b-65c745756366\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.724522 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505aadad-2257-4f3e-b18b-65c745756366-combined-ca-bundle\") pod \"505aadad-2257-4f3e-b18b-65c745756366\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.724564 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/505aadad-2257-4f3e-b18b-65c745756366-log-httpd\") pod \"505aadad-2257-4f3e-b18b-65c745756366\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.724602 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/505aadad-2257-4f3e-b18b-65c745756366-run-httpd\") pod \"505aadad-2257-4f3e-b18b-65c745756366\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.724638 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ssnt\" (UniqueName: \"kubernetes.io/projected/505aadad-2257-4f3e-b18b-65c745756366-kube-api-access-2ssnt\") pod \"505aadad-2257-4f3e-b18b-65c745756366\" (UID: \"505aadad-2257-4f3e-b18b-65c745756366\") " Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.727280 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/505aadad-2257-4f3e-b18b-65c745756366-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "505aadad-2257-4f3e-b18b-65c745756366" (UID: "505aadad-2257-4f3e-b18b-65c745756366"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.727855 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/505aadad-2257-4f3e-b18b-65c745756366-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "505aadad-2257-4f3e-b18b-65c745756366" (UID: "505aadad-2257-4f3e-b18b-65c745756366"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.738861 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/505aadad-2257-4f3e-b18b-65c745756366-scripts" (OuterVolumeSpecName: "scripts") pod "505aadad-2257-4f3e-b18b-65c745756366" (UID: "505aadad-2257-4f3e-b18b-65c745756366"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.748635 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/505aadad-2257-4f3e-b18b-65c745756366-kube-api-access-2ssnt" (OuterVolumeSpecName: "kube-api-access-2ssnt") pod "505aadad-2257-4f3e-b18b-65c745756366" (UID: "505aadad-2257-4f3e-b18b-65c745756366"). InnerVolumeSpecName "kube-api-access-2ssnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.780201 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/505aadad-2257-4f3e-b18b-65c745756366-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "505aadad-2257-4f3e-b18b-65c745756366" (UID: "505aadad-2257-4f3e-b18b-65c745756366"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.829537 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a75e79-079e-4905-9cc1-af2a81596943-config-data\") pod \"heat-db-sync-cblrm\" (UID: \"35a75e79-079e-4905-9cc1-af2a81596943\") " pod="openstack/heat-db-sync-cblrm" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.830169 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66bl5\" (UniqueName: \"kubernetes.io/projected/35a75e79-079e-4905-9cc1-af2a81596943-kube-api-access-66bl5\") pod \"heat-db-sync-cblrm\" (UID: \"35a75e79-079e-4905-9cc1-af2a81596943\") " pod="openstack/heat-db-sync-cblrm" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.830542 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a75e79-079e-4905-9cc1-af2a81596943-combined-ca-bundle\") pod \"heat-db-sync-cblrm\" (UID: \"35a75e79-079e-4905-9cc1-af2a81596943\") " pod="openstack/heat-db-sync-cblrm" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.830888 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/505aadad-2257-4f3e-b18b-65c745756366-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.830907 4830 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/505aadad-2257-4f3e-b18b-65c745756366-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 
09:29:08.830924 4830 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/505aadad-2257-4f3e-b18b-65c745756366-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.830935 4830 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/505aadad-2257-4f3e-b18b-65c745756366-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.830946 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ssnt\" (UniqueName: \"kubernetes.io/projected/505aadad-2257-4f3e-b18b-65c745756366-kube-api-access-2ssnt\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.881203 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/505aadad-2257-4f3e-b18b-65c745756366-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "505aadad-2257-4f3e-b18b-65c745756366" (UID: "505aadad-2257-4f3e-b18b-65c745756366"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.906276 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/505aadad-2257-4f3e-b18b-65c745756366-config-data" (OuterVolumeSpecName: "config-data") pod "505aadad-2257-4f3e-b18b-65c745756366" (UID: "505aadad-2257-4f3e-b18b-65c745756366"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.934010 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a75e79-079e-4905-9cc1-af2a81596943-config-data\") pod \"heat-db-sync-cblrm\" (UID: \"35a75e79-079e-4905-9cc1-af2a81596943\") " pod="openstack/heat-db-sync-cblrm" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.934074 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66bl5\" (UniqueName: \"kubernetes.io/projected/35a75e79-079e-4905-9cc1-af2a81596943-kube-api-access-66bl5\") pod \"heat-db-sync-cblrm\" (UID: \"35a75e79-079e-4905-9cc1-af2a81596943\") " pod="openstack/heat-db-sync-cblrm" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.935441 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a75e79-079e-4905-9cc1-af2a81596943-combined-ca-bundle\") pod \"heat-db-sync-cblrm\" (UID: \"35a75e79-079e-4905-9cc1-af2a81596943\") " pod="openstack/heat-db-sync-cblrm" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.935689 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505aadad-2257-4f3e-b18b-65c745756366-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.935705 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505aadad-2257-4f3e-b18b-65c745756366-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.940161 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a75e79-079e-4905-9cc1-af2a81596943-config-data\") pod \"heat-db-sync-cblrm\" (UID: 
\"35a75e79-079e-4905-9cc1-af2a81596943\") " pod="openstack/heat-db-sync-cblrm" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.944200 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a75e79-079e-4905-9cc1-af2a81596943-combined-ca-bundle\") pod \"heat-db-sync-cblrm\" (UID: \"35a75e79-079e-4905-9cc1-af2a81596943\") " pod="openstack/heat-db-sync-cblrm" Jan 31 09:29:08 crc kubenswrapper[4830]: I0131 09:29:08.953927 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66bl5\" (UniqueName: \"kubernetes.io/projected/35a75e79-079e-4905-9cc1-af2a81596943-kube-api-access-66bl5\") pod \"heat-db-sync-cblrm\" (UID: \"35a75e79-079e-4905-9cc1-af2a81596943\") " pod="openstack/heat-db-sync-cblrm" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.083508 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-cblrm" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.359632 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"505aadad-2257-4f3e-b18b-65c745756366","Type":"ContainerDied","Data":"371e31a63618c57297a991dfb9e365a1edf022347f2d6a8e0c1ac2491b4570c8"} Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.360103 4830 scope.go:117] "RemoveContainer" containerID="e0c909c35e589f45bb26bba2b3a06395be72ebde81d325156b16cfd172580d15" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.359910 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.413909 4830 scope.go:117] "RemoveContainer" containerID="d892504adf000057837fec5666c41d08864aa82f8ebe684dd065617e721dd9cb" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.438904 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.465889 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.490899 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.494896 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.498120 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.498367 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.498909 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.522876 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.545901 4830 scope.go:117] "RemoveContainer" containerID="5bbf8a672093cd86de6e6200898c0f74d4ea93751cc6a9aeb30b688acb107b4b" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.566634 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f97ee114-e2b6-423b-b30a-dd1e2ada3169-run-httpd\") pod \"ceilometer-0\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") " pod="openstack/ceilometer-0" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.566672 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") " pod="openstack/ceilometer-0" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.566710 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f97ee114-e2b6-423b-b30a-dd1e2ada3169-log-httpd\") pod \"ceilometer-0\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") " pod="openstack/ceilometer-0" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.566788 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-scripts\") pod \"ceilometer-0\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") " pod="openstack/ceilometer-0" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.566934 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxdjb\" (UniqueName: \"kubernetes.io/projected/f97ee114-e2b6-423b-b30a-dd1e2ada3169-kube-api-access-vxdjb\") pod \"ceilometer-0\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") " pod="openstack/ceilometer-0" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.567141 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") " pod="openstack/ceilometer-0" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.567221 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") " pod="openstack/ceilometer-0" Jan 31 09:29:09 crc kubenswrapper[4830]: 
I0131 09:29:09.567252 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-config-data\") pod \"ceilometer-0\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") " pod="openstack/ceilometer-0" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.593936 4830 scope.go:117] "RemoveContainer" containerID="66638ac456c0d9d507163fef278353e8b1077e7e07db2543b0bf46d1da32b2cb" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.670594 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") " pod="openstack/ceilometer-0" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.670701 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") " pod="openstack/ceilometer-0" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.670760 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-config-data\") pod \"ceilometer-0\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") " pod="openstack/ceilometer-0" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.670875 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f97ee114-e2b6-423b-b30a-dd1e2ada3169-run-httpd\") pod \"ceilometer-0\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") " pod="openstack/ceilometer-0" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.670903 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f97ee114-e2b6-423b-b30a-dd1e2ada3169-log-httpd\") pod \"ceilometer-0\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") " pod="openstack/ceilometer-0" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.670925 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") " pod="openstack/ceilometer-0" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.671028 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-scripts\") pod \"ceilometer-0\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") " pod="openstack/ceilometer-0" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.671126 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxdjb\" (UniqueName: \"kubernetes.io/projected/f97ee114-e2b6-423b-b30a-dd1e2ada3169-kube-api-access-vxdjb\") pod \"ceilometer-0\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") " pod="openstack/ceilometer-0" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.671887 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f97ee114-e2b6-423b-b30a-dd1e2ada3169-run-httpd\") pod \"ceilometer-0\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") " pod="openstack/ceilometer-0" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.672143 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f97ee114-e2b6-423b-b30a-dd1e2ada3169-log-httpd\") pod \"ceilometer-0\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") " pod="openstack/ceilometer-0" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.680259 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") " pod="openstack/ceilometer-0" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.685319 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") " pod="openstack/ceilometer-0" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.685869 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") " pod="openstack/ceilometer-0" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.686745 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-scripts\") pod \"ceilometer-0\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") " pod="openstack/ceilometer-0" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.687516 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-cblrm"] Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.691280 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-config-data\") pod \"ceilometer-0\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") " pod="openstack/ceilometer-0" Jan 31 09:29:09 crc kubenswrapper[4830]: W0131 09:29:09.691423 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35a75e79_079e_4905_9cc1_af2a81596943.slice/crio-d32c8faf232cd85e78e41e2f364a6a75012625faab7c79fcdce63446453848a9 WatchSource:0}: Error finding container d32c8faf232cd85e78e41e2f364a6a75012625faab7c79fcdce63446453848a9: Status 404 returned error can't find the container with id d32c8faf232cd85e78e41e2f364a6a75012625faab7c79fcdce63446453848a9 Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.704395 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxdjb\" (UniqueName: \"kubernetes.io/projected/f97ee114-e2b6-423b-b30a-dd1e2ada3169-kube-api-access-vxdjb\") pod \"ceilometer-0\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") " pod="openstack/ceilometer-0" Jan 31 09:29:09 crc kubenswrapper[4830]: I0131 09:29:09.852768 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 09:29:10 crc kubenswrapper[4830]: I0131 09:29:10.302265 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="505aadad-2257-4f3e-b18b-65c745756366" path="/var/lib/kubelet/pods/505aadad-2257-4f3e-b18b-65c745756366/volumes" Jan 31 09:29:10 crc kubenswrapper[4830]: I0131 09:29:10.327167 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6324b6ba-4288-44f4-bf87-1a4356c1a9f0" path="/var/lib/kubelet/pods/6324b6ba-4288-44f4-bf87-1a4356c1a9f0/volumes" Jan 31 09:29:10 crc kubenswrapper[4830]: I0131 09:29:10.428772 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-cblrm" event={"ID":"35a75e79-079e-4905-9cc1-af2a81596943","Type":"ContainerStarted","Data":"d32c8faf232cd85e78e41e2f364a6a75012625faab7c79fcdce63446453848a9"} Jan 31 09:29:10 crc kubenswrapper[4830]: I0131 09:29:10.431385 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"5f08189f-4613-4e22-b135-ef80b5bad065","Type":"ContainerStarted","Data":"314967ba9c0f7dbe8345d34cfd3374628515326248df50243742bf04dd33f4a8"} Jan 31 09:29:10 crc kubenswrapper[4830]: I0131 09:29:10.447059 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"adf0d571-b5dc-4d7c-9e8d-8813354a5128","Type":"ContainerStarted","Data":"184536029a48d98e756eccab3b9c57d61b4ae582035a9dd9a291492b0aec8e02"} Jan 31 09:29:10 crc kubenswrapper[4830]: I0131 09:29:10.448771 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 31 09:29:10 crc kubenswrapper[4830]: I0131 09:29:10.532805 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:29:10 crc kubenswrapper[4830]: I0131 09:29:10.556329 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=3.686276647 podStartE2EDuration="8.556302699s" podCreationTimestamp="2026-01-31 09:29:02 +0000 UTC" firstStartedPulling="2026-01-31 09:29:03.590836398 +0000 UTC m=+1688.084198840" lastFinishedPulling="2026-01-31 09:29:08.46086245 +0000 UTC m=+1692.954224892" observedRunningTime="2026-01-31 09:29:10.459429737 +0000 UTC m=+1694.952792179" watchObservedRunningTime="2026-01-31 09:29:10.556302699 +0000 UTC m=+1695.049665141" Jan 31 09:29:10 crc kubenswrapper[4830]: I0131 09:29:10.586939 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=5.343378961 podStartE2EDuration="8.586900265s" podCreationTimestamp="2026-01-31 09:29:02 +0000 UTC" firstStartedPulling="2026-01-31 09:29:03.424458377 +0000 UTC m=+1687.917820819" lastFinishedPulling="2026-01-31 09:29:06.667979671 +0000 UTC m=+1691.161342123" observedRunningTime="2026-01-31 09:29:10.51229512 +0000 UTC m=+1695.005657562" watchObservedRunningTime="2026-01-31 09:29:10.586900265 +0000 UTC m=+1695.080262707" Jan 31 09:29:10 crc kubenswrapper[4830]: I0131 09:29:10.889323 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 31 09:29:11 crc kubenswrapper[4830]: I0131 09:29:11.487319 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f97ee114-e2b6-423b-b30a-dd1e2ada3169","Type":"ContainerStarted","Data":"37e0b212f05f847d594ff512e867406a2297e646fe74024431c8e9c385583b07"} Jan 31 09:29:11 crc kubenswrapper[4830]: I0131 09:29:11.951119 4830 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 09:29:12 crc kubenswrapper[4830]: I0131 09:29:12.251932 4830 scope.go:117] "RemoveContainer" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0" Jan 31 09:29:12 crc kubenswrapper[4830]: E0131 09:29:12.252200 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:29:14 crc kubenswrapper[4830]: I0131 09:29:14.053086 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 09:29:17 crc kubenswrapper[4830]: I0131 09:29:17.599840 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f97ee114-e2b6-423b-b30a-dd1e2ada3169","Type":"ContainerStarted","Data":"26b4ec876b6a68130f28c036e4967e5620736cb2584439d013918c4468bcb419"} Jan 31 09:29:19 crc kubenswrapper[4830]: I0131 09:29:19.052643 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="18af810d-9de4-4822-86d2-bb7e8a8a449b" containerName="rabbitmq" containerID="cri-o://acc702009ec1b1c264fd284a800bc7eafae655c03f62683636397a46f06f969c" gracePeriod=604793 Jan 31 09:29:19 crc kubenswrapper[4830]: I0131 09:29:19.649602 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f97ee114-e2b6-423b-b30a-dd1e2ada3169","Type":"ContainerStarted","Data":"a3e04444760ffb271d3b136607990d7e060963fa82b0b06859901489ab2ab0e3"} Jan 31 09:29:20 crc kubenswrapper[4830]: I0131 09:29:20.685064 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f97ee114-e2b6-423b-b30a-dd1e2ada3169","Type":"ContainerStarted","Data":"98b83b1eb53613cb65b57c9c1c64ccee4b22b7097155722808a03ef4c999bcb6"} Jan 31 09:29:22 crc kubenswrapper[4830]: I0131 09:29:22.832812 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 31 09:29:23 crc kubenswrapper[4830]: E0131 09:29:23.084209 4830 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.53:33450->38.102.83.53:38781: write tcp 38.102.83.53:33450->38.102.83.53:38781: write: connection reset by peer Jan 31 09:29:25 crc kubenswrapper[4830]: I0131 09:29:25.757194 4830 generic.go:334] "Generic (PLEG): container finished" podID="18af810d-9de4-4822-86d2-bb7e8a8a449b" containerID="acc702009ec1b1c264fd284a800bc7eafae655c03f62683636397a46f06f969c" exitCode=0 Jan 31 09:29:25 crc kubenswrapper[4830]: I0131 09:29:25.757242 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"18af810d-9de4-4822-86d2-bb7e8a8a449b","Type":"ContainerDied","Data":"acc702009ec1b1c264fd284a800bc7eafae655c03f62683636397a46f06f969c"} Jan 31 09:29:26 crc kubenswrapper[4830]: I0131 09:29:26.226646 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="18af810d-9de4-4822-86d2-bb7e8a8a449b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Jan 31 09:29:26 crc kubenswrapper[4830]: I0131 09:29:26.252254 4830 scope.go:117] 
"RemoveContainer" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0" Jan 31 09:29:26 crc kubenswrapper[4830]: E0131 09:29:26.252791 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:29:29 crc kubenswrapper[4830]: I0131 09:29:29.300813 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68df85789f-bj49s"] Jan 31 09:29:29 crc kubenswrapper[4830]: I0131 09:29:29.306125 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68df85789f-bj49s" Jan 31 09:29:29 crc kubenswrapper[4830]: I0131 09:29:29.310619 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 31 09:29:29 crc kubenswrapper[4830]: I0131 09:29:29.342615 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-bj49s"] Jan 31 09:29:29 crc kubenswrapper[4830]: I0131 09:29:29.462479 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-dns-svc\") pod \"dnsmasq-dns-68df85789f-bj49s\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " pod="openstack/dnsmasq-dns-68df85789f-bj49s" Jan 31 09:29:29 crc kubenswrapper[4830]: I0131 09:29:29.462643 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-dns-swift-storage-0\") pod \"dnsmasq-dns-68df85789f-bj49s\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " pod="openstack/dnsmasq-dns-68df85789f-bj49s" Jan 31 09:29:29 crc kubenswrapper[4830]: I0131 09:29:29.462677 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-openstack-edpm-ipam\") pod \"dnsmasq-dns-68df85789f-bj49s\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " pod="openstack/dnsmasq-dns-68df85789f-bj49s" Jan 31 09:29:29 crc kubenswrapper[4830]: I0131 09:29:29.462743 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-ovsdbserver-sb\") pod \"dnsmasq-dns-68df85789f-bj49s\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " pod="openstack/dnsmasq-dns-68df85789f-bj49s" Jan 31 09:29:29 crc kubenswrapper[4830]: I0131 09:29:29.462800 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-config\") pod \"dnsmasq-dns-68df85789f-bj49s\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " pod="openstack/dnsmasq-dns-68df85789f-bj49s" Jan 31 09:29:29 crc kubenswrapper[4830]: I0131 09:29:29.462841 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw5rp\" (UniqueName: 
\"kubernetes.io/projected/668f930d-5dba-4a2a-bd14-589620626682-kube-api-access-dw5rp\") pod \"dnsmasq-dns-68df85789f-bj49s\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " pod="openstack/dnsmasq-dns-68df85789f-bj49s" Jan 31 09:29:29 crc kubenswrapper[4830]: I0131 09:29:29.462902 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-ovsdbserver-nb\") pod \"dnsmasq-dns-68df85789f-bj49s\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " pod="openstack/dnsmasq-dns-68df85789f-bj49s" Jan 31 09:29:29 crc kubenswrapper[4830]: I0131 09:29:29.567325 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-dns-swift-storage-0\") pod \"dnsmasq-dns-68df85789f-bj49s\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " pod="openstack/dnsmasq-dns-68df85789f-bj49s" Jan 31 09:29:29 crc kubenswrapper[4830]: I0131 09:29:29.568586 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-openstack-edpm-ipam\") pod \"dnsmasq-dns-68df85789f-bj49s\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " pod="openstack/dnsmasq-dns-68df85789f-bj49s" Jan 31 09:29:29 crc kubenswrapper[4830]: I0131 09:29:29.569130 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-ovsdbserver-sb\") pod \"dnsmasq-dns-68df85789f-bj49s\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " pod="openstack/dnsmasq-dns-68df85789f-bj49s" Jan 31 09:29:29 crc kubenswrapper[4830]: I0131 09:29:29.569362 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-config\") pod \"dnsmasq-dns-68df85789f-bj49s\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " pod="openstack/dnsmasq-dns-68df85789f-bj49s" Jan 31 09:29:29 crc kubenswrapper[4830]: I0131 09:29:29.569498 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw5rp\" (UniqueName: \"kubernetes.io/projected/668f930d-5dba-4a2a-bd14-589620626682-kube-api-access-dw5rp\") pod \"dnsmasq-dns-68df85789f-bj49s\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " pod="openstack/dnsmasq-dns-68df85789f-bj49s" Jan 31 09:29:29 crc kubenswrapper[4830]: I0131 09:29:29.569639 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-ovsdbserver-nb\") pod \"dnsmasq-dns-68df85789f-bj49s\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " pod="openstack/dnsmasq-dns-68df85789f-bj49s" Jan 31 09:29:29 crc kubenswrapper[4830]: I0131 09:29:29.570630 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-ovsdbserver-sb\") pod \"dnsmasq-dns-68df85789f-bj49s\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " pod="openstack/dnsmasq-dns-68df85789f-bj49s" Jan 31 09:29:29 crc kubenswrapper[4830]: I0131 09:29:29.570857 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-dns-swift-storage-0\") pod \"dnsmasq-dns-68df85789f-bj49s\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " pod="openstack/dnsmasq-dns-68df85789f-bj49s" Jan 31 09:29:29 crc kubenswrapper[4830]: I0131 09:29:29.571179 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-openstack-edpm-ipam\") pod \"dnsmasq-dns-68df85789f-bj49s\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " pod="openstack/dnsmasq-dns-68df85789f-bj49s" Jan 31 09:29:29 crc kubenswrapper[4830]: I0131 09:29:29.575856 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-config\") pod \"dnsmasq-dns-68df85789f-bj49s\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " pod="openstack/dnsmasq-dns-68df85789f-bj49s" Jan 31 09:29:29 crc kubenswrapper[4830]: I0131 09:29:29.575993 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-dns-svc\") pod \"dnsmasq-dns-68df85789f-bj49s\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " pod="openstack/dnsmasq-dns-68df85789f-bj49s" Jan 31 09:29:29 crc kubenswrapper[4830]: I0131 09:29:29.576432 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-ovsdbserver-nb\") pod \"dnsmasq-dns-68df85789f-bj49s\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " pod="openstack/dnsmasq-dns-68df85789f-bj49s" Jan 31 09:29:29 crc kubenswrapper[4830]: I0131 09:29:29.576787 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-dns-svc\") pod \"dnsmasq-dns-68df85789f-bj49s\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " pod="openstack/dnsmasq-dns-68df85789f-bj49s" Jan 31 09:29:29 crc kubenswrapper[4830]: I0131 09:29:29.619877 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw5rp\" (UniqueName: \"kubernetes.io/projected/668f930d-5dba-4a2a-bd14-589620626682-kube-api-access-dw5rp\") pod \"dnsmasq-dns-68df85789f-bj49s\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " pod="openstack/dnsmasq-dns-68df85789f-bj49s" Jan 31 09:29:29 crc kubenswrapper[4830]: I0131 09:29:29.673332 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68df85789f-bj49s" Jan 31 09:29:35 crc kubenswrapper[4830]: I0131 09:29:35.960473 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"18af810d-9de4-4822-86d2-bb7e8a8a449b","Type":"ContainerDied","Data":"fd552b526edb7808d77049658f8fe34756e8f14d369a3b2e8790070a45de1166"} Jan 31 09:29:35 crc kubenswrapper[4830]: I0131 09:29:35.961443 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd552b526edb7808d77049658f8fe34756e8f14d369a3b2e8790070a45de1166" Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.031011 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.114067 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/18af810d-9de4-4822-86d2-bb7e8a8a449b-rabbitmq-erlang-cookie\") pod \"18af810d-9de4-4822-86d2-bb7e8a8a449b\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") "
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.114137 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/18af810d-9de4-4822-86d2-bb7e8a8a449b-plugins-conf\") pod \"18af810d-9de4-4822-86d2-bb7e8a8a449b\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") "
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.114179 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/18af810d-9de4-4822-86d2-bb7e8a8a449b-pod-info\") pod \"18af810d-9de4-4822-86d2-bb7e8a8a449b\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") "
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.114377 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2w7k\" (UniqueName: \"kubernetes.io/projected/18af810d-9de4-4822-86d2-bb7e8a8a449b-kube-api-access-p2w7k\") pod \"18af810d-9de4-4822-86d2-bb7e8a8a449b\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") "
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.114434 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/18af810d-9de4-4822-86d2-bb7e8a8a449b-rabbitmq-tls\") pod \"18af810d-9de4-4822-86d2-bb7e8a8a449b\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") "
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.114542 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18af810d-9de4-4822-86d2-bb7e8a8a449b-config-data\") pod \"18af810d-9de4-4822-86d2-bb7e8a8a449b\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") "
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.114587 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/18af810d-9de4-4822-86d2-bb7e8a8a449b-erlang-cookie-secret\") pod \"18af810d-9de4-4822-86d2-bb7e8a8a449b\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") "
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.114632 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/18af810d-9de4-4822-86d2-bb7e8a8a449b-rabbitmq-confd\") pod \"18af810d-9de4-4822-86d2-bb7e8a8a449b\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") "
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.114658 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/18af810d-9de4-4822-86d2-bb7e8a8a449b-rabbitmq-plugins\") pod \"18af810d-9de4-4822-86d2-bb7e8a8a449b\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") "
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.114706 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/18af810d-9de4-4822-86d2-bb7e8a8a449b-server-conf\") pod \"18af810d-9de4-4822-86d2-bb7e8a8a449b\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") "
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.115177 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18af810d-9de4-4822-86d2-bb7e8a8a449b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "18af810d-9de4-4822-86d2-bb7e8a8a449b" (UID: "18af810d-9de4-4822-86d2-bb7e8a8a449b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.115423 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46eee7bd-8293-4ccd-8ae2-716ad9fd8039\") pod \"18af810d-9de4-4822-86d2-bb7e8a8a449b\" (UID: \"18af810d-9de4-4822-86d2-bb7e8a8a449b\") "
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.116259 4830 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/18af810d-9de4-4822-86d2-bb7e8a8a449b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.116658 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18af810d-9de4-4822-86d2-bb7e8a8a449b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "18af810d-9de4-4822-86d2-bb7e8a8a449b" (UID: "18af810d-9de4-4822-86d2-bb7e8a8a449b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.118632 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18af810d-9de4-4822-86d2-bb7e8a8a449b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "18af810d-9de4-4822-86d2-bb7e8a8a449b" (UID: "18af810d-9de4-4822-86d2-bb7e8a8a449b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.126557 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18af810d-9de4-4822-86d2-bb7e8a8a449b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "18af810d-9de4-4822-86d2-bb7e8a8a449b" (UID: "18af810d-9de4-4822-86d2-bb7e8a8a449b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.129350 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18af810d-9de4-4822-86d2-bb7e8a8a449b-kube-api-access-p2w7k" (OuterVolumeSpecName: "kube-api-access-p2w7k") pod "18af810d-9de4-4822-86d2-bb7e8a8a449b" (UID: "18af810d-9de4-4822-86d2-bb7e8a8a449b"). InnerVolumeSpecName "kube-api-access-p2w7k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.132911 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18af810d-9de4-4822-86d2-bb7e8a8a449b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "18af810d-9de4-4822-86d2-bb7e8a8a449b" (UID: "18af810d-9de4-4822-86d2-bb7e8a8a449b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.190649 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/18af810d-9de4-4822-86d2-bb7e8a8a449b-pod-info" (OuterVolumeSpecName: "pod-info") pod "18af810d-9de4-4822-86d2-bb7e8a8a449b" (UID: "18af810d-9de4-4822-86d2-bb7e8a8a449b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.233102 4830 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/18af810d-9de4-4822-86d2-bb7e8a8a449b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.233614 4830 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/18af810d-9de4-4822-86d2-bb7e8a8a449b-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.233742 4830 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/18af810d-9de4-4822-86d2-bb7e8a8a449b-pod-info\") on node \"crc\" DevicePath \"\""
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.233837 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2w7k\" (UniqueName: \"kubernetes.io/projected/18af810d-9de4-4822-86d2-bb7e8a8a449b-kube-api-access-p2w7k\") on node \"crc\" DevicePath \"\""
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.233906 4830 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/18af810d-9de4-4822-86d2-bb7e8a8a449b-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.234379 4830 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/18af810d-9de4-4822-86d2-bb7e8a8a449b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.246651 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18af810d-9de4-4822-86d2-bb7e8a8a449b-config-data" (OuterVolumeSpecName: "config-data") pod "18af810d-9de4-4822-86d2-bb7e8a8a449b" (UID: "18af810d-9de4-4822-86d2-bb7e8a8a449b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.264668 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46eee7bd-8293-4ccd-8ae2-716ad9fd8039" (OuterVolumeSpecName: "persistence") pod "18af810d-9de4-4822-86d2-bb7e8a8a449b" (UID: "18af810d-9de4-4822-86d2-bb7e8a8a449b"). InnerVolumeSpecName "pvc-46eee7bd-8293-4ccd-8ae2-716ad9fd8039". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.339252 4830 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-46eee7bd-8293-4ccd-8ae2-716ad9fd8039\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46eee7bd-8293-4ccd-8ae2-716ad9fd8039\") on node \"crc\" "
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.339300 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18af810d-9de4-4822-86d2-bb7e8a8a449b-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.342115 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18af810d-9de4-4822-86d2-bb7e8a8a449b-server-conf" (OuterVolumeSpecName: "server-conf") pod "18af810d-9de4-4822-86d2-bb7e8a8a449b" (UID: "18af810d-9de4-4822-86d2-bb7e8a8a449b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.392116 4830 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.392330 4830 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-46eee7bd-8293-4ccd-8ae2-716ad9fd8039" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46eee7bd-8293-4ccd-8ae2-716ad9fd8039") on node "crc"
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.417767 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18af810d-9de4-4822-86d2-bb7e8a8a449b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "18af810d-9de4-4822-86d2-bb7e8a8a449b" (UID: "18af810d-9de4-4822-86d2-bb7e8a8a449b"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.442963 4830 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/18af810d-9de4-4822-86d2-bb7e8a8a449b-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.443017 4830 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/18af810d-9de4-4822-86d2-bb7e8a8a449b-server-conf\") on node \"crc\" DevicePath \"\""
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.443032 4830 reconciler_common.go:293] "Volume detached for volume \"pvc-46eee7bd-8293-4ccd-8ae2-716ad9fd8039\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46eee7bd-8293-4ccd-8ae2-716ad9fd8039\") on node \"crc\" DevicePath \"\""
Jan 31 09:29:36 crc kubenswrapper[4830]: E0131 09:29:36.808679 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Jan 31 09:29:36 crc kubenswrapper[4830]: E0131 09:29:36.808771 4830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Jan 31 09:29:36 crc kubenswrapper[4830]: E0131 09:29:36.808935 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66bl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-cblrm_openstack(35a75e79-079e-4905-9cc1-af2a81596943): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 31 09:29:36 crc kubenswrapper[4830]: E0131 09:29:36.810367 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-cblrm" podUID="35a75e79-079e-4905-9cc1-af2a81596943"
Jan 31 09:29:36 crc kubenswrapper[4830]: I0131 09:29:36.974568 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:36 crc kubenswrapper[4830]: E0131 09:29:36.981234 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-cblrm" podUID="35a75e79-079e-4905-9cc1-af2a81596943"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.148790 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.181295 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.210419 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 31 09:29:37 crc kubenswrapper[4830]: E0131 09:29:37.211077 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18af810d-9de4-4822-86d2-bb7e8a8a449b" containerName="setup-container"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.211095 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="18af810d-9de4-4822-86d2-bb7e8a8a449b" containerName="setup-container"
Jan 31 09:29:37 crc kubenswrapper[4830]: E0131 09:29:37.211108 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18af810d-9de4-4822-86d2-bb7e8a8a449b" containerName="rabbitmq"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.211116 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="18af810d-9de4-4822-86d2-bb7e8a8a449b" containerName="rabbitmq"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.212086 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="18af810d-9de4-4822-86d2-bb7e8a8a449b" containerName="rabbitmq"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.213670 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.218712 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.218996 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.219294 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.220095 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.220253 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.220415 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.220585 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-kqg76"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.253626 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.296806 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-46eee7bd-8293-4ccd-8ae2-716ad9fd8039\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46eee7bd-8293-4ccd-8ae2-716ad9fd8039\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.297362 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57wpq\" (UniqueName: \"kubernetes.io/projected/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-kube-api-access-57wpq\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.297518 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.297577 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.297608 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.297665 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.297755 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.297846 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.297897 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.297986 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.303019 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.405748 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57wpq\" (UniqueName: \"kubernetes.io/projected/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-kube-api-access-57wpq\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.405813 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-46eee7bd-8293-4ccd-8ae2-716ad9fd8039\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46eee7bd-8293-4ccd-8ae2-716ad9fd8039\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.405890 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.405926 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.405947 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.405975 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.406018 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.406563 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.406665 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.406933 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.406996 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.407360 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.407907 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.408497 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.409217 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.409335 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.410942 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.410984 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-46eee7bd-8293-4ccd-8ae2-716ad9fd8039\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46eee7bd-8293-4ccd-8ae2-716ad9fd8039\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/40d9c20e0fa8978e0eed904adc2a30fbad9b0eabe83eb0834e2e4c5212f639ff/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.415193 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.415495 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.417429 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.419054 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.425130 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57wpq\" (UniqueName: \"kubernetes.io/projected/a5a14eb0-7ed3-44fd-a1e2-f8d582a70062-kube-api-access-57wpq\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.473062 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-46eee7bd-8293-4ccd-8ae2-716ad9fd8039\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46eee7bd-8293-4ccd-8ae2-716ad9fd8039\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: W0131 09:29:37.495174 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod668f930d_5dba_4a2a_bd14_589620626682.slice/crio-bdc10cc246cda12110ce3667c3f9f3a55cae42cb701055072b1a2c6d57647e94 WatchSource:0}: Error finding container bdc10cc246cda12110ce3667c3f9f3a55cae42cb701055072b1a2c6d57647e94: Status 404 returned error can't find the container with id bdc10cc246cda12110ce3667c3f9f3a55cae42cb701055072b1a2c6d57647e94
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.544777 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.562016 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-bj49s"]
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.992544 4830 generic.go:334] "Generic (PLEG): container finished" podID="668f930d-5dba-4a2a-bd14-589620626682" containerID="fc068b35667994746db8cc763a15c7bf5c7586d7055fea0653d7e130fe96fc6c" exitCode=0
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.992600 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-bj49s" event={"ID":"668f930d-5dba-4a2a-bd14-589620626682","Type":"ContainerDied","Data":"fc068b35667994746db8cc763a15c7bf5c7586d7055fea0653d7e130fe96fc6c"}
Jan 31 09:29:37 crc kubenswrapper[4830]: I0131 09:29:37.992655 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-bj49s" event={"ID":"668f930d-5dba-4a2a-bd14-589620626682","Type":"ContainerStarted","Data":"bdc10cc246cda12110ce3667c3f9f3a55cae42cb701055072b1a2c6d57647e94"}
Jan 31 09:29:38 crc kubenswrapper[4830]: I0131 09:29:38.002487 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f97ee114-e2b6-423b-b30a-dd1e2ada3169","Type":"ContainerStarted","Data":"afc1b74c5c8fa0201698dfed640a50cf501d517b0ce366be67fe5ef196197022"}
Jan 31 09:29:38 crc kubenswrapper[4830]: I0131 09:29:38.002709 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f97ee114-e2b6-423b-b30a-dd1e2ada3169" containerName="ceilometer-central-agent" containerID="cri-o://26b4ec876b6a68130f28c036e4967e5620736cb2584439d013918c4468bcb419" gracePeriod=30
Jan 31 09:29:38 crc kubenswrapper[4830]: I0131 09:29:38.002846 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 31 09:29:38 crc kubenswrapper[4830]: I0131 09:29:38.002895 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f97ee114-e2b6-423b-b30a-dd1e2ada3169" containerName="proxy-httpd" containerID="cri-o://afc1b74c5c8fa0201698dfed640a50cf501d517b0ce366be67fe5ef196197022" gracePeriod=30
Jan 31 09:29:38 crc kubenswrapper[4830]: I0131 09:29:38.002939 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f97ee114-e2b6-423b-b30a-dd1e2ada3169" containerName="sg-core" containerID="cri-o://98b83b1eb53613cb65b57c9c1c64ccee4b22b7097155722808a03ef4c999bcb6" gracePeriod=30
Jan 31 09:29:38 crc kubenswrapper[4830]: I0131 09:29:38.002978 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f97ee114-e2b6-423b-b30a-dd1e2ada3169" containerName="ceilometer-notification-agent" containerID="cri-o://a3e04444760ffb271d3b136607990d7e060963fa82b0b06859901489ab2ab0e3" gracePeriod=30
Jan 31 09:29:38 crc kubenswrapper[4830]: I0131 09:29:38.063094 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.710567773 podStartE2EDuration="29.063070349s" podCreationTimestamp="2026-01-31 09:29:09 +0000 UTC" firstStartedPulling="2026-01-31 09:29:10.493968055 +0000 UTC m=+1694.987330497" lastFinishedPulling="2026-01-31 09:29:36.846470631 +0000 UTC m=+1721.339833073" observedRunningTime="2026-01-31 09:29:38.051456926 +0000 UTC m=+1722.544819368" watchObservedRunningTime="2026-01-31 09:29:38.063070349 +0000 UTC m=+1722.556432791"
Jan 31 09:29:38 crc kubenswrapper[4830]: I0131 09:29:38.117212 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 31 09:29:38 crc kubenswrapper[4830]: I0131 09:29:38.281198 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18af810d-9de4-4822-86d2-bb7e8a8a449b" path="/var/lib/kubelet/pods/18af810d-9de4-4822-86d2-bb7e8a8a449b/volumes"
Jan 31 09:29:39 crc kubenswrapper[4830]: I0131 09:29:39.019187 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-bj49s" event={"ID":"668f930d-5dba-4a2a-bd14-589620626682","Type":"ContainerStarted","Data":"edc5e1e9c372c5d152c384902e3449aee4a8d1132db441d4ff1442f24dfb8342"}
Jan 31 09:29:39 crc kubenswrapper[4830]: I0131 09:29:39.019527 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68df85789f-bj49s"
Jan 31 09:29:39 crc kubenswrapper[4830]: I0131 09:29:39.020893 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062","Type":"ContainerStarted","Data":"a313adca9e31f194a64c0db863d60f94f0aecbca68928e3ae9477430ef40e994"}
Jan 31 09:29:39 crc kubenswrapper[4830]: I0131 09:29:39.023794 4830 generic.go:334] "Generic (PLEG): container finished" podID="f97ee114-e2b6-423b-b30a-dd1e2ada3169" containerID="afc1b74c5c8fa0201698dfed640a50cf501d517b0ce366be67fe5ef196197022" exitCode=0
Jan 31 09:29:39 crc kubenswrapper[4830]: I0131 09:29:39.023828 4830 generic.go:334] "Generic (PLEG): container finished" podID="f97ee114-e2b6-423b-b30a-dd1e2ada3169" containerID="98b83b1eb53613cb65b57c9c1c64ccee4b22b7097155722808a03ef4c999bcb6" exitCode=2
Jan 31 09:29:39 crc kubenswrapper[4830]: I0131 09:29:39.023838 4830 generic.go:334] "Generic (PLEG): container finished" podID="f97ee114-e2b6-423b-b30a-dd1e2ada3169" containerID="26b4ec876b6a68130f28c036e4967e5620736cb2584439d013918c4468bcb419" exitCode=0
Jan 31 09:29:39 crc kubenswrapper[4830]: I0131 09:29:39.023864 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f97ee114-e2b6-423b-b30a-dd1e2ada3169","Type":"ContainerDied","Data":"afc1b74c5c8fa0201698dfed640a50cf501d517b0ce366be67fe5ef196197022"}
Jan 31 09:29:39 crc kubenswrapper[4830]: I0131 09:29:39.023981 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f97ee114-e2b6-423b-b30a-dd1e2ada3169","Type":"ContainerDied","Data":"98b83b1eb53613cb65b57c9c1c64ccee4b22b7097155722808a03ef4c999bcb6"}
Jan 31 09:29:39 crc kubenswrapper[4830]: I0131 09:29:39.023997 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f97ee114-e2b6-423b-b30a-dd1e2ada3169","Type":"ContainerDied","Data":"26b4ec876b6a68130f28c036e4967e5620736cb2584439d013918c4468bcb419"}
Jan 31 09:29:39 crc kubenswrapper[4830]: I0131 09:29:39.042075 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68df85789f-bj49s" podStartSLOduration=10.042052096 podStartE2EDuration="10.042052096s" podCreationTimestamp="2026-01-31 09:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:29:39.041269333 +0000 UTC m=+1723.534631785" watchObservedRunningTime="2026-01-31 09:29:39.042052096 +0000 UTC m=+1723.535414538"
Jan 31 09:29:39 crc kubenswrapper[4830]: I0131 09:29:39.252622 4830 scope.go:117] "RemoveContainer" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0"
Jan 31 09:29:39 crc kubenswrapper[4830]: E0131 09:29:39.253010 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:29:41 crc kubenswrapper[4830]: I0131 09:29:41.049855 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062","Type":"ContainerStarted","Data":"6f25216dcf8fe9092ff9750de186f48079e5e00afddb095ae784527c8c06f24a"}
Jan 31 09:29:42 crc kubenswrapper[4830]: I0131 09:29:42.901132 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.091993 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f97ee114-e2b6-423b-b30a-dd1e2ada3169-log-httpd\") pod \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") "
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.092185 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-sg-core-conf-yaml\") pod \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") "
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.092217 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-config-data\") pod \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") "
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.092275 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f97ee114-e2b6-423b-b30a-dd1e2ada3169-run-httpd\") pod \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") "
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.092395 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-combined-ca-bundle\") pod \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") "
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.092476 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxdjb\" (UniqueName: \"kubernetes.io/projected/f97ee114-e2b6-423b-b30a-dd1e2ada3169-kube-api-access-vxdjb\") pod \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") "
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.092624 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-scripts\") pod \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") "
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.092749 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-ceilometer-tls-certs\") pod \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\" (UID: \"f97ee114-e2b6-423b-b30a-dd1e2ada3169\") "
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.097992 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f97ee114-e2b6-423b-b30a-dd1e2ada3169-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f97ee114-e2b6-423b-b30a-dd1e2ada3169" (UID: "f97ee114-e2b6-423b-b30a-dd1e2ada3169"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.098390 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f97ee114-e2b6-423b-b30a-dd1e2ada3169-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f97ee114-e2b6-423b-b30a-dd1e2ada3169" (UID: "f97ee114-e2b6-423b-b30a-dd1e2ada3169"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.129270 4830 generic.go:334] "Generic (PLEG): container finished" podID="f97ee114-e2b6-423b-b30a-dd1e2ada3169" containerID="a3e04444760ffb271d3b136607990d7e060963fa82b0b06859901489ab2ab0e3" exitCode=0
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.129365 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f97ee114-e2b6-423b-b30a-dd1e2ada3169","Type":"ContainerDied","Data":"a3e04444760ffb271d3b136607990d7e060963fa82b0b06859901489ab2ab0e3"}
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.129417 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f97ee114-e2b6-423b-b30a-dd1e2ada3169","Type":"ContainerDied","Data":"37e0b212f05f847d594ff512e867406a2297e646fe74024431c8e9c385583b07"}
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.129442 4830 scope.go:117] "RemoveContainer" containerID="afc1b74c5c8fa0201698dfed640a50cf501d517b0ce366be67fe5ef196197022"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.129782 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.131053 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f97ee114-e2b6-423b-b30a-dd1e2ada3169-kube-api-access-vxdjb" (OuterVolumeSpecName: "kube-api-access-vxdjb") pod "f97ee114-e2b6-423b-b30a-dd1e2ada3169" (UID: "f97ee114-e2b6-423b-b30a-dd1e2ada3169"). InnerVolumeSpecName "kube-api-access-vxdjb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.132756 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-scripts" (OuterVolumeSpecName: "scripts") pod "f97ee114-e2b6-423b-b30a-dd1e2ada3169" (UID: "f97ee114-e2b6-423b-b30a-dd1e2ada3169"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.170796 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f97ee114-e2b6-423b-b30a-dd1e2ada3169" (UID: "f97ee114-e2b6-423b-b30a-dd1e2ada3169"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.198686 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxdjb\" (UniqueName: \"kubernetes.io/projected/f97ee114-e2b6-423b-b30a-dd1e2ada3169-kube-api-access-vxdjb\") on node \"crc\" DevicePath \"\""
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.199046 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.199060 4830 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f97ee114-e2b6-423b-b30a-dd1e2ada3169-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.199073 4830 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.199084 4830 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f97ee114-e2b6-423b-b30a-dd1e2ada3169-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.270036 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f97ee114-e2b6-423b-b30a-dd1e2ada3169" (UID: "f97ee114-e2b6-423b-b30a-dd1e2ada3169"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.292304 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f97ee114-e2b6-423b-b30a-dd1e2ada3169" (UID: "f97ee114-e2b6-423b-b30a-dd1e2ada3169"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.302002 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.302031 4830 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.322007 4830 scope.go:117] "RemoveContainer" containerID="98b83b1eb53613cb65b57c9c1c64ccee4b22b7097155722808a03ef4c999bcb6"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.333412 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-config-data" (OuterVolumeSpecName: "config-data") pod "f97ee114-e2b6-423b-b30a-dd1e2ada3169" (UID: "f97ee114-e2b6-423b-b30a-dd1e2ada3169"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.350759 4830 scope.go:117] "RemoveContainer" containerID="a3e04444760ffb271d3b136607990d7e060963fa82b0b06859901489ab2ab0e3"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.384916 4830 scope.go:117] "RemoveContainer" containerID="26b4ec876b6a68130f28c036e4967e5620736cb2584439d013918c4468bcb419"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.405369 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f97ee114-e2b6-423b-b30a-dd1e2ada3169-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.418630 4830 scope.go:117] "RemoveContainer" containerID="afc1b74c5c8fa0201698dfed640a50cf501d517b0ce366be67fe5ef196197022"
Jan 31 09:29:43 crc kubenswrapper[4830]: E0131 09:29:43.422621 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afc1b74c5c8fa0201698dfed640a50cf501d517b0ce366be67fe5ef196197022\": container with ID starting with afc1b74c5c8fa0201698dfed640a50cf501d517b0ce366be67fe5ef196197022 not found: ID does not exist" containerID="afc1b74c5c8fa0201698dfed640a50cf501d517b0ce366be67fe5ef196197022"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.422739 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afc1b74c5c8fa0201698dfed640a50cf501d517b0ce366be67fe5ef196197022"} err="failed to get container status \"afc1b74c5c8fa0201698dfed640a50cf501d517b0ce366be67fe5ef196197022\": rpc error: code = NotFound desc = could not find container \"afc1b74c5c8fa0201698dfed640a50cf501d517b0ce366be67fe5ef196197022\": container with ID starting with afc1b74c5c8fa0201698dfed640a50cf501d517b0ce366be67fe5ef196197022 not found: ID does not exist"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.422781 4830 scope.go:117] "RemoveContainer" containerID="98b83b1eb53613cb65b57c9c1c64ccee4b22b7097155722808a03ef4c999bcb6"
Jan 31 09:29:43 crc kubenswrapper[4830]: E0131 09:29:43.423314 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98b83b1eb53613cb65b57c9c1c64ccee4b22b7097155722808a03ef4c999bcb6\": container with ID starting with 98b83b1eb53613cb65b57c9c1c64ccee4b22b7097155722808a03ef4c999bcb6 not found: ID does not exist" containerID="98b83b1eb53613cb65b57c9c1c64ccee4b22b7097155722808a03ef4c999bcb6"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.423348 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98b83b1eb53613cb65b57c9c1c64ccee4b22b7097155722808a03ef4c999bcb6"} err="failed to get container status \"98b83b1eb53613cb65b57c9c1c64ccee4b22b7097155722808a03ef4c999bcb6\": rpc error: code = NotFound desc = could not find container \"98b83b1eb53613cb65b57c9c1c64ccee4b22b7097155722808a03ef4c999bcb6\": container with ID starting with 98b83b1eb53613cb65b57c9c1c64ccee4b22b7097155722808a03ef4c999bcb6 not found: ID does not exist"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.423365 4830 scope.go:117] "RemoveContainer" containerID="a3e04444760ffb271d3b136607990d7e060963fa82b0b06859901489ab2ab0e3"
Jan 31 09:29:43 crc kubenswrapper[4830]: E0131 09:29:43.423709 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3e04444760ffb271d3b136607990d7e060963fa82b0b06859901489ab2ab0e3\": container with ID starting with a3e04444760ffb271d3b136607990d7e060963fa82b0b06859901489ab2ab0e3 not found: ID does not exist" containerID="a3e04444760ffb271d3b136607990d7e060963fa82b0b06859901489ab2ab0e3"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.423759 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3e04444760ffb271d3b136607990d7e060963fa82b0b06859901489ab2ab0e3"} err="failed to get container status \"a3e04444760ffb271d3b136607990d7e060963fa82b0b06859901489ab2ab0e3\": rpc error: code = NotFound desc = could not find container \"a3e04444760ffb271d3b136607990d7e060963fa82b0b06859901489ab2ab0e3\": container with ID starting with a3e04444760ffb271d3b136607990d7e060963fa82b0b06859901489ab2ab0e3 not found: ID does not exist"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.423817 4830 scope.go:117] "RemoveContainer" containerID="26b4ec876b6a68130f28c036e4967e5620736cb2584439d013918c4468bcb419"
Jan 31 09:29:43 crc kubenswrapper[4830]: E0131 09:29:43.424132 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26b4ec876b6a68130f28c036e4967e5620736cb2584439d013918c4468bcb419\": container with ID starting with 26b4ec876b6a68130f28c036e4967e5620736cb2584439d013918c4468bcb419 not found: ID does not exist" containerID="26b4ec876b6a68130f28c036e4967e5620736cb2584439d013918c4468bcb419"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.424159 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26b4ec876b6a68130f28c036e4967e5620736cb2584439d013918c4468bcb419"} err="failed to get container status \"26b4ec876b6a68130f28c036e4967e5620736cb2584439d013918c4468bcb419\": rpc error: code = NotFound desc = could not find container \"26b4ec876b6a68130f28c036e4967e5620736cb2584439d013918c4468bcb419\": container with ID starting with 26b4ec876b6a68130f28c036e4967e5620736cb2584439d013918c4468bcb419 not found: ID does not exist"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.557236 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.570217 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.585039 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 31 09:29:43 crc kubenswrapper[4830]: E0131 09:29:43.585978 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f97ee114-e2b6-423b-b30a-dd1e2ada3169" containerName="sg-core"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.586007 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97ee114-e2b6-423b-b30a-dd1e2ada3169" containerName="sg-core"
Jan 31 09:29:43 crc kubenswrapper[4830]: E0131 09:29:43.586045 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f97ee114-e2b6-423b-b30a-dd1e2ada3169" containerName="proxy-httpd"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.586052 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97ee114-e2b6-423b-b30a-dd1e2ada3169" containerName="proxy-httpd"
Jan 31 09:29:43 crc kubenswrapper[4830]: E0131 09:29:43.586077 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f97ee114-e2b6-423b-b30a-dd1e2ada3169" containerName="ceilometer-central-agent"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.586085 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97ee114-e2b6-423b-b30a-dd1e2ada3169" containerName="ceilometer-central-agent"
Jan 31 09:29:43 crc kubenswrapper[4830]: E0131 09:29:43.586113 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f97ee114-e2b6-423b-b30a-dd1e2ada3169" containerName="ceilometer-notification-agent"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.586121 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97ee114-e2b6-423b-b30a-dd1e2ada3169" containerName="ceilometer-notification-agent"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.586466 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f97ee114-e2b6-423b-b30a-dd1e2ada3169" containerName="ceilometer-central-agent"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.586496 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f97ee114-e2b6-423b-b30a-dd1e2ada3169" containerName="proxy-httpd"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.586517 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f97ee114-e2b6-423b-b30a-dd1e2ada3169" containerName="ceilometer-notification-agent"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.586535 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f97ee114-e2b6-423b-b30a-dd1e2ada3169" containerName="sg-core"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.589774 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.592025 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.592285 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.596672 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.615220 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.712646 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2ea7efa-c50b-4208-a9df-2c3fc454762b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2ea7efa-c50b-4208-a9df-2c3fc454762b\") " pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.712801 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2ea7efa-c50b-4208-a9df-2c3fc454762b-config-data\") pod \"ceilometer-0\" (UID: \"f2ea7efa-c50b-4208-a9df-2c3fc454762b\") " pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.712869 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2ea7efa-c50b-4208-a9df-2c3fc454762b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f2ea7efa-c50b-4208-a9df-2c3fc454762b\") " pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.712899 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2ea7efa-c50b-4208-a9df-2c3fc454762b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2ea7efa-c50b-4208-a9df-2c3fc454762b\") " pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.713186 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2ea7efa-c50b-4208-a9df-2c3fc454762b-scripts\") pod \"ceilometer-0\" (UID: \"f2ea7efa-c50b-4208-a9df-2c3fc454762b\") " pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.713625 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kst8t\" (UniqueName: \"kubernetes.io/projected/f2ea7efa-c50b-4208-a9df-2c3fc454762b-kube-api-access-kst8t\") pod \"ceilometer-0\" (UID: \"f2ea7efa-c50b-4208-a9df-2c3fc454762b\") " pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.713702 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2ea7efa-c50b-4208-a9df-2c3fc454762b-log-httpd\") pod \"ceilometer-0\" (UID: \"f2ea7efa-c50b-4208-a9df-2c3fc454762b\") " pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.713890 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2ea7efa-c50b-4208-a9df-2c3fc454762b-run-httpd\") pod \"ceilometer-0\" (UID: \"f2ea7efa-c50b-4208-a9df-2c3fc454762b\") " pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.831528 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2ea7efa-c50b-4208-a9df-2c3fc454762b-config-data\") pod \"ceilometer-0\" (UID: \"f2ea7efa-c50b-4208-a9df-2c3fc454762b\") " pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.831813 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2ea7efa-c50b-4208-a9df-2c3fc454762b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f2ea7efa-c50b-4208-a9df-2c3fc454762b\") " pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.831887 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2ea7efa-c50b-4208-a9df-2c3fc454762b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2ea7efa-c50b-4208-a9df-2c3fc454762b\") " pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.831981 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2ea7efa-c50b-4208-a9df-2c3fc454762b-scripts\") pod \"ceilometer-0\" (UID: \"f2ea7efa-c50b-4208-a9df-2c3fc454762b\") " pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.832289 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kst8t\" (UniqueName: \"kubernetes.io/projected/f2ea7efa-c50b-4208-a9df-2c3fc454762b-kube-api-access-kst8t\") pod \"ceilometer-0\" (UID: \"f2ea7efa-c50b-4208-a9df-2c3fc454762b\") " pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.832338 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2ea7efa-c50b-4208-a9df-2c3fc454762b-log-httpd\") pod \"ceilometer-0\" (UID: \"f2ea7efa-c50b-4208-a9df-2c3fc454762b\") " pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.832488 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2ea7efa-c50b-4208-a9df-2c3fc454762b-run-httpd\") pod \"ceilometer-0\" (UID: \"f2ea7efa-c50b-4208-a9df-2c3fc454762b\") " pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.832671 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2ea7efa-c50b-4208-a9df-2c3fc454762b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2ea7efa-c50b-4208-a9df-2c3fc454762b\") " pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.833181 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2ea7efa-c50b-4208-a9df-2c3fc454762b-log-httpd\") pod \"ceilometer-0\" (UID: \"f2ea7efa-c50b-4208-a9df-2c3fc454762b\") " pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.833599 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2ea7efa-c50b-4208-a9df-2c3fc454762b-run-httpd\") pod \"ceilometer-0\" (UID: \"f2ea7efa-c50b-4208-a9df-2c3fc454762b\") " pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.837587 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2ea7efa-c50b-4208-a9df-2c3fc454762b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2ea7efa-c50b-4208-a9df-2c3fc454762b\") " pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.837602 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2ea7efa-c50b-4208-a9df-2c3fc454762b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f2ea7efa-c50b-4208-a9df-2c3fc454762b\") " pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.838178 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2ea7efa-c50b-4208-a9df-2c3fc454762b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2ea7efa-c50b-4208-a9df-2c3fc454762b\") " pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.839210 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2ea7efa-c50b-4208-a9df-2c3fc454762b-config-data\") pod \"ceilometer-0\" (UID: \"f2ea7efa-c50b-4208-a9df-2c3fc454762b\") " pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.839942 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2ea7efa-c50b-4208-a9df-2c3fc454762b-scripts\") pod \"ceilometer-0\" (UID: \"f2ea7efa-c50b-4208-a9df-2c3fc454762b\") " pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.862999 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kst8t\" (UniqueName: \"kubernetes.io/projected/f2ea7efa-c50b-4208-a9df-2c3fc454762b-kube-api-access-kst8t\") pod \"ceilometer-0\" (UID: \"f2ea7efa-c50b-4208-a9df-2c3fc454762b\") " pod="openstack/ceilometer-0"
Jan 31 09:29:43 crc kubenswrapper[4830]: I0131 09:29:43.909948 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 31 09:29:44 crc kubenswrapper[4830]: I0131 09:29:44.272499 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f97ee114-e2b6-423b-b30a-dd1e2ada3169" path="/var/lib/kubelet/pods/f97ee114-e2b6-423b-b30a-dd1e2ada3169/volumes"
Jan 31 09:29:44 crc kubenswrapper[4830]: I0131 09:29:44.302753 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 09:29:44 crc kubenswrapper[4830]: I0131 09:29:44.676904 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68df85789f-bj49s"
Jan 31 09:29:44 crc kubenswrapper[4830]: I0131 09:29:44.750931 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-ftd5m"]
Jan 31 09:29:44 crc kubenswrapper[4830]: I0131 09:29:44.751793 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m" podUID="455ee04e-f0d7-431d-8127-c66beff070e7" containerName="dnsmasq-dns" containerID="cri-o://f4b2dc73f28a2cbb1eb45d61dd7c88ea9cc144a52ff072abd4fe43468db98a87" gracePeriod=10
Jan 31 09:29:44 crc kubenswrapper[4830]: I0131 09:29:44.994787 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bb85b8995-fsxj6"]
Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.000845 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb85b8995-fsxj6"
Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.036964 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb85b8995-fsxj6"]
Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.098586 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/009bab2e-2d97-42e2-aa01-2a5e9d4c74c2-config\") pod \"dnsmasq-dns-bb85b8995-fsxj6\" (UID: \"009bab2e-2d97-42e2-aa01-2a5e9d4c74c2\") " pod="openstack/dnsmasq-dns-bb85b8995-fsxj6"
Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.098647 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/009bab2e-2d97-42e2-aa01-2a5e9d4c74c2-ovsdbserver-sb\") pod \"dnsmasq-dns-bb85b8995-fsxj6\" (UID: \"009bab2e-2d97-42e2-aa01-2a5e9d4c74c2\") " pod="openstack/dnsmasq-dns-bb85b8995-fsxj6"
Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.098674 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/009bab2e-2d97-42e2-aa01-2a5e9d4c74c2-dns-svc\") pod \"dnsmasq-dns-bb85b8995-fsxj6\" (UID: \"009bab2e-2d97-42e2-aa01-2a5e9d4c74c2\") " pod="openstack/dnsmasq-dns-bb85b8995-fsxj6"
Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.098767 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhjlr\" (UniqueName: \"kubernetes.io/projected/009bab2e-2d97-42e2-aa01-2a5e9d4c74c2-kube-api-access-rhjlr\") pod \"dnsmasq-dns-bb85b8995-fsxj6\" (UID: 
\"009bab2e-2d97-42e2-aa01-2a5e9d4c74c2\") " pod="openstack/dnsmasq-dns-bb85b8995-fsxj6" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.098871 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/009bab2e-2d97-42e2-aa01-2a5e9d4c74c2-openstack-edpm-ipam\") pod \"dnsmasq-dns-bb85b8995-fsxj6\" (UID: \"009bab2e-2d97-42e2-aa01-2a5e9d4c74c2\") " pod="openstack/dnsmasq-dns-bb85b8995-fsxj6" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.098905 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/009bab2e-2d97-42e2-aa01-2a5e9d4c74c2-ovsdbserver-nb\") pod \"dnsmasq-dns-bb85b8995-fsxj6\" (UID: \"009bab2e-2d97-42e2-aa01-2a5e9d4c74c2\") " pod="openstack/dnsmasq-dns-bb85b8995-fsxj6" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.098958 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/009bab2e-2d97-42e2-aa01-2a5e9d4c74c2-dns-swift-storage-0\") pod \"dnsmasq-dns-bb85b8995-fsxj6\" (UID: \"009bab2e-2d97-42e2-aa01-2a5e9d4c74c2\") " pod="openstack/dnsmasq-dns-bb85b8995-fsxj6" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.169090 4830 generic.go:334] "Generic (PLEG): container finished" podID="455ee04e-f0d7-431d-8127-c66beff070e7" containerID="f4b2dc73f28a2cbb1eb45d61dd7c88ea9cc144a52ff072abd4fe43468db98a87" exitCode=0 Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.169170 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m" event={"ID":"455ee04e-f0d7-431d-8127-c66beff070e7","Type":"ContainerDied","Data":"f4b2dc73f28a2cbb1eb45d61dd7c88ea9cc144a52ff072abd4fe43468db98a87"} Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.171083 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2ea7efa-c50b-4208-a9df-2c3fc454762b","Type":"ContainerStarted","Data":"ab8b10a68465574182346010619bc471cc557477ef54a6b36ffa997065d58324"} Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.202136 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/009bab2e-2d97-42e2-aa01-2a5e9d4c74c2-openstack-edpm-ipam\") pod \"dnsmasq-dns-bb85b8995-fsxj6\" (UID: \"009bab2e-2d97-42e2-aa01-2a5e9d4c74c2\") " pod="openstack/dnsmasq-dns-bb85b8995-fsxj6" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.202226 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/009bab2e-2d97-42e2-aa01-2a5e9d4c74c2-ovsdbserver-nb\") pod \"dnsmasq-dns-bb85b8995-fsxj6\" (UID: \"009bab2e-2d97-42e2-aa01-2a5e9d4c74c2\") " pod="openstack/dnsmasq-dns-bb85b8995-fsxj6" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.202287 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/009bab2e-2d97-42e2-aa01-2a5e9d4c74c2-dns-swift-storage-0\") pod \"dnsmasq-dns-bb85b8995-fsxj6\" (UID: \"009bab2e-2d97-42e2-aa01-2a5e9d4c74c2\") " pod="openstack/dnsmasq-dns-bb85b8995-fsxj6" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.202385 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/009bab2e-2d97-42e2-aa01-2a5e9d4c74c2-config\") pod \"dnsmasq-dns-bb85b8995-fsxj6\" (UID: \"009bab2e-2d97-42e2-aa01-2a5e9d4c74c2\") " pod="openstack/dnsmasq-dns-bb85b8995-fsxj6" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.202429 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/009bab2e-2d97-42e2-aa01-2a5e9d4c74c2-ovsdbserver-sb\") pod \"dnsmasq-dns-bb85b8995-fsxj6\" (UID: \"009bab2e-2d97-42e2-aa01-2a5e9d4c74c2\") " pod="openstack/dnsmasq-dns-bb85b8995-fsxj6" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.202459 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/009bab2e-2d97-42e2-aa01-2a5e9d4c74c2-dns-svc\") pod \"dnsmasq-dns-bb85b8995-fsxj6\" (UID: \"009bab2e-2d97-42e2-aa01-2a5e9d4c74c2\") " pod="openstack/dnsmasq-dns-bb85b8995-fsxj6" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.202569 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhjlr\" (UniqueName: \"kubernetes.io/projected/009bab2e-2d97-42e2-aa01-2a5e9d4c74c2-kube-api-access-rhjlr\") pod \"dnsmasq-dns-bb85b8995-fsxj6\" (UID: \"009bab2e-2d97-42e2-aa01-2a5e9d4c74c2\") " pod="openstack/dnsmasq-dns-bb85b8995-fsxj6" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.203311 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/009bab2e-2d97-42e2-aa01-2a5e9d4c74c2-openstack-edpm-ipam\") pod \"dnsmasq-dns-bb85b8995-fsxj6\" (UID: \"009bab2e-2d97-42e2-aa01-2a5e9d4c74c2\") " pod="openstack/dnsmasq-dns-bb85b8995-fsxj6" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.203995 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/009bab2e-2d97-42e2-aa01-2a5e9d4c74c2-ovsdbserver-nb\") pod \"dnsmasq-dns-bb85b8995-fsxj6\" (UID: \"009bab2e-2d97-42e2-aa01-2a5e9d4c74c2\") " pod="openstack/dnsmasq-dns-bb85b8995-fsxj6" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.204133 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/009bab2e-2d97-42e2-aa01-2a5e9d4c74c2-config\") pod \"dnsmasq-dns-bb85b8995-fsxj6\" (UID: \"009bab2e-2d97-42e2-aa01-2a5e9d4c74c2\") " pod="openstack/dnsmasq-dns-bb85b8995-fsxj6" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.204737 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/009bab2e-2d97-42e2-aa01-2a5e9d4c74c2-ovsdbserver-sb\") pod \"dnsmasq-dns-bb85b8995-fsxj6\" (UID: \"009bab2e-2d97-42e2-aa01-2a5e9d4c74c2\") " pod="openstack/dnsmasq-dns-bb85b8995-fsxj6" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.204938 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/009bab2e-2d97-42e2-aa01-2a5e9d4c74c2-dns-swift-storage-0\") pod \"dnsmasq-dns-bb85b8995-fsxj6\" (UID: \"009bab2e-2d97-42e2-aa01-2a5e9d4c74c2\") " pod="openstack/dnsmasq-dns-bb85b8995-fsxj6" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.205342 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/009bab2e-2d97-42e2-aa01-2a5e9d4c74c2-dns-svc\") pod \"dnsmasq-dns-bb85b8995-fsxj6\" (UID: 
\"009bab2e-2d97-42e2-aa01-2a5e9d4c74c2\") " pod="openstack/dnsmasq-dns-bb85b8995-fsxj6" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.245268 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhjlr\" (UniqueName: \"kubernetes.io/projected/009bab2e-2d97-42e2-aa01-2a5e9d4c74c2-kube-api-access-rhjlr\") pod \"dnsmasq-dns-bb85b8995-fsxj6\" (UID: \"009bab2e-2d97-42e2-aa01-2a5e9d4c74c2\") " pod="openstack/dnsmasq-dns-bb85b8995-fsxj6" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.357860 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb85b8995-fsxj6" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.610008 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.742805 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-config\") pod \"455ee04e-f0d7-431d-8127-c66beff070e7\" (UID: \"455ee04e-f0d7-431d-8127-c66beff070e7\") " Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.743333 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-dns-svc\") pod \"455ee04e-f0d7-431d-8127-c66beff070e7\" (UID: \"455ee04e-f0d7-431d-8127-c66beff070e7\") " Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.743373 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-dns-swift-storage-0\") pod \"455ee04e-f0d7-431d-8127-c66beff070e7\" (UID: \"455ee04e-f0d7-431d-8127-c66beff070e7\") " Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.743495 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-ovsdbserver-nb\") pod \"455ee04e-f0d7-431d-8127-c66beff070e7\" (UID: \"455ee04e-f0d7-431d-8127-c66beff070e7\") " Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.743566 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-ovsdbserver-sb\") pod \"455ee04e-f0d7-431d-8127-c66beff070e7\" (UID: \"455ee04e-f0d7-431d-8127-c66beff070e7\") " Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.743676 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6chl\" (UniqueName: \"kubernetes.io/projected/455ee04e-f0d7-431d-8127-c66beff070e7-kube-api-access-b6chl\") pod \"455ee04e-f0d7-431d-8127-c66beff070e7\" (UID: \"455ee04e-f0d7-431d-8127-c66beff070e7\") " Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.790654 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/455ee04e-f0d7-431d-8127-c66beff070e7-kube-api-access-b6chl" (OuterVolumeSpecName: "kube-api-access-b6chl") pod "455ee04e-f0d7-431d-8127-c66beff070e7" (UID: "455ee04e-f0d7-431d-8127-c66beff070e7"). InnerVolumeSpecName "kube-api-access-b6chl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.850325 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6chl\" (UniqueName: \"kubernetes.io/projected/455ee04e-f0d7-431d-8127-c66beff070e7-kube-api-access-b6chl\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.872466 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-config" (OuterVolumeSpecName: "config") pod "455ee04e-f0d7-431d-8127-c66beff070e7" (UID: "455ee04e-f0d7-431d-8127-c66beff070e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.896411 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "455ee04e-f0d7-431d-8127-c66beff070e7" (UID: "455ee04e-f0d7-431d-8127-c66beff070e7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.910587 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "455ee04e-f0d7-431d-8127-c66beff070e7" (UID: "455ee04e-f0d7-431d-8127-c66beff070e7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.927015 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "455ee04e-f0d7-431d-8127-c66beff070e7" (UID: "455ee04e-f0d7-431d-8127-c66beff070e7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.943899 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "455ee04e-f0d7-431d-8127-c66beff070e7" (UID: "455ee04e-f0d7-431d-8127-c66beff070e7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.963608 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.963657 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.963669 4830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.963681 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:45 crc kubenswrapper[4830]: I0131 09:29:45.963693 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/455ee04e-f0d7-431d-8127-c66beff070e7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:46 crc kubenswrapper[4830]: I0131 09:29:46.009520 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb85b8995-fsxj6"] Jan 31 09:29:46 crc kubenswrapper[4830]: W0131 09:29:46.011847 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod009bab2e_2d97_42e2_aa01_2a5e9d4c74c2.slice/crio-1e52be1f27633c1aca08d86f03bbb86f63c59033499504a3897beb001f1074f6 WatchSource:0}: Error finding container 1e52be1f27633c1aca08d86f03bbb86f63c59033499504a3897beb001f1074f6: Status 404 returned error can't find the container with id 1e52be1f27633c1aca08d86f03bbb86f63c59033499504a3897beb001f1074f6 Jan 31 09:29:46 crc kubenswrapper[4830]: I0131 09:29:46.193891 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m" Jan 31 09:29:46 crc kubenswrapper[4830]: I0131 09:29:46.193869 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-ftd5m" event={"ID":"455ee04e-f0d7-431d-8127-c66beff070e7","Type":"ContainerDied","Data":"4c5839680f7a531830cf42ebe050cb9810070d19415d284f652b554a4d1c077c"} Jan 31 09:29:46 crc kubenswrapper[4830]: I0131 09:29:46.194493 4830 scope.go:117] "RemoveContainer" containerID="f4b2dc73f28a2cbb1eb45d61dd7c88ea9cc144a52ff072abd4fe43468db98a87" Jan 31 09:29:46 crc kubenswrapper[4830]: I0131 09:29:46.196325 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb85b8995-fsxj6" event={"ID":"009bab2e-2d97-42e2-aa01-2a5e9d4c74c2","Type":"ContainerStarted","Data":"1e52be1f27633c1aca08d86f03bbb86f63c59033499504a3897beb001f1074f6"} Jan 31 09:29:46 crc kubenswrapper[4830]: I0131 09:29:46.302733 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-ftd5m"] Jan 31 09:29:46 crc kubenswrapper[4830]: I0131 09:29:46.304034 4830 scope.go:117] "RemoveContainer" containerID="c82d00e226cf94442d0cd376d30070a03ec583990fada5821f8ef972eebd126d" Jan 31 09:29:46 crc kubenswrapper[4830]: I0131 09:29:46.309014 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-ftd5m"] Jan 31 09:29:47 crc kubenswrapper[4830]: I0131 09:29:47.211986 4830 generic.go:334] "Generic (PLEG): container finished" podID="009bab2e-2d97-42e2-aa01-2a5e9d4c74c2" containerID="9f7c9e4223d47698711da81ec93c721234d049eed6205a940cf28f4a8d098106" exitCode=0 Jan 31 09:29:47 crc kubenswrapper[4830]: I0131 09:29:47.212061 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb85b8995-fsxj6" event={"ID":"009bab2e-2d97-42e2-aa01-2a5e9d4c74c2","Type":"ContainerDied","Data":"9f7c9e4223d47698711da81ec93c721234d049eed6205a940cf28f4a8d098106"} Jan 31 09:29:48 crc kubenswrapper[4830]: I0131 09:29:48.289793 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="455ee04e-f0d7-431d-8127-c66beff070e7" path="/var/lib/kubelet/pods/455ee04e-f0d7-431d-8127-c66beff070e7/volumes" Jan 31 09:29:50 crc kubenswrapper[4830]: I0131 09:29:50.274119 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb85b8995-fsxj6" event={"ID":"009bab2e-2d97-42e2-aa01-2a5e9d4c74c2","Type":"ContainerStarted","Data":"f3dffdad8630a99720311a60709e5c71966dfbc054c2db0bba42e0fd509baba8"} Jan 31 09:29:50 crc kubenswrapper[4830]: I0131 09:29:50.274857 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bb85b8995-fsxj6" Jan 31 09:29:50 crc kubenswrapper[4830]: I0131 09:29:50.279349 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2ea7efa-c50b-4208-a9df-2c3fc454762b","Type":"ContainerStarted","Data":"7438470b0ded09ffc16921538313fba4d8d5737ade46eb0d1751c36880d19f27"} Jan 31 09:29:50 crc kubenswrapper[4830]: I0131 09:29:50.320048 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bb85b8995-fsxj6" podStartSLOduration=6.320020312 podStartE2EDuration="6.320020312s" podCreationTimestamp="2026-01-31 09:29:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:29:50.314958867 +0000 UTC m=+1734.808321309" watchObservedRunningTime="2026-01-31 09:29:50.320020312 +0000 UTC 
m=+1734.813382754" Jan 31 09:29:51 crc kubenswrapper[4830]: I0131 09:29:51.252757 4830 scope.go:117] "RemoveContainer" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0" Jan 31 09:29:51 crc kubenswrapper[4830]: E0131 09:29:51.253445 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:29:51 crc kubenswrapper[4830]: I0131 09:29:51.297943 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2ea7efa-c50b-4208-a9df-2c3fc454762b","Type":"ContainerStarted","Data":"549dbd62f746a4094367d631ec0c0057f00d1dd7404743acd2e5584743dc9331"} Jan 31 09:29:52 crc kubenswrapper[4830]: I0131 09:29:52.311660 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2ea7efa-c50b-4208-a9df-2c3fc454762b","Type":"ContainerStarted","Data":"0741fc6d486f768e01a7c384fa77365d6b23e95baf017cd60a917fa031ece659"} Jan 31 09:29:53 crc kubenswrapper[4830]: I0131 09:29:53.348434 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-cblrm" event={"ID":"35a75e79-079e-4905-9cc1-af2a81596943","Type":"ContainerStarted","Data":"6cfb76a0d3624c566f680f2bb724f6347824ed753c153f5b3beaf28afa3e5a4a"} Jan 31 09:29:53 crc kubenswrapper[4830]: I0131 09:29:53.370180 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-cblrm" podStartSLOduration=2.6188476290000002 podStartE2EDuration="45.370155073s" podCreationTimestamp="2026-01-31 09:29:08 +0000 UTC" firstStartedPulling="2026-01-31 09:29:09.695314729 +0000 UTC m=+1694.188677171" lastFinishedPulling="2026-01-31 09:29:52.446622173 +0000 UTC m=+1736.939984615" observedRunningTime="2026-01-31 09:29:53.365815449 +0000 UTC m=+1737.859177901" watchObservedRunningTime="2026-01-31 09:29:53.370155073 +0000 UTC m=+1737.863517505" Jan 31 09:29:55 crc kubenswrapper[4830]: I0131 09:29:55.361199 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bb85b8995-fsxj6" Jan 31 09:29:55 crc kubenswrapper[4830]: I0131 09:29:55.408411 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2ea7efa-c50b-4208-a9df-2c3fc454762b","Type":"ContainerStarted","Data":"9c17bf784bfd94f9992a3297403ab4ffe65af222572d202fda396e13958f4473"} Jan 31 09:29:55 crc kubenswrapper[4830]: I0131 09:29:55.408866 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 09:29:55 crc kubenswrapper[4830]: I0131 09:29:55.486262 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-bj49s"] Jan 31 09:29:55 crc kubenswrapper[4830]: I0131 09:29:55.486908 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68df85789f-bj49s" podUID="668f930d-5dba-4a2a-bd14-589620626682" containerName="dnsmasq-dns" containerID="cri-o://edc5e1e9c372c5d152c384902e3449aee4a8d1132db441d4ff1442f24dfb8342" gracePeriod=10 Jan 31 09:29:55 crc kubenswrapper[4830]: I0131 09:29:55.503931 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=2.6679035559999997 podStartE2EDuration="12.503901997s" podCreationTimestamp="2026-01-31 09:29:43 +0000 UTC" firstStartedPulling="2026-01-31 09:29:44.303954502 +0000 UTC m=+1728.797316944" lastFinishedPulling="2026-01-31 09:29:54.139952943 +0000 UTC m=+1738.633315385" observedRunningTime="2026-01-31 09:29:55.448360707 +0000 UTC m=+1739.941723139" watchObservedRunningTime="2026-01-31 09:29:55.503901997 +0000 UTC m=+1739.997264439" Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.316225 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68df85789f-bj49s" Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.421885 4830 generic.go:334] "Generic (PLEG): container finished" podID="668f930d-5dba-4a2a-bd14-589620626682" containerID="edc5e1e9c372c5d152c384902e3449aee4a8d1132db441d4ff1442f24dfb8342" exitCode=0 Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.421956 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-bj49s" event={"ID":"668f930d-5dba-4a2a-bd14-589620626682","Type":"ContainerDied","Data":"edc5e1e9c372c5d152c384902e3449aee4a8d1132db441d4ff1442f24dfb8342"} Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.421974 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68df85789f-bj49s" Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.422007 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-bj49s" event={"ID":"668f930d-5dba-4a2a-bd14-589620626682","Type":"ContainerDied","Data":"bdc10cc246cda12110ce3667c3f9f3a55cae42cb701055072b1a2c6d57647e94"} Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.422027 4830 scope.go:117] "RemoveContainer" containerID="edc5e1e9c372c5d152c384902e3449aee4a8d1132db441d4ff1442f24dfb8342" Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.467817 4830 scope.go:117] "RemoveContainer" containerID="fc068b35667994746db8cc763a15c7bf5c7586d7055fea0653d7e130fe96fc6c" Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.506345 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-ovsdbserver-nb\") pod \"668f930d-5dba-4a2a-bd14-589620626682\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.506467 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-ovsdbserver-sb\") pod \"668f930d-5dba-4a2a-bd14-589620626682\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.506534 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-openstack-edpm-ipam\") pod \"668f930d-5dba-4a2a-bd14-589620626682\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.506636 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-config\") pod \"668f930d-5dba-4a2a-bd14-589620626682\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.506653 4830 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-dns-swift-storage-0\") pod \"668f930d-5dba-4a2a-bd14-589620626682\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.506754 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-dns-svc\") pod \"668f930d-5dba-4a2a-bd14-589620626682\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.506789 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw5rp\" (UniqueName: \"kubernetes.io/projected/668f930d-5dba-4a2a-bd14-589620626682-kube-api-access-dw5rp\") pod \"668f930d-5dba-4a2a-bd14-589620626682\" (UID: \"668f930d-5dba-4a2a-bd14-589620626682\") " Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.525233 4830 scope.go:117] "RemoveContainer" containerID="edc5e1e9c372c5d152c384902e3449aee4a8d1132db441d4ff1442f24dfb8342" Jan 31 09:29:56 crc kubenswrapper[4830]: E0131 09:29:56.526293 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edc5e1e9c372c5d152c384902e3449aee4a8d1132db441d4ff1442f24dfb8342\": container with ID starting with edc5e1e9c372c5d152c384902e3449aee4a8d1132db441d4ff1442f24dfb8342 not found: ID does not exist" containerID="edc5e1e9c372c5d152c384902e3449aee4a8d1132db441d4ff1442f24dfb8342" Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.526356 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edc5e1e9c372c5d152c384902e3449aee4a8d1132db441d4ff1442f24dfb8342"} err="failed to get container status \"edc5e1e9c372c5d152c384902e3449aee4a8d1132db441d4ff1442f24dfb8342\": rpc error: code = NotFound desc = could not find container \"edc5e1e9c372c5d152c384902e3449aee4a8d1132db441d4ff1442f24dfb8342\": container with ID starting with edc5e1e9c372c5d152c384902e3449aee4a8d1132db441d4ff1442f24dfb8342 not found: ID does not exist" Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.526448 4830 scope.go:117] "RemoveContainer" containerID="fc068b35667994746db8cc763a15c7bf5c7586d7055fea0653d7e130fe96fc6c" Jan 31 09:29:56 crc kubenswrapper[4830]: E0131 09:29:56.527123 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc068b35667994746db8cc763a15c7bf5c7586d7055fea0653d7e130fe96fc6c\": container with ID starting with fc068b35667994746db8cc763a15c7bf5c7586d7055fea0653d7e130fe96fc6c not found: ID does not exist" containerID="fc068b35667994746db8cc763a15c7bf5c7586d7055fea0653d7e130fe96fc6c" Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.527148 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc068b35667994746db8cc763a15c7bf5c7586d7055fea0653d7e130fe96fc6c"} err="failed to get container status \"fc068b35667994746db8cc763a15c7bf5c7586d7055fea0653d7e130fe96fc6c\": rpc error: code = NotFound desc = could not find container \"fc068b35667994746db8cc763a15c7bf5c7586d7055fea0653d7e130fe96fc6c\": container with ID starting with fc068b35667994746db8cc763a15c7bf5c7586d7055fea0653d7e130fe96fc6c not found: ID does not exist" Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.533452 4830 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/668f930d-5dba-4a2a-bd14-589620626682-kube-api-access-dw5rp" (OuterVolumeSpecName: "kube-api-access-dw5rp") pod "668f930d-5dba-4a2a-bd14-589620626682" (UID: "668f930d-5dba-4a2a-bd14-589620626682"). InnerVolumeSpecName "kube-api-access-dw5rp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.610507 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw5rp\" (UniqueName: \"kubernetes.io/projected/668f930d-5dba-4a2a-bd14-589620626682-kube-api-access-dw5rp\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.642125 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "668f930d-5dba-4a2a-bd14-589620626682" (UID: "668f930d-5dba-4a2a-bd14-589620626682"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.653808 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "668f930d-5dba-4a2a-bd14-589620626682" (UID: "668f930d-5dba-4a2a-bd14-589620626682"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.659804 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "668f930d-5dba-4a2a-bd14-589620626682" (UID: "668f930d-5dba-4a2a-bd14-589620626682"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.659932 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "668f930d-5dba-4a2a-bd14-589620626682" (UID: "668f930d-5dba-4a2a-bd14-589620626682"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.665921 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "668f930d-5dba-4a2a-bd14-589620626682" (UID: "668f930d-5dba-4a2a-bd14-589620626682"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.676235 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-config" (OuterVolumeSpecName: "config") pod "668f930d-5dba-4a2a-bd14-589620626682" (UID: "668f930d-5dba-4a2a-bd14-589620626682"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.713692 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.713751 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.713767 4830 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.713779 4830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.713792 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.713804 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/668f930d-5dba-4a2a-bd14-589620626682-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.859055 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-bj49s"] Jan 31 09:29:56 crc kubenswrapper[4830]: I0131 09:29:56.878598 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-bj49s"] Jan 31 09:29:57 crc kubenswrapper[4830]: I0131 09:29:57.436572 4830 generic.go:334] "Generic (PLEG): container finished" podID="35a75e79-079e-4905-9cc1-af2a81596943" containerID="6cfb76a0d3624c566f680f2bb724f6347824ed753c153f5b3beaf28afa3e5a4a" exitCode=0 Jan 31 09:29:57 crc kubenswrapper[4830]: I0131 09:29:57.436931 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-cblrm" event={"ID":"35a75e79-079e-4905-9cc1-af2a81596943","Type":"ContainerDied","Data":"6cfb76a0d3624c566f680f2bb724f6347824ed753c153f5b3beaf28afa3e5a4a"} Jan 31 09:29:58 crc kubenswrapper[4830]: I0131 09:29:58.267473 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="668f930d-5dba-4a2a-bd14-589620626682" path="/var/lib/kubelet/pods/668f930d-5dba-4a2a-bd14-589620626682/volumes" Jan 31 09:29:58 crc kubenswrapper[4830]: I0131 09:29:58.958254 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-cblrm" Jan 31 09:29:59 crc kubenswrapper[4830]: I0131 09:29:59.100742 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a75e79-079e-4905-9cc1-af2a81596943-config-data\") pod \"35a75e79-079e-4905-9cc1-af2a81596943\" (UID: \"35a75e79-079e-4905-9cc1-af2a81596943\") " Jan 31 09:29:59 crc kubenswrapper[4830]: I0131 09:29:59.101181 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a75e79-079e-4905-9cc1-af2a81596943-combined-ca-bundle\") pod \"35a75e79-079e-4905-9cc1-af2a81596943\" (UID: \"35a75e79-079e-4905-9cc1-af2a81596943\") " Jan 31 09:29:59 crc kubenswrapper[4830]: I0131 09:29:59.101340 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66bl5\" (UniqueName: \"kubernetes.io/projected/35a75e79-079e-4905-9cc1-af2a81596943-kube-api-access-66bl5\") pod \"35a75e79-079e-4905-9cc1-af2a81596943\" (UID: \"35a75e79-079e-4905-9cc1-af2a81596943\") " Jan 31 09:29:59 crc kubenswrapper[4830]: I0131 09:29:59.122999 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35a75e79-079e-4905-9cc1-af2a81596943-kube-api-access-66bl5" (OuterVolumeSpecName: "kube-api-access-66bl5") pod "35a75e79-079e-4905-9cc1-af2a81596943" (UID: "35a75e79-079e-4905-9cc1-af2a81596943"). InnerVolumeSpecName "kube-api-access-66bl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:29:59 crc kubenswrapper[4830]: I0131 09:29:59.201027 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a75e79-079e-4905-9cc1-af2a81596943-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35a75e79-079e-4905-9cc1-af2a81596943" (UID: "35a75e79-079e-4905-9cc1-af2a81596943"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:29:59 crc kubenswrapper[4830]: I0131 09:29:59.205551 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66bl5\" (UniqueName: \"kubernetes.io/projected/35a75e79-079e-4905-9cc1-af2a81596943-kube-api-access-66bl5\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:59 crc kubenswrapper[4830]: I0131 09:29:59.205619 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a75e79-079e-4905-9cc1-af2a81596943-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:59 crc kubenswrapper[4830]: I0131 09:29:59.337867 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a75e79-079e-4905-9cc1-af2a81596943-config-data" (OuterVolumeSpecName: "config-data") pod "35a75e79-079e-4905-9cc1-af2a81596943" (UID: "35a75e79-079e-4905-9cc1-af2a81596943"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:29:59 crc kubenswrapper[4830]: I0131 09:29:59.413182 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a75e79-079e-4905-9cc1-af2a81596943-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:29:59 crc kubenswrapper[4830]: I0131 09:29:59.487911 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-cblrm" event={"ID":"35a75e79-079e-4905-9cc1-af2a81596943","Type":"ContainerDied","Data":"d32c8faf232cd85e78e41e2f364a6a75012625faab7c79fcdce63446453848a9"} Jan 31 09:29:59 crc kubenswrapper[4830]: I0131 09:29:59.487962 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d32c8faf232cd85e78e41e2f364a6a75012625faab7c79fcdce63446453848a9" Jan 31 09:29:59 crc kubenswrapper[4830]: I0131 09:29:59.488019 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-cblrm" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.166337 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497530-xhdxr"] Jan 31 09:30:00 crc kubenswrapper[4830]: E0131 09:30:00.167755 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a75e79-079e-4905-9cc1-af2a81596943" containerName="heat-db-sync" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.167776 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a75e79-079e-4905-9cc1-af2a81596943" containerName="heat-db-sync" Jan 31 09:30:00 crc kubenswrapper[4830]: E0131 09:30:00.167815 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="455ee04e-f0d7-431d-8127-c66beff070e7" containerName="init" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.167827 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="455ee04e-f0d7-431d-8127-c66beff070e7" containerName="init" Jan 31 09:30:00 crc kubenswrapper[4830]: E0131 09:30:00.167848 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="455ee04e-f0d7-431d-8127-c66beff070e7" containerName="dnsmasq-dns" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.167856 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="455ee04e-f0d7-431d-8127-c66beff070e7" containerName="dnsmasq-dns" Jan 31 09:30:00 crc kubenswrapper[4830]: E0131 09:30:00.167866 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668f930d-5dba-4a2a-bd14-589620626682" containerName="init" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.167873 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="668f930d-5dba-4a2a-bd14-589620626682" containerName="init" Jan 31 09:30:00 crc kubenswrapper[4830]: E0131 09:30:00.167892 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668f930d-5dba-4a2a-bd14-589620626682" containerName="dnsmasq-dns" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.167908 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="668f930d-5dba-4a2a-bd14-589620626682" containerName="dnsmasq-dns" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.168223 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="35a75e79-079e-4905-9cc1-af2a81596943" containerName="heat-db-sync" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.168255 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="668f930d-5dba-4a2a-bd14-589620626682" containerName="dnsmasq-dns" Jan 31 09:30:00 crc 
kubenswrapper[4830]: I0131 09:30:00.168272 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="455ee04e-f0d7-431d-8127-c66beff070e7" containerName="dnsmasq-dns" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.169532 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-xhdxr" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.173659 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.174714 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.186296 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497530-xhdxr"] Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.341050 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6a570c9-bd20-4f17-b62f-15eae189fedc-config-volume\") pod \"collect-profiles-29497530-xhdxr\" (UID: \"e6a570c9-bd20-4f17-b62f-15eae189fedc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-xhdxr" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.341207 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e6a570c9-bd20-4f17-b62f-15eae189fedc-secret-volume\") pod \"collect-profiles-29497530-xhdxr\" (UID: \"e6a570c9-bd20-4f17-b62f-15eae189fedc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-xhdxr" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.341857 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4zhf\" (UniqueName: \"kubernetes.io/projected/e6a570c9-bd20-4f17-b62f-15eae189fedc-kube-api-access-g4zhf\") pod \"collect-profiles-29497530-xhdxr\" (UID: \"e6a570c9-bd20-4f17-b62f-15eae189fedc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-xhdxr" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.444772 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e6a570c9-bd20-4f17-b62f-15eae189fedc-secret-volume\") pod \"collect-profiles-29497530-xhdxr\" (UID: \"e6a570c9-bd20-4f17-b62f-15eae189fedc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-xhdxr" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.445522 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4zhf\" (UniqueName: \"kubernetes.io/projected/e6a570c9-bd20-4f17-b62f-15eae189fedc-kube-api-access-g4zhf\") pod \"collect-profiles-29497530-xhdxr\" (UID: \"e6a570c9-bd20-4f17-b62f-15eae189fedc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-xhdxr" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.446416 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6a570c9-bd20-4f17-b62f-15eae189fedc-config-volume\") pod \"collect-profiles-29497530-xhdxr\" (UID: \"e6a570c9-bd20-4f17-b62f-15eae189fedc\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-xhdxr" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.447558 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6a570c9-bd20-4f17-b62f-15eae189fedc-config-volume\") pod \"collect-profiles-29497530-xhdxr\" (UID: \"e6a570c9-bd20-4f17-b62f-15eae189fedc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-xhdxr" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.458950 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e6a570c9-bd20-4f17-b62f-15eae189fedc-secret-volume\") pod \"collect-profiles-29497530-xhdxr\" (UID: \"e6a570c9-bd20-4f17-b62f-15eae189fedc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-xhdxr" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.469613 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4zhf\" (UniqueName: \"kubernetes.io/projected/e6a570c9-bd20-4f17-b62f-15eae189fedc-kube-api-access-g4zhf\") pod \"collect-profiles-29497530-xhdxr\" (UID: \"e6a570c9-bd20-4f17-b62f-15eae189fedc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-xhdxr" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.501996 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-xhdxr" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.799974 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-88757d59b-r55jf"] Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.804259 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-88757d59b-r55jf" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.869031 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-88757d59b-r55jf"] Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.905244 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5677f68f94-9mmb8"] Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.907573 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5677f68f94-9mmb8" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.969855 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d4efcc1-d98d-466c-a7ee-6a6aa3766681-config-data\") pod \"heat-engine-88757d59b-r55jf\" (UID: \"3d4efcc1-d98d-466c-a7ee-6a6aa3766681\") " pod="openstack/heat-engine-88757d59b-r55jf" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.969964 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d4efcc1-d98d-466c-a7ee-6a6aa3766681-combined-ca-bundle\") pod \"heat-engine-88757d59b-r55jf\" (UID: \"3d4efcc1-d98d-466c-a7ee-6a6aa3766681\") " pod="openstack/heat-engine-88757d59b-r55jf" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.970082 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d4efcc1-d98d-466c-a7ee-6a6aa3766681-config-data-custom\") pod \"heat-engine-88757d59b-r55jf\" (UID: \"3d4efcc1-d98d-466c-a7ee-6a6aa3766681\") " pod="openstack/heat-engine-88757d59b-r55jf" Jan 31 09:30:00 crc kubenswrapper[4830]: I0131 09:30:00.970107 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95l6g\" (UniqueName: \"kubernetes.io/projected/3d4efcc1-d98d-466c-a7ee-6a6aa3766681-kube-api-access-95l6g\") pod \"heat-engine-88757d59b-r55jf\" (UID: \"3d4efcc1-d98d-466c-a7ee-6a6aa3766681\") " pod="openstack/heat-engine-88757d59b-r55jf" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.000624 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5677f68f94-9mmb8"] Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.041816 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-546fb56cb7-54z2g"] Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.044208 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-546fb56cb7-54z2g" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.074855 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d4efcc1-d98d-466c-a7ee-6a6aa3766681-config-data\") pod \"heat-engine-88757d59b-r55jf\" (UID: \"3d4efcc1-d98d-466c-a7ee-6a6aa3766681\") " pod="openstack/heat-engine-88757d59b-r55jf" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.074979 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99dbef57-35a0-4840-a293-fefe87379a4b-config-data\") pod \"heat-api-5677f68f94-9mmb8\" (UID: \"99dbef57-35a0-4840-a293-fefe87379a4b\") " pod="openstack/heat-api-5677f68f94-9mmb8" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.075020 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d4efcc1-d98d-466c-a7ee-6a6aa3766681-combined-ca-bundle\") pod \"heat-engine-88757d59b-r55jf\" (UID: \"3d4efcc1-d98d-466c-a7ee-6a6aa3766681\") " pod="openstack/heat-engine-88757d59b-r55jf" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.075045 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99dbef57-35a0-4840-a293-fefe87379a4b-combined-ca-bundle\") pod \"heat-api-5677f68f94-9mmb8\" (UID: \"99dbef57-35a0-4840-a293-fefe87379a4b\") " pod="openstack/heat-api-5677f68f94-9mmb8" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.075073 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/99dbef57-35a0-4840-a293-fefe87379a4b-internal-tls-certs\") pod \"heat-api-5677f68f94-9mmb8\" (UID: \"99dbef57-35a0-4840-a293-fefe87379a4b\") " pod="openstack/heat-api-5677f68f94-9mmb8" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.075123 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99dbef57-35a0-4840-a293-fefe87379a4b-config-data-custom\") pod \"heat-api-5677f68f94-9mmb8\" (UID: \"99dbef57-35a0-4840-a293-fefe87379a4b\") " pod="openstack/heat-api-5677f68f94-9mmb8" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.075154 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drtsr\" (UniqueName: \"kubernetes.io/projected/99dbef57-35a0-4840-a293-fefe87379a4b-kube-api-access-drtsr\") pod \"heat-api-5677f68f94-9mmb8\" (UID: \"99dbef57-35a0-4840-a293-fefe87379a4b\") " pod="openstack/heat-api-5677f68f94-9mmb8" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.075215 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/99dbef57-35a0-4840-a293-fefe87379a4b-public-tls-certs\") pod \"heat-api-5677f68f94-9mmb8\" (UID: \"99dbef57-35a0-4840-a293-fefe87379a4b\") " pod="openstack/heat-api-5677f68f94-9mmb8" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.075285 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d4efcc1-d98d-466c-a7ee-6a6aa3766681-config-data-custom\") pod 
\"heat-engine-88757d59b-r55jf\" (UID: \"3d4efcc1-d98d-466c-a7ee-6a6aa3766681\") " pod="openstack/heat-engine-88757d59b-r55jf" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.075317 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95l6g\" (UniqueName: \"kubernetes.io/projected/3d4efcc1-d98d-466c-a7ee-6a6aa3766681-kube-api-access-95l6g\") pod \"heat-engine-88757d59b-r55jf\" (UID: \"3d4efcc1-d98d-466c-a7ee-6a6aa3766681\") " pod="openstack/heat-engine-88757d59b-r55jf" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.083771 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d4efcc1-d98d-466c-a7ee-6a6aa3766681-config-data-custom\") pod \"heat-engine-88757d59b-r55jf\" (UID: \"3d4efcc1-d98d-466c-a7ee-6a6aa3766681\") " pod="openstack/heat-engine-88757d59b-r55jf" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.089792 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-546fb56cb7-54z2g"] Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.094979 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d4efcc1-d98d-466c-a7ee-6a6aa3766681-combined-ca-bundle\") pod \"heat-engine-88757d59b-r55jf\" (UID: \"3d4efcc1-d98d-466c-a7ee-6a6aa3766681\") " pod="openstack/heat-engine-88757d59b-r55jf" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.096208 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d4efcc1-d98d-466c-a7ee-6a6aa3766681-config-data\") pod \"heat-engine-88757d59b-r55jf\" (UID: \"3d4efcc1-d98d-466c-a7ee-6a6aa3766681\") " pod="openstack/heat-engine-88757d59b-r55jf" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.119858 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95l6g\" (UniqueName: \"kubernetes.io/projected/3d4efcc1-d98d-466c-a7ee-6a6aa3766681-kube-api-access-95l6g\") pod \"heat-engine-88757d59b-r55jf\" (UID: \"3d4efcc1-d98d-466c-a7ee-6a6aa3766681\") " pod="openstack/heat-engine-88757d59b-r55jf" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.164521 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-88757d59b-r55jf" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.183801 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjvwr\" (UniqueName: \"kubernetes.io/projected/bcd98bf8-a064-4c62-9847-37dd7939889b-kube-api-access-tjvwr\") pod \"heat-cfnapi-546fb56cb7-54z2g\" (UID: \"bcd98bf8-a064-4c62-9847-37dd7939889b\") " pod="openstack/heat-cfnapi-546fb56cb7-54z2g" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.184047 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcd98bf8-a064-4c62-9847-37dd7939889b-combined-ca-bundle\") pod \"heat-cfnapi-546fb56cb7-54z2g\" (UID: \"bcd98bf8-a064-4c62-9847-37dd7939889b\") " pod="openstack/heat-cfnapi-546fb56cb7-54z2g" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.184074 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcd98bf8-a064-4c62-9847-37dd7939889b-config-data\") pod \"heat-cfnapi-546fb56cb7-54z2g\" (UID: \"bcd98bf8-a064-4c62-9847-37dd7939889b\") " pod="openstack/heat-cfnapi-546fb56cb7-54z2g" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.184130 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99dbef57-35a0-4840-a293-fefe87379a4b-config-data\") pod \"heat-api-5677f68f94-9mmb8\" (UID: \"99dbef57-35a0-4840-a293-fefe87379a4b\") " pod="openstack/heat-api-5677f68f94-9mmb8" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.184159 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99dbef57-35a0-4840-a293-fefe87379a4b-combined-ca-bundle\") pod \"heat-api-5677f68f94-9mmb8\" (UID: \"99dbef57-35a0-4840-a293-fefe87379a4b\") " pod="openstack/heat-api-5677f68f94-9mmb8" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.184186 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/99dbef57-35a0-4840-a293-fefe87379a4b-internal-tls-certs\") pod \"heat-api-5677f68f94-9mmb8\" (UID: \"99dbef57-35a0-4840-a293-fefe87379a4b\") " pod="openstack/heat-api-5677f68f94-9mmb8" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.184218 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcd98bf8-a064-4c62-9847-37dd7939889b-public-tls-certs\") pod \"heat-cfnapi-546fb56cb7-54z2g\" (UID: \"bcd98bf8-a064-4c62-9847-37dd7939889b\") " pod="openstack/heat-cfnapi-546fb56cb7-54z2g" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.184242 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcd98bf8-a064-4c62-9847-37dd7939889b-internal-tls-certs\") pod \"heat-cfnapi-546fb56cb7-54z2g\" (UID: \"bcd98bf8-a064-4c62-9847-37dd7939889b\") " pod="openstack/heat-cfnapi-546fb56cb7-54z2g" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.184273 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99dbef57-35a0-4840-a293-fefe87379a4b-config-data-custom\") pod 
\"heat-api-5677f68f94-9mmb8\" (UID: \"99dbef57-35a0-4840-a293-fefe87379a4b\") " pod="openstack/heat-api-5677f68f94-9mmb8" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.184312 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drtsr\" (UniqueName: \"kubernetes.io/projected/99dbef57-35a0-4840-a293-fefe87379a4b-kube-api-access-drtsr\") pod \"heat-api-5677f68f94-9mmb8\" (UID: \"99dbef57-35a0-4840-a293-fefe87379a4b\") " pod="openstack/heat-api-5677f68f94-9mmb8" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.184342 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bcd98bf8-a064-4c62-9847-37dd7939889b-config-data-custom\") pod \"heat-cfnapi-546fb56cb7-54z2g\" (UID: \"bcd98bf8-a064-4c62-9847-37dd7939889b\") " pod="openstack/heat-cfnapi-546fb56cb7-54z2g" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.184375 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/99dbef57-35a0-4840-a293-fefe87379a4b-public-tls-certs\") pod \"heat-api-5677f68f94-9mmb8\" (UID: \"99dbef57-35a0-4840-a293-fefe87379a4b\") " pod="openstack/heat-api-5677f68f94-9mmb8" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.188856 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/99dbef57-35a0-4840-a293-fefe87379a4b-public-tls-certs\") pod \"heat-api-5677f68f94-9mmb8\" (UID: \"99dbef57-35a0-4840-a293-fefe87379a4b\") " pod="openstack/heat-api-5677f68f94-9mmb8" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.202436 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99dbef57-35a0-4840-a293-fefe87379a4b-combined-ca-bundle\") pod \"heat-api-5677f68f94-9mmb8\" (UID: \"99dbef57-35a0-4840-a293-fefe87379a4b\") " pod="openstack/heat-api-5677f68f94-9mmb8" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.204142 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99dbef57-35a0-4840-a293-fefe87379a4b-config-data\") pod \"heat-api-5677f68f94-9mmb8\" (UID: \"99dbef57-35a0-4840-a293-fefe87379a4b\") " pod="openstack/heat-api-5677f68f94-9mmb8" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.213651 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99dbef57-35a0-4840-a293-fefe87379a4b-config-data-custom\") pod \"heat-api-5677f68f94-9mmb8\" (UID: \"99dbef57-35a0-4840-a293-fefe87379a4b\") " pod="openstack/heat-api-5677f68f94-9mmb8" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.221185 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drtsr\" (UniqueName: \"kubernetes.io/projected/99dbef57-35a0-4840-a293-fefe87379a4b-kube-api-access-drtsr\") pod \"heat-api-5677f68f94-9mmb8\" (UID: \"99dbef57-35a0-4840-a293-fefe87379a4b\") " pod="openstack/heat-api-5677f68f94-9mmb8" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.232618 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/99dbef57-35a0-4840-a293-fefe87379a4b-internal-tls-certs\") pod \"heat-api-5677f68f94-9mmb8\" (UID: \"99dbef57-35a0-4840-a293-fefe87379a4b\") " 
pod="openstack/heat-api-5677f68f94-9mmb8" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.246943 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497530-xhdxr"] Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.249801 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5677f68f94-9mmb8" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.287309 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcd98bf8-a064-4c62-9847-37dd7939889b-combined-ca-bundle\") pod \"heat-cfnapi-546fb56cb7-54z2g\" (UID: \"bcd98bf8-a064-4c62-9847-37dd7939889b\") " pod="openstack/heat-cfnapi-546fb56cb7-54z2g" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.287379 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcd98bf8-a064-4c62-9847-37dd7939889b-config-data\") pod \"heat-cfnapi-546fb56cb7-54z2g\" (UID: \"bcd98bf8-a064-4c62-9847-37dd7939889b\") " pod="openstack/heat-cfnapi-546fb56cb7-54z2g" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.287448 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcd98bf8-a064-4c62-9847-37dd7939889b-public-tls-certs\") pod \"heat-cfnapi-546fb56cb7-54z2g\" (UID: \"bcd98bf8-a064-4c62-9847-37dd7939889b\") " pod="openstack/heat-cfnapi-546fb56cb7-54z2g" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.287469 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcd98bf8-a064-4c62-9847-37dd7939889b-internal-tls-certs\") pod \"heat-cfnapi-546fb56cb7-54z2g\" (UID: \"bcd98bf8-a064-4c62-9847-37dd7939889b\") " pod="openstack/heat-cfnapi-546fb56cb7-54z2g" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.287514 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bcd98bf8-a064-4c62-9847-37dd7939889b-config-data-custom\") pod \"heat-cfnapi-546fb56cb7-54z2g\" (UID: \"bcd98bf8-a064-4c62-9847-37dd7939889b\") " pod="openstack/heat-cfnapi-546fb56cb7-54z2g" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.287555 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjvwr\" (UniqueName: \"kubernetes.io/projected/bcd98bf8-a064-4c62-9847-37dd7939889b-kube-api-access-tjvwr\") pod \"heat-cfnapi-546fb56cb7-54z2g\" (UID: \"bcd98bf8-a064-4c62-9847-37dd7939889b\") " pod="openstack/heat-cfnapi-546fb56cb7-54z2g" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.292792 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcd98bf8-a064-4c62-9847-37dd7939889b-combined-ca-bundle\") pod \"heat-cfnapi-546fb56cb7-54z2g\" (UID: \"bcd98bf8-a064-4c62-9847-37dd7939889b\") " pod="openstack/heat-cfnapi-546fb56cb7-54z2g" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.294120 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcd98bf8-a064-4c62-9847-37dd7939889b-public-tls-certs\") pod \"heat-cfnapi-546fb56cb7-54z2g\" (UID: \"bcd98bf8-a064-4c62-9847-37dd7939889b\") " pod="openstack/heat-cfnapi-546fb56cb7-54z2g" Jan 31 
09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.295973 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcd98bf8-a064-4c62-9847-37dd7939889b-internal-tls-certs\") pod \"heat-cfnapi-546fb56cb7-54z2g\" (UID: \"bcd98bf8-a064-4c62-9847-37dd7939889b\") " pod="openstack/heat-cfnapi-546fb56cb7-54z2g" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.296485 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcd98bf8-a064-4c62-9847-37dd7939889b-config-data\") pod \"heat-cfnapi-546fb56cb7-54z2g\" (UID: \"bcd98bf8-a064-4c62-9847-37dd7939889b\") " pod="openstack/heat-cfnapi-546fb56cb7-54z2g" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.297258 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bcd98bf8-a064-4c62-9847-37dd7939889b-config-data-custom\") pod \"heat-cfnapi-546fb56cb7-54z2g\" (UID: \"bcd98bf8-a064-4c62-9847-37dd7939889b\") " pod="openstack/heat-cfnapi-546fb56cb7-54z2g" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.325469 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjvwr\" (UniqueName: \"kubernetes.io/projected/bcd98bf8-a064-4c62-9847-37dd7939889b-kube-api-access-tjvwr\") pod \"heat-cfnapi-546fb56cb7-54z2g\" (UID: \"bcd98bf8-a064-4c62-9847-37dd7939889b\") " pod="openstack/heat-cfnapi-546fb56cb7-54z2g" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.377363 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-546fb56cb7-54z2g" Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.528072 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-xhdxr" event={"ID":"e6a570c9-bd20-4f17-b62f-15eae189fedc","Type":"ContainerStarted","Data":"6a6c963118e8310e09a9302ae9644d5fd36b72404e43cff12c29cbcd4f51c2aa"} Jan 31 09:30:01 crc kubenswrapper[4830]: I0131 09:30:01.918434 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-88757d59b-r55jf"] Jan 31 09:30:02 crc kubenswrapper[4830]: W0131 09:30:02.085365 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99dbef57_35a0_4840_a293_fefe87379a4b.slice/crio-d5d88cb5768f4babb5ea441da183acf6a1d540c57cb08403946cbfe5167b1e38 WatchSource:0}: Error finding container d5d88cb5768f4babb5ea441da183acf6a1d540c57cb08403946cbfe5167b1e38: Status 404 returned error can't find the container with id d5d88cb5768f4babb5ea441da183acf6a1d540c57cb08403946cbfe5167b1e38 Jan 31 09:30:02 crc kubenswrapper[4830]: I0131 09:30:02.088124 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5677f68f94-9mmb8"] Jan 31 09:30:02 crc kubenswrapper[4830]: I0131 09:30:02.342136 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-546fb56cb7-54z2g"] Jan 31 09:30:02 crc kubenswrapper[4830]: I0131 09:30:02.594664 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-546fb56cb7-54z2g" event={"ID":"bcd98bf8-a064-4c62-9847-37dd7939889b","Type":"ContainerStarted","Data":"a8a49e7e835169203765df7171904d9754a677ca4520c8a095deb91d207a97ed"} Jan 31 09:30:02 crc kubenswrapper[4830]: I0131 09:30:02.644102 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-xhdxr" event={"ID":"e6a570c9-bd20-4f17-b62f-15eae189fedc","Type":"ContainerStarted","Data":"c6c8b24d2fdfb99982a546cf48b0c64c196f856ca919c18652c658185e58816a"} Jan 31 09:30:02 crc kubenswrapper[4830]: I0131 09:30:02.662234 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-88757d59b-r55jf" event={"ID":"3d4efcc1-d98d-466c-a7ee-6a6aa3766681","Type":"ContainerStarted","Data":"c0f9c601181e05bbf263c42884a16e1b76c440ad8a79f2efa8cc9dd64c149635"} Jan 31 09:30:02 crc kubenswrapper[4830]: I0131 09:30:02.686296 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5677f68f94-9mmb8" event={"ID":"99dbef57-35a0-4840-a293-fefe87379a4b","Type":"ContainerStarted","Data":"d5d88cb5768f4babb5ea441da183acf6a1d540c57cb08403946cbfe5167b1e38"} Jan 31 09:30:02 crc kubenswrapper[4830]: I0131 09:30:02.712096 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-xhdxr" podStartSLOduration=2.712069253 podStartE2EDuration="2.712069253s" podCreationTimestamp="2026-01-31 09:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:30:02.699960106 +0000 UTC m=+1747.193322538" watchObservedRunningTime="2026-01-31 09:30:02.712069253 +0000 UTC m=+1747.205431695" Jan 31 09:30:03 crc kubenswrapper[4830]: I0131 09:30:03.253482 4830 scope.go:117] "RemoveContainer" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0" Jan 31 09:30:03 crc kubenswrapper[4830]: E0131 09:30:03.254979 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:30:03 crc kubenswrapper[4830]: I0131 09:30:03.731927 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-88757d59b-r55jf" event={"ID":"3d4efcc1-d98d-466c-a7ee-6a6aa3766681","Type":"ContainerStarted","Data":"63489c1bad21002b061879d92657c05f17edfca2d9e88ea1df202bc54364918f"} Jan 31 09:30:03 crc kubenswrapper[4830]: I0131 09:30:03.733813 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-88757d59b-r55jf" Jan 31 09:30:03 crc kubenswrapper[4830]: I0131 09:30:03.744621 4830 generic.go:334] "Generic (PLEG): container finished" podID="e6a570c9-bd20-4f17-b62f-15eae189fedc" containerID="c6c8b24d2fdfb99982a546cf48b0c64c196f856ca919c18652c658185e58816a" exitCode=0 Jan 31 09:30:03 crc kubenswrapper[4830]: I0131 09:30:03.744676 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-xhdxr" event={"ID":"e6a570c9-bd20-4f17-b62f-15eae189fedc","Type":"ContainerDied","Data":"c6c8b24d2fdfb99982a546cf48b0c64c196f856ca919c18652c658185e58816a"} Jan 31 09:30:03 crc kubenswrapper[4830]: I0131 09:30:03.821468 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-88757d59b-r55jf" podStartSLOduration=3.821439682 podStartE2EDuration="3.821439682s" podCreationTimestamp="2026-01-31 09:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:30:03.788110978 +0000 UTC m=+1748.281473410" watchObservedRunningTime="2026-01-31 09:30:03.821439682 +0000 UTC m=+1748.314802124" Jan 31 09:30:05 crc kubenswrapper[4830]: I0131 09:30:05.445816 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf"] Jan 31 09:30:05 crc kubenswrapper[4830]: I0131 09:30:05.448584 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf" Jan 31 09:30:05 crc kubenswrapper[4830]: I0131 09:30:05.457687 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 09:30:05 crc kubenswrapper[4830]: I0131 09:30:05.458055 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 09:30:05 crc kubenswrapper[4830]: I0131 09:30:05.458253 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vd24j" Jan 31 09:30:05 crc kubenswrapper[4830]: I0131 09:30:05.458420 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 09:30:05 crc kubenswrapper[4830]: I0131 09:30:05.501439 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf"] Jan 31 09:30:05 crc kubenswrapper[4830]: I0131 09:30:05.536084 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b8c10133-0080-4638-a514-b1d8c87873e4-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf\" (UID: \"b8c10133-0080-4638-a514-b1d8c87873e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf" Jan 31 09:30:05 crc kubenswrapper[4830]: I0131 09:30:05.536177 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c10133-0080-4638-a514-b1d8c87873e4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf\" (UID: \"b8c10133-0080-4638-a514-b1d8c87873e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf" Jan 31 09:30:05 crc kubenswrapper[4830]: I0131 09:30:05.536214 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-726ch\" (UniqueName: \"kubernetes.io/projected/b8c10133-0080-4638-a514-b1d8c87873e4-kube-api-access-726ch\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf\" (UID: \"b8c10133-0080-4638-a514-b1d8c87873e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf" Jan 31 09:30:05 crc kubenswrapper[4830]: I0131 09:30:05.536287 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8c10133-0080-4638-a514-b1d8c87873e4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf\" (UID: \"b8c10133-0080-4638-a514-b1d8c87873e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf" Jan 31 09:30:05 crc kubenswrapper[4830]: I0131 09:30:05.640441 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b8c10133-0080-4638-a514-b1d8c87873e4-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf\" (UID: \"b8c10133-0080-4638-a514-b1d8c87873e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf" Jan 31 09:30:05 crc kubenswrapper[4830]: I0131 09:30:05.640545 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c10133-0080-4638-a514-b1d8c87873e4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf\" (UID: \"b8c10133-0080-4638-a514-b1d8c87873e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf" Jan 31 09:30:05 crc kubenswrapper[4830]: I0131 09:30:05.640599 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-726ch\" (UniqueName: \"kubernetes.io/projected/b8c10133-0080-4638-a514-b1d8c87873e4-kube-api-access-726ch\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf\" (UID: \"b8c10133-0080-4638-a514-b1d8c87873e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf" Jan 31 09:30:05 crc kubenswrapper[4830]: I0131 09:30:05.640717 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8c10133-0080-4638-a514-b1d8c87873e4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf\" (UID: \"b8c10133-0080-4638-a514-b1d8c87873e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf" Jan 31 09:30:05 crc kubenswrapper[4830]: I0131 09:30:05.653274 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b8c10133-0080-4638-a514-b1d8c87873e4-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf\" (UID: \"b8c10133-0080-4638-a514-b1d8c87873e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf" Jan 31 09:30:05 crc kubenswrapper[4830]: I0131 09:30:05.653685 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c10133-0080-4638-a514-b1d8c87873e4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf\" (UID: \"b8c10133-0080-4638-a514-b1d8c87873e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf" Jan 31 09:30:05 crc kubenswrapper[4830]: I0131 09:30:05.654455 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8c10133-0080-4638-a514-b1d8c87873e4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf\" (UID: \"b8c10133-0080-4638-a514-b1d8c87873e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf" Jan 31 09:30:05 crc kubenswrapper[4830]: I0131 09:30:05.674147 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-726ch\" (UniqueName: \"kubernetes.io/projected/b8c10133-0080-4638-a514-b1d8c87873e4-kube-api-access-726ch\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf\" (UID: \"b8c10133-0080-4638-a514-b1d8c87873e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf" Jan 31 09:30:05 crc kubenswrapper[4830]: I0131 09:30:05.830267 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf" Jan 31 09:30:06 crc kubenswrapper[4830]: I0131 09:30:06.391857 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-xhdxr" Jan 31 09:30:06 crc kubenswrapper[4830]: I0131 09:30:06.472954 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4zhf\" (UniqueName: \"kubernetes.io/projected/e6a570c9-bd20-4f17-b62f-15eae189fedc-kube-api-access-g4zhf\") pod \"e6a570c9-bd20-4f17-b62f-15eae189fedc\" (UID: \"e6a570c9-bd20-4f17-b62f-15eae189fedc\") " Jan 31 09:30:06 crc kubenswrapper[4830]: I0131 09:30:06.473393 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e6a570c9-bd20-4f17-b62f-15eae189fedc-secret-volume\") pod \"e6a570c9-bd20-4f17-b62f-15eae189fedc\" (UID: \"e6a570c9-bd20-4f17-b62f-15eae189fedc\") " Jan 31 09:30:06 crc kubenswrapper[4830]: I0131 09:30:06.475316 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6a570c9-bd20-4f17-b62f-15eae189fedc-config-volume\") pod \"e6a570c9-bd20-4f17-b62f-15eae189fedc\" (UID: \"e6a570c9-bd20-4f17-b62f-15eae189fedc\") " Jan 31 09:30:06 crc kubenswrapper[4830]: I0131 09:30:06.476860 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a570c9-bd20-4f17-b62f-15eae189fedc-config-volume" (OuterVolumeSpecName: "config-volume") pod "e6a570c9-bd20-4f17-b62f-15eae189fedc" (UID: "e6a570c9-bd20-4f17-b62f-15eae189fedc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:30:06 crc kubenswrapper[4830]: I0131 09:30:06.481303 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6a570c9-bd20-4f17-b62f-15eae189fedc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e6a570c9-bd20-4f17-b62f-15eae189fedc" (UID: "e6a570c9-bd20-4f17-b62f-15eae189fedc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:30:06 crc kubenswrapper[4830]: I0131 09:30:06.481782 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6a570c9-bd20-4f17-b62f-15eae189fedc-kube-api-access-g4zhf" (OuterVolumeSpecName: "kube-api-access-g4zhf") pod "e6a570c9-bd20-4f17-b62f-15eae189fedc" (UID: "e6a570c9-bd20-4f17-b62f-15eae189fedc"). InnerVolumeSpecName "kube-api-access-g4zhf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:30:06 crc kubenswrapper[4830]: I0131 09:30:06.581475 4830 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6a570c9-bd20-4f17-b62f-15eae189fedc-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:06 crc kubenswrapper[4830]: I0131 09:30:06.581525 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4zhf\" (UniqueName: \"kubernetes.io/projected/e6a570c9-bd20-4f17-b62f-15eae189fedc-kube-api-access-g4zhf\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:06 crc kubenswrapper[4830]: I0131 09:30:06.581540 4830 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e6a570c9-bd20-4f17-b62f-15eae189fedc-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:06 crc kubenswrapper[4830]: I0131 09:30:06.818853 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-xhdxr" event={"ID":"e6a570c9-bd20-4f17-b62f-15eae189fedc","Type":"ContainerDied","Data":"6a6c963118e8310e09a9302ae9644d5fd36b72404e43cff12c29cbcd4f51c2aa"} Jan 31 09:30:06 crc kubenswrapper[4830]: I0131 09:30:06.819328 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a6c963118e8310e09a9302ae9644d5fd36b72404e43cff12c29cbcd4f51c2aa" Jan 31 09:30:06 crc kubenswrapper[4830]: I0131 09:30:06.819422 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-xhdxr" Jan 31 09:30:07 crc kubenswrapper[4830]: I0131 09:30:07.832557 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5677f68f94-9mmb8" event={"ID":"99dbef57-35a0-4840-a293-fefe87379a4b","Type":"ContainerStarted","Data":"427dce6e95a469c7eeb06ac4228365da83f2fe8397a703db474e30fcd2f635c9"} Jan 31 09:30:07 crc kubenswrapper[4830]: I0131 09:30:07.833211 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5677f68f94-9mmb8" Jan 31 09:30:07 crc kubenswrapper[4830]: I0131 09:30:07.836943 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-546fb56cb7-54z2g" event={"ID":"bcd98bf8-a064-4c62-9847-37dd7939889b","Type":"ContainerStarted","Data":"e0da7cd300da7ea2acdb03805bc0cdb7b674426e7a5f913d3fd5641a00d2c379"} Jan 31 09:30:07 crc kubenswrapper[4830]: I0131 09:30:07.837283 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-546fb56cb7-54z2g" Jan 31 09:30:07 crc kubenswrapper[4830]: I0131 09:30:07.880532 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5677f68f94-9mmb8" podStartSLOduration=3.472284719 podStartE2EDuration="7.880505836s" podCreationTimestamp="2026-01-31 09:30:00 +0000 UTC" firstStartedPulling="2026-01-31 09:30:02.090559126 +0000 UTC m=+1746.583921568" lastFinishedPulling="2026-01-31 09:30:06.498780243 +0000 UTC m=+1750.992142685" observedRunningTime="2026-01-31 09:30:07.849944131 +0000 UTC m=+1752.343306603" watchObservedRunningTime="2026-01-31 09:30:07.880505836 +0000 UTC m=+1752.373868278" Jan 31 09:30:07 crc kubenswrapper[4830]: I0131 09:30:07.886480 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-546fb56cb7-54z2g" podStartSLOduration=3.7476348489999998 podStartE2EDuration="7.886463676s" podCreationTimestamp="2026-01-31 09:30:00 +0000 
UTC" firstStartedPulling="2026-01-31 09:30:02.358946857 +0000 UTC m=+1746.852309289" lastFinishedPulling="2026-01-31 09:30:06.497775674 +0000 UTC m=+1750.991138116" observedRunningTime="2026-01-31 09:30:07.879474036 +0000 UTC m=+1752.372836478" watchObservedRunningTime="2026-01-31 09:30:07.886463676 +0000 UTC m=+1752.379826128" Jan 31 09:30:08 crc kubenswrapper[4830]: I0131 09:30:08.457046 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf"] Jan 31 09:30:08 crc kubenswrapper[4830]: W0131 09:30:08.460762 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8c10133_0080_4638_a514_b1d8c87873e4.slice/crio-7c3ff02e0269fe869b34464ed0539b3ac9638de0389dabc46291bb675d23841a WatchSource:0}: Error finding container 7c3ff02e0269fe869b34464ed0539b3ac9638de0389dabc46291bb675d23841a: Status 404 returned error can't find the container with id 7c3ff02e0269fe869b34464ed0539b3ac9638de0389dabc46291bb675d23841a Jan 31 09:30:08 crc kubenswrapper[4830]: I0131 09:30:08.889806 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf" event={"ID":"b8c10133-0080-4638-a514-b1d8c87873e4","Type":"ContainerStarted","Data":"7c3ff02e0269fe869b34464ed0539b3ac9638de0389dabc46291bb675d23841a"} Jan 31 09:30:09 crc kubenswrapper[4830]: I0131 09:30:09.036627 4830 scope.go:117] "RemoveContainer" containerID="687f0bb00bc26d9a9a27626a9d07e5ffacb0e5e031c3732b898d3b85f3fbe4a0" Jan 31 09:30:09 crc kubenswrapper[4830]: I0131 09:30:09.119452 4830 scope.go:117] "RemoveContainer" containerID="b82d566e252a5e263e93f29e01a43117ffad3aa3827523d8c1a930eedb4b72fd" Jan 31 09:30:09 crc kubenswrapper[4830]: I0131 09:30:09.181960 4830 scope.go:117] "RemoveContainer" containerID="acc702009ec1b1c264fd284a800bc7eafae655c03f62683636397a46f06f969c" Jan 31 09:30:09 crc kubenswrapper[4830]: I0131 09:30:09.250393 4830 scope.go:117] "RemoveContainer" containerID="fdb1043ccf73c9d37bcc827f69f1b9499e832f9028fa7da41f5a19e3692877ea" Jan 31 09:30:09 crc kubenswrapper[4830]: I0131 09:30:09.299203 4830 scope.go:117] "RemoveContainer" containerID="bdc0cbdf11a607ea9e1342ee17c82a395fb7422100900716c1d145e147848ae5" Jan 31 09:30:12 crc kubenswrapper[4830]: I0131 09:30:12.957312 4830 generic.go:334] "Generic (PLEG): container finished" podID="a5a14eb0-7ed3-44fd-a1e2-f8d582a70062" containerID="6f25216dcf8fe9092ff9750de186f48079e5e00afddb095ae784527c8c06f24a" exitCode=0 Jan 31 09:30:12 crc kubenswrapper[4830]: I0131 09:30:12.957369 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062","Type":"ContainerDied","Data":"6f25216dcf8fe9092ff9750de186f48079e5e00afddb095ae784527c8c06f24a"} Jan 31 09:30:13 crc kubenswrapper[4830]: I0131 09:30:13.979180 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 31 09:30:17 crc kubenswrapper[4830]: I0131 09:30:17.251105 4830 scope.go:117] "RemoveContainer" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0" Jan 31 09:30:17 crc kubenswrapper[4830]: E0131 09:30:17.252125 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 31 09:30:19 crc kubenswrapper[4830]: I0131 09:30:19.315449 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-546fb56cb7-54z2g" Jan 31 09:30:19 crc kubenswrapper[4830]: I0131 09:30:19.415898 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5677f68f94-9mmb8" Jan 31 09:30:19 crc kubenswrapper[4830]: I0131 09:30:19.415950 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-9f575bfb8-72ll7"] Jan 31 09:30:19 crc kubenswrapper[4830]: I0131 09:30:19.416974 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-9f575bfb8-72ll7" podUID="71e048bc-59e9-496e-8883-5374a863a094" containerName="heat-cfnapi" containerID="cri-o://c1a3d28b9afea684a4488518cad58a562bdee227ba35c305368e849581cd3782" gracePeriod=60 Jan 31 09:30:19 crc kubenswrapper[4830]: I0131 09:30:19.505421 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-bcd57748c-bwxdf"] Jan 31 09:30:19 crc kubenswrapper[4830]: I0131 09:30:19.505717 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-bcd57748c-bwxdf" podUID="de2e8918-df90-4e54-8365-e7148dbdbcd1" containerName="heat-api" containerID="cri-o://aca1b9c776a715fad2800fffa5c89f0af8ff618c62ffa18884f1656e6031454e" gracePeriod=60 Jan 31 09:30:21 crc kubenswrapper[4830]: I0131 09:30:21.742683 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-88757d59b-r55jf" Jan 31 09:30:21 crc kubenswrapper[4830]: I0131 09:30:21.850611 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-587fd67997-pvqls"] Jan 31 09:30:21 crc kubenswrapper[4830]: I0131 09:30:21.850939 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-587fd67997-pvqls" podUID="e5627b4b-982b-41c1-8ff9-8ca07513680d" containerName="heat-engine" containerID="cri-o://de595e3300571102b1c4fb25a65643a6193ff774aeaf83f24ec9a38461e8a8e6" gracePeriod=60 Jan 31 09:30:24 crc kubenswrapper[4830]: I0131 09:30:24.591430 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-bcd57748c-bwxdf" podUID="de2e8918-df90-4e54-8365-e7148dbdbcd1" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.222:8004/healthcheck\": read tcp 10.217.0.2:42842->10.217.0.222:8004: read: connection reset by peer" Jan 31 09:30:24 crc kubenswrapper[4830]: I0131 09:30:24.634382 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-9f575bfb8-72ll7" podUID="71e048bc-59e9-496e-8883-5374a863a094" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.223:8000/healthcheck\": read tcp 10.217.0.2:46322->10.217.0.223:8000: read: connection reset by peer" Jan 31 09:30:25 crc kubenswrapper[4830]: I0131 09:30:25.193195 4830 generic.go:334] "Generic (PLEG): container finished" podID="71e048bc-59e9-496e-8883-5374a863a094" containerID="c1a3d28b9afea684a4488518cad58a562bdee227ba35c305368e849581cd3782" exitCode=0 Jan 31 09:30:25 crc kubenswrapper[4830]: I0131 09:30:25.193328 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-9f575bfb8-72ll7" event={"ID":"71e048bc-59e9-496e-8883-5374a863a094","Type":"ContainerDied","Data":"c1a3d28b9afea684a4488518cad58a562bdee227ba35c305368e849581cd3782"}
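[editor's note] This stretch reads as a routine rolling replacement: once the new heat pods report ready (09:30:19 and 09:30:21), the API server deletes the old ones (SyncLoop DELETE) and the kubelet kills their containers with a 60 s grace period. The readiness-probe failures at 09:30:24, connection reset by peer against the /healthcheck endpoints, are expected during termination since the servers stop listening before the pods are torn down, and both containers still exit with exitCode=0. The probe itself is an ordinary HTTPS GET, roughly equivalent to the following sketch for the heat-api endpoint, reachable only from the node or pod network:

  $ curl -ks https://10.217.0.222:8004/healthcheck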
event={"ID":"71e048bc-59e9-496e-8883-5374a863a094","Type":"ContainerDied","Data":"c1a3d28b9afea684a4488518cad58a562bdee227ba35c305368e849581cd3782"} Jan 31 09:30:25 crc kubenswrapper[4830]: I0131 09:30:25.208304 4830 generic.go:334] "Generic (PLEG): container finished" podID="de2e8918-df90-4e54-8365-e7148dbdbcd1" containerID="aca1b9c776a715fad2800fffa5c89f0af8ff618c62ffa18884f1656e6031454e" exitCode=0 Jan 31 09:30:25 crc kubenswrapper[4830]: I0131 09:30:25.208380 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-bcd57748c-bwxdf" event={"ID":"de2e8918-df90-4e54-8365-e7148dbdbcd1","Type":"ContainerDied","Data":"aca1b9c776a715fad2800fffa5c89f0af8ff618c62ffa18884f1656e6031454e"} Jan 31 09:30:25 crc kubenswrapper[4830]: E0131 09:30:25.953819 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Jan 31 09:30:25 crc kubenswrapper[4830]: E0131 09:30:25.954523 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 31 09:30:25 crc kubenswrapper[4830]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Jan 31 09:30:25 crc kubenswrapper[4830]: - hosts: all Jan 31 09:30:25 crc kubenswrapper[4830]: strategy: linear Jan 31 09:30:25 crc kubenswrapper[4830]: tasks: Jan 31 09:30:25 crc kubenswrapper[4830]: - name: Enable podified-repos Jan 31 09:30:25 crc kubenswrapper[4830]: become: true Jan 31 09:30:25 crc kubenswrapper[4830]: ansible.builtin.shell: | Jan 31 09:30:25 crc kubenswrapper[4830]: set -euxo pipefail Jan 31 09:30:25 crc kubenswrapper[4830]: pushd /var/tmp Jan 31 09:30:25 crc kubenswrapper[4830]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Jan 31 09:30:25 crc kubenswrapper[4830]: pushd repo-setup-main Jan 31 09:30:25 crc kubenswrapper[4830]: python3 -m venv ./venv Jan 31 09:30:25 crc kubenswrapper[4830]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Jan 31 09:30:25 crc kubenswrapper[4830]: ./venv/bin/repo-setup current-podified -b antelope Jan 31 09:30:25 crc kubenswrapper[4830]: popd Jan 31 09:30:25 crc kubenswrapper[4830]: rm -rf repo-setup-main Jan 31 09:30:25 crc kubenswrapper[4830]: Jan 31 09:30:25 crc kubenswrapper[4830]: Jan 31 09:30:25 crc kubenswrapper[4830]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Jan 31 09:30:25 crc kubenswrapper[4830]: edpm_override_hosts: openstack-edpm-ipam Jan 31 09:30:25 crc kubenswrapper[4830]: edpm_service_type: repo-setup Jan 31 09:30:25 crc kubenswrapper[4830]: Jan 31 09:30:25 crc kubenswrapper[4830]: Jan 31 09:30:25 crc kubenswrapper[4830]: 
,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-726ch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf_openstack(b8c10133-0080-4638-a514-b1d8c87873e4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Jan 31 09:30:25 crc kubenswrapper[4830]: > logger="UnhandledError" Jan 31 09:30:25 crc kubenswrapper[4830]: E0131 09:30:25.956146 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf" podUID="b8c10133-0080-4638-a514-b1d8c87873e4" Jan 31 09:30:26 crc kubenswrapper[4830]: E0131 09:30:26.232814 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf" podUID="b8c10133-0080-4638-a514-b1d8c87873e4" Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.549809 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-9f575bfb8-72ll7" Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.608038 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-bcd57748c-bwxdf" Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.615976 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-config-data-custom\") pod \"71e048bc-59e9-496e-8883-5374a863a094\" (UID: \"71e048bc-59e9-496e-8883-5374a863a094\") " Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.616073 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-internal-tls-certs\") pod \"71e048bc-59e9-496e-8883-5374a863a094\" (UID: \"71e048bc-59e9-496e-8883-5374a863a094\") " Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.616099 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtvcl\" (UniqueName: \"kubernetes.io/projected/71e048bc-59e9-496e-8883-5374a863a094-kube-api-access-jtvcl\") pod \"71e048bc-59e9-496e-8883-5374a863a094\" (UID: \"71e048bc-59e9-496e-8883-5374a863a094\") " Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.616247 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-public-tls-certs\") pod \"71e048bc-59e9-496e-8883-5374a863a094\" (UID: \"71e048bc-59e9-496e-8883-5374a863a094\") " Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.616381 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-config-data\") pod \"71e048bc-59e9-496e-8883-5374a863a094\" (UID: \"71e048bc-59e9-496e-8883-5374a863a094\") " Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.616510 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-combined-ca-bundle\") pod \"71e048bc-59e9-496e-8883-5374a863a094\" (UID: \"71e048bc-59e9-496e-8883-5374a863a094\") " Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.625358 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "71e048bc-59e9-496e-8883-5374a863a094" (UID: "71e048bc-59e9-496e-8883-5374a863a094"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.634964 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71e048bc-59e9-496e-8883-5374a863a094-kube-api-access-jtvcl" (OuterVolumeSpecName: "kube-api-access-jtvcl") pod "71e048bc-59e9-496e-8883-5374a863a094" (UID: "71e048bc-59e9-496e-8883-5374a863a094"). InnerVolumeSpecName "kube-api-access-jtvcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.725502 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-public-tls-certs\") pod \"de2e8918-df90-4e54-8365-e7148dbdbcd1\" (UID: \"de2e8918-df90-4e54-8365-e7148dbdbcd1\") " Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.726227 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-config-data-custom\") pod \"de2e8918-df90-4e54-8365-e7148dbdbcd1\" (UID: \"de2e8918-df90-4e54-8365-e7148dbdbcd1\") " Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.726402 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-internal-tls-certs\") pod \"de2e8918-df90-4e54-8365-e7148dbdbcd1\" (UID: \"de2e8918-df90-4e54-8365-e7148dbdbcd1\") " Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.726460 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6q7v\" (UniqueName: \"kubernetes.io/projected/de2e8918-df90-4e54-8365-e7148dbdbcd1-kube-api-access-r6q7v\") pod \"de2e8918-df90-4e54-8365-e7148dbdbcd1\" (UID: \"de2e8918-df90-4e54-8365-e7148dbdbcd1\") " Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.726511 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-combined-ca-bundle\") pod \"de2e8918-df90-4e54-8365-e7148dbdbcd1\" (UID: \"de2e8918-df90-4e54-8365-e7148dbdbcd1\") " Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.726665 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-config-data\") pod \"de2e8918-df90-4e54-8365-e7148dbdbcd1\" (UID: \"de2e8918-df90-4e54-8365-e7148dbdbcd1\") " Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.728086 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.728112 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtvcl\" (UniqueName: \"kubernetes.io/projected/71e048bc-59e9-496e-8883-5374a863a094-kube-api-access-jtvcl\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.760991 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "71e048bc-59e9-496e-8883-5374a863a094" (UID: "71e048bc-59e9-496e-8883-5374a863a094"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.763105 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de2e8918-df90-4e54-8365-e7148dbdbcd1-kube-api-access-r6q7v" (OuterVolumeSpecName: "kube-api-access-r6q7v") pod "de2e8918-df90-4e54-8365-e7148dbdbcd1" (UID: "de2e8918-df90-4e54-8365-e7148dbdbcd1"). InnerVolumeSpecName "kube-api-access-r6q7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.763251 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "de2e8918-df90-4e54-8365-e7148dbdbcd1" (UID: "de2e8918-df90-4e54-8365-e7148dbdbcd1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.768897 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-config-data" (OuterVolumeSpecName: "config-data") pod "71e048bc-59e9-496e-8883-5374a863a094" (UID: "71e048bc-59e9-496e-8883-5374a863a094"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.798629 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71e048bc-59e9-496e-8883-5374a863a094" (UID: "71e048bc-59e9-496e-8883-5374a863a094"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.817937 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "71e048bc-59e9-496e-8883-5374a863a094" (UID: "71e048bc-59e9-496e-8883-5374a863a094"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.836369 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.836677 4830 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.836770 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6q7v\" (UniqueName: \"kubernetes.io/projected/de2e8918-df90-4e54-8365-e7148dbdbcd1-kube-api-access-r6q7v\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.836835 4830 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.836889 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.836952 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e048bc-59e9-496e-8883-5374a863a094-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.858672 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de2e8918-df90-4e54-8365-e7148dbdbcd1" (UID: "de2e8918-df90-4e54-8365-e7148dbdbcd1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.896130 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "de2e8918-df90-4e54-8365-e7148dbdbcd1" (UID: "de2e8918-df90-4e54-8365-e7148dbdbcd1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.909816 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-config-data" (OuterVolumeSpecName: "config-data") pod "de2e8918-df90-4e54-8365-e7148dbdbcd1" (UID: "de2e8918-df90-4e54-8365-e7148dbdbcd1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.910363 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "de2e8918-df90-4e54-8365-e7148dbdbcd1" (UID: "de2e8918-df90-4e54-8365-e7148dbdbcd1"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.942343 4830 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.942393 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.942412 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:26 crc kubenswrapper[4830]: I0131 09:30:26.942429 4830 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de2e8918-df90-4e54-8365-e7148dbdbcd1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:27 crc kubenswrapper[4830]: I0131 09:30:27.247035 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a5a14eb0-7ed3-44fd-a1e2-f8d582a70062","Type":"ContainerStarted","Data":"ea5f3e86041816f77644cd959d6fe4e87105fe875038e01b41b4dd516f0a354d"} Jan 31 09:30:27 crc kubenswrapper[4830]: I0131 09:30:27.249026 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:30:27 crc kubenswrapper[4830]: I0131 09:30:27.257384 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-9f575bfb8-72ll7" Jan 31 09:30:27 crc kubenswrapper[4830]: I0131 09:30:27.257661 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-9f575bfb8-72ll7" event={"ID":"71e048bc-59e9-496e-8883-5374a863a094","Type":"ContainerDied","Data":"247ebe34066d018c324f9f68d5c96290ef51e1e05192402ba36cf1b74cf8e146"} Jan 31 09:30:27 crc kubenswrapper[4830]: I0131 09:30:27.258096 4830 scope.go:117] "RemoveContainer" containerID="c1a3d28b9afea684a4488518cad58a562bdee227ba35c305368e849581cd3782" Jan 31 09:30:27 crc kubenswrapper[4830]: I0131 09:30:27.260332 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-bcd57748c-bwxdf" event={"ID":"de2e8918-df90-4e54-8365-e7148dbdbcd1","Type":"ContainerDied","Data":"cca59e1b5e667529610d1c883344c1f8e23707b7c5a3a317953d3a7236b4997c"} Jan 31 09:30:27 crc kubenswrapper[4830]: I0131 09:30:27.261566 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-bcd57748c-bwxdf" Jan 31 09:30:27 crc kubenswrapper[4830]: I0131 09:30:27.294656 4830 scope.go:117] "RemoveContainer" containerID="aca1b9c776a715fad2800fffa5c89f0af8ff618c62ffa18884f1656e6031454e" Jan 31 09:30:27 crc kubenswrapper[4830]: I0131 09:30:27.332553 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=50.332525471 podStartE2EDuration="50.332525471s" podCreationTimestamp="2026-01-31 09:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:30:27.312694293 +0000 UTC m=+1771.806056755" watchObservedRunningTime="2026-01-31 09:30:27.332525471 +0000 UTC m=+1771.825887913" Jan 31 09:30:27 crc kubenswrapper[4830]: I0131 09:30:27.418400 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-9f575bfb8-72ll7"] Jan 31 09:30:27 crc kubenswrapper[4830]: I0131 09:30:27.444631 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-9f575bfb8-72ll7"] Jan 31 09:30:27 crc kubenswrapper[4830]: I0131 09:30:27.473684 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-bcd57748c-bwxdf"] Jan 31 09:30:27 crc kubenswrapper[4830]: I0131 09:30:27.489055 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-bcd57748c-bwxdf"] Jan 31 09:30:28 crc kubenswrapper[4830]: I0131 09:30:28.265631 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71e048bc-59e9-496e-8883-5374a863a094" path="/var/lib/kubelet/pods/71e048bc-59e9-496e-8883-5374a863a094/volumes" Jan 31 09:30:28 crc kubenswrapper[4830]: I0131 09:30:28.266819 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de2e8918-df90-4e54-8365-e7148dbdbcd1" path="/var/lib/kubelet/pods/de2e8918-df90-4e54-8365-e7148dbdbcd1/volumes" Jan 31 09:30:28 crc kubenswrapper[4830]: I0131 09:30:28.883851 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-5j7sd"] Jan 31 09:30:28 crc kubenswrapper[4830]: I0131 09:30:28.897742 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-5j7sd"] Jan 31 09:30:28 crc kubenswrapper[4830]: I0131 09:30:28.949320 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-lmhjl"] Jan 31 09:30:28 crc kubenswrapper[4830]: E0131 09:30:28.949977 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de2e8918-df90-4e54-8365-e7148dbdbcd1" containerName="heat-api" Jan 31 09:30:28 crc kubenswrapper[4830]: I0131 09:30:28.949996 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="de2e8918-df90-4e54-8365-e7148dbdbcd1" containerName="heat-api" Jan 31 09:30:28 crc kubenswrapper[4830]: E0131 09:30:28.950016 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71e048bc-59e9-496e-8883-5374a863a094" containerName="heat-cfnapi" Jan 31 09:30:28 crc kubenswrapper[4830]: I0131 09:30:28.950023 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="71e048bc-59e9-496e-8883-5374a863a094" containerName="heat-cfnapi" Jan 31 09:30:28 crc kubenswrapper[4830]: E0131 09:30:28.950055 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a570c9-bd20-4f17-b62f-15eae189fedc" containerName="collect-profiles" Jan 31 09:30:28 crc kubenswrapper[4830]: I0131 09:30:28.950061 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a570c9-bd20-4f17-b62f-15eae189fedc" 
containerName="collect-profiles" Jan 31 09:30:28 crc kubenswrapper[4830]: I0131 09:30:28.950382 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="de2e8918-df90-4e54-8365-e7148dbdbcd1" containerName="heat-api" Jan 31 09:30:28 crc kubenswrapper[4830]: I0131 09:30:28.950402 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="71e048bc-59e9-496e-8883-5374a863a094" containerName="heat-cfnapi" Jan 31 09:30:28 crc kubenswrapper[4830]: I0131 09:30:28.950418 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6a570c9-bd20-4f17-b62f-15eae189fedc" containerName="collect-profiles" Jan 31 09:30:28 crc kubenswrapper[4830]: I0131 09:30:28.951488 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-lmhjl" Jan 31 09:30:28 crc kubenswrapper[4830]: I0131 09:30:28.954156 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 31 09:30:28 crc kubenswrapper[4830]: I0131 09:30:28.962638 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-lmhjl"] Jan 31 09:30:29 crc kubenswrapper[4830]: I0131 09:30:29.007888 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba-scripts\") pod \"aodh-db-sync-lmhjl\" (UID: \"fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba\") " pod="openstack/aodh-db-sync-lmhjl" Jan 31 09:30:29 crc kubenswrapper[4830]: I0131 09:30:29.008041 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnglm\" (UniqueName: \"kubernetes.io/projected/fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba-kube-api-access-bnglm\") pod \"aodh-db-sync-lmhjl\" (UID: \"fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba\") " pod="openstack/aodh-db-sync-lmhjl" Jan 31 09:30:29 crc kubenswrapper[4830]: I0131 09:30:29.008164 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba-combined-ca-bundle\") pod \"aodh-db-sync-lmhjl\" (UID: \"fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba\") " pod="openstack/aodh-db-sync-lmhjl" Jan 31 09:30:29 crc kubenswrapper[4830]: I0131 09:30:29.008244 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba-config-data\") pod \"aodh-db-sync-lmhjl\" (UID: \"fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba\") " pod="openstack/aodh-db-sync-lmhjl" Jan 31 09:30:29 crc kubenswrapper[4830]: I0131 09:30:29.111199 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba-scripts\") pod \"aodh-db-sync-lmhjl\" (UID: \"fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba\") " pod="openstack/aodh-db-sync-lmhjl" Jan 31 09:30:29 crc kubenswrapper[4830]: I0131 09:30:29.111319 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnglm\" (UniqueName: \"kubernetes.io/projected/fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba-kube-api-access-bnglm\") pod \"aodh-db-sync-lmhjl\" (UID: \"fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba\") " pod="openstack/aodh-db-sync-lmhjl" Jan 31 09:30:29 crc kubenswrapper[4830]: I0131 09:30:29.111403 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba-combined-ca-bundle\") pod \"aodh-db-sync-lmhjl\" (UID: \"fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba\") " pod="openstack/aodh-db-sync-lmhjl" Jan 31 09:30:29 crc kubenswrapper[4830]: I0131 09:30:29.111458 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba-config-data\") pod \"aodh-db-sync-lmhjl\" (UID: \"fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba\") " pod="openstack/aodh-db-sync-lmhjl" Jan 31 09:30:29 crc kubenswrapper[4830]: I0131 09:30:29.118029 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba-scripts\") pod \"aodh-db-sync-lmhjl\" (UID: \"fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba\") " pod="openstack/aodh-db-sync-lmhjl" Jan 31 09:30:29 crc kubenswrapper[4830]: I0131 09:30:29.119554 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba-config-data\") pod \"aodh-db-sync-lmhjl\" (UID: \"fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba\") " pod="openstack/aodh-db-sync-lmhjl" Jan 31 09:30:29 crc kubenswrapper[4830]: I0131 09:30:29.133596 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba-combined-ca-bundle\") pod \"aodh-db-sync-lmhjl\" (UID: \"fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba\") " pod="openstack/aodh-db-sync-lmhjl" Jan 31 09:30:29 crc kubenswrapper[4830]: I0131 09:30:29.133647 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnglm\" (UniqueName: \"kubernetes.io/projected/fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba-kube-api-access-bnglm\") pod \"aodh-db-sync-lmhjl\" (UID: \"fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba\") " pod="openstack/aodh-db-sync-lmhjl" Jan 31 09:30:29 crc kubenswrapper[4830]: I0131 09:30:29.272972 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-lmhjl" Jan 31 09:30:29 crc kubenswrapper[4830]: W0131 09:30:29.886044 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcb6f2f8_cfc4_4ee0_999d_f599ce74f7ba.slice/crio-399f0eab52a93286fbc3acc1070f23469bcd3ce7bbf43584926e770754b735d3 WatchSource:0}: Error finding container 399f0eab52a93286fbc3acc1070f23469bcd3ce7bbf43584926e770754b735d3: Status 404 returned error can't find the container with id 399f0eab52a93286fbc3acc1070f23469bcd3ce7bbf43584926e770754b735d3 Jan 31 09:30:29 crc kubenswrapper[4830]: I0131 09:30:29.887279 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-lmhjl"] Jan 31 09:30:30 crc kubenswrapper[4830]: I0131 09:30:30.253933 4830 scope.go:117] "RemoveContainer" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0" Jan 31 09:30:30 crc kubenswrapper[4830]: E0131 09:30:30.254402 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:30:30 crc kubenswrapper[4830]: I0131 09:30:30.294470 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f854d4ac-83f5-411d-a3f0-67a0b771b474" path="/var/lib/kubelet/pods/f854d4ac-83f5-411d-a3f0-67a0b771b474/volumes" Jan 31 09:30:30 crc kubenswrapper[4830]: I0131 09:30:30.312115 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-lmhjl" event={"ID":"fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba","Type":"ContainerStarted","Data":"399f0eab52a93286fbc3acc1070f23469bcd3ce7bbf43584926e770754b735d3"} Jan 31 09:30:31 crc kubenswrapper[4830]: E0131 09:30:31.785167 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de595e3300571102b1c4fb25a65643a6193ff774aeaf83f24ec9a38461e8a8e6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 31 09:30:31 crc kubenswrapper[4830]: E0131 09:30:31.789645 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de595e3300571102b1c4fb25a65643a6193ff774aeaf83f24ec9a38461e8a8e6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 31 09:30:31 crc kubenswrapper[4830]: E0131 09:30:31.796355 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de595e3300571102b1c4fb25a65643a6193ff774aeaf83f24ec9a38461e8a8e6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 31 09:30:31 crc kubenswrapper[4830]: E0131 09:30:31.796430 4830 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-587fd67997-pvqls" podUID="e5627b4b-982b-41c1-8ff9-8ca07513680d" containerName="heat-engine" Jan 31 09:30:32 crc 
kubenswrapper[4830]: I0131 09:30:32.360999 4830 generic.go:334] "Generic (PLEG): container finished" podID="e5627b4b-982b-41c1-8ff9-8ca07513680d" containerID="de595e3300571102b1c4fb25a65643a6193ff774aeaf83f24ec9a38461e8a8e6" exitCode=0 Jan 31 09:30:32 crc kubenswrapper[4830]: I0131 09:30:32.361572 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-587fd67997-pvqls" event={"ID":"e5627b4b-982b-41c1-8ff9-8ca07513680d","Type":"ContainerDied","Data":"de595e3300571102b1c4fb25a65643a6193ff774aeaf83f24ec9a38461e8a8e6"} Jan 31 09:30:32 crc kubenswrapper[4830]: I0131 09:30:32.779193 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-587fd67997-pvqls" Jan 31 09:30:32 crc kubenswrapper[4830]: I0131 09:30:32.866911 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5627b4b-982b-41c1-8ff9-8ca07513680d-config-data-custom\") pod \"e5627b4b-982b-41c1-8ff9-8ca07513680d\" (UID: \"e5627b4b-982b-41c1-8ff9-8ca07513680d\") " Jan 31 09:30:32 crc kubenswrapper[4830]: I0131 09:30:32.867166 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5627b4b-982b-41c1-8ff9-8ca07513680d-config-data\") pod \"e5627b4b-982b-41c1-8ff9-8ca07513680d\" (UID: \"e5627b4b-982b-41c1-8ff9-8ca07513680d\") " Jan 31 09:30:32 crc kubenswrapper[4830]: I0131 09:30:32.867195 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5627b4b-982b-41c1-8ff9-8ca07513680d-combined-ca-bundle\") pod \"e5627b4b-982b-41c1-8ff9-8ca07513680d\" (UID: \"e5627b4b-982b-41c1-8ff9-8ca07513680d\") " Jan 31 09:30:32 crc kubenswrapper[4830]: I0131 09:30:32.868463 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsmkl\" (UniqueName: \"kubernetes.io/projected/e5627b4b-982b-41c1-8ff9-8ca07513680d-kube-api-access-qsmkl\") pod \"e5627b4b-982b-41c1-8ff9-8ca07513680d\" (UID: \"e5627b4b-982b-41c1-8ff9-8ca07513680d\") " Jan 31 09:30:32 crc kubenswrapper[4830]: I0131 09:30:32.876244 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5627b4b-982b-41c1-8ff9-8ca07513680d-kube-api-access-qsmkl" (OuterVolumeSpecName: "kube-api-access-qsmkl") pod "e5627b4b-982b-41c1-8ff9-8ca07513680d" (UID: "e5627b4b-982b-41c1-8ff9-8ca07513680d"). InnerVolumeSpecName "kube-api-access-qsmkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:30:32 crc kubenswrapper[4830]: I0131 09:30:32.913009 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5627b4b-982b-41c1-8ff9-8ca07513680d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e5627b4b-982b-41c1-8ff9-8ca07513680d" (UID: "e5627b4b-982b-41c1-8ff9-8ca07513680d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:30:32 crc kubenswrapper[4830]: I0131 09:30:32.938518 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5627b4b-982b-41c1-8ff9-8ca07513680d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5627b4b-982b-41c1-8ff9-8ca07513680d" (UID: "e5627b4b-982b-41c1-8ff9-8ca07513680d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:30:32 crc kubenswrapper[4830]: I0131 09:30:32.973577 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5627b4b-982b-41c1-8ff9-8ca07513680d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:32 crc kubenswrapper[4830]: I0131 09:30:32.973627 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsmkl\" (UniqueName: \"kubernetes.io/projected/e5627b4b-982b-41c1-8ff9-8ca07513680d-kube-api-access-qsmkl\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:32 crc kubenswrapper[4830]: I0131 09:30:32.973642 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5627b4b-982b-41c1-8ff9-8ca07513680d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:33 crc kubenswrapper[4830]: I0131 09:30:33.055852 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5627b4b-982b-41c1-8ff9-8ca07513680d-config-data" (OuterVolumeSpecName: "config-data") pod "e5627b4b-982b-41c1-8ff9-8ca07513680d" (UID: "e5627b4b-982b-41c1-8ff9-8ca07513680d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:30:33 crc kubenswrapper[4830]: I0131 09:30:33.079485 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5627b4b-982b-41c1-8ff9-8ca07513680d-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:33 crc kubenswrapper[4830]: I0131 09:30:33.383502 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-587fd67997-pvqls" event={"ID":"e5627b4b-982b-41c1-8ff9-8ca07513680d","Type":"ContainerDied","Data":"a77be11b8b426e5fcbbca9cdf9bc97dfdb9a46bfefa1001c18b5f9ee3d60dda7"} Jan 31 09:30:33 crc kubenswrapper[4830]: I0131 09:30:33.383635 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-587fd67997-pvqls" Jan 31 09:30:33 crc kubenswrapper[4830]: I0131 09:30:33.384119 4830 scope.go:117] "RemoveContainer" containerID="de595e3300571102b1c4fb25a65643a6193ff774aeaf83f24ec9a38461e8a8e6" Jan 31 09:30:33 crc kubenswrapper[4830]: I0131 09:30:33.480529 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-587fd67997-pvqls"] Jan 31 09:30:33 crc kubenswrapper[4830]: I0131 09:30:33.504955 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-587fd67997-pvqls"] Jan 31 09:30:34 crc kubenswrapper[4830]: I0131 09:30:34.269258 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5627b4b-982b-41c1-8ff9-8ca07513680d" path="/var/lib/kubelet/pods/e5627b4b-982b-41c1-8ff9-8ca07513680d/volumes" Jan 31 09:30:37 crc kubenswrapper[4830]: I0131 09:30:37.458008 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-lmhjl" event={"ID":"fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba","Type":"ContainerStarted","Data":"aaacdfb87dea1afdbfb12b6ef8c917df383edc7d71a1e313a10a0b822910ac0b"} Jan 31 09:30:37 crc kubenswrapper[4830]: I0131 09:30:37.484095 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-lmhjl" podStartSLOduration=2.4364539 podStartE2EDuration="9.4840432s" podCreationTimestamp="2026-01-31 09:30:28 +0000 UTC" firstStartedPulling="2026-01-31 09:30:29.89031025 +0000 UTC m=+1774.383672692" lastFinishedPulling="2026-01-31 09:30:36.93789955 +0000 UTC m=+1781.431261992" observedRunningTime="2026-01-31 09:30:37.476934477 +0000 UTC m=+1781.970296919" watchObservedRunningTime="2026-01-31 09:30:37.4840432 +0000 UTC m=+1781.977405642" Jan 31 09:30:37 crc kubenswrapper[4830]: I0131 09:30:37.549055 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="a5a14eb0-7ed3-44fd-a1e2-f8d582a70062" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.13:5671: connect: connection refused" Jan 31 09:30:38 crc kubenswrapper[4830]: I0131 09:30:38.483124 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf" event={"ID":"b8c10133-0080-4638-a514-b1d8c87873e4","Type":"ContainerStarted","Data":"931bc018d4b2ff5efd1abb7abd684008778530bc03600da3681d215466664648"} Jan 31 09:30:38 crc kubenswrapper[4830]: I0131 09:30:38.552130 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf" podStartSLOduration=4.29198803 podStartE2EDuration="33.552101347s" podCreationTimestamp="2026-01-31 09:30:05 +0000 UTC" firstStartedPulling="2026-01-31 09:30:08.464040145 +0000 UTC m=+1752.957402587" lastFinishedPulling="2026-01-31 09:30:37.724153462 +0000 UTC m=+1782.217515904" observedRunningTime="2026-01-31 09:30:38.509869218 +0000 UTC m=+1783.003231660" watchObservedRunningTime="2026-01-31 09:30:38.552101347 +0000 UTC m=+1783.045463789" Jan 31 09:30:41 crc kubenswrapper[4830]: I0131 09:30:41.252387 4830 scope.go:117] "RemoveContainer" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0" Jan 31 09:30:41 crc kubenswrapper[4830]: E0131 09:30:41.252671 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:30:41 crc kubenswrapper[4830]: I0131 09:30:41.541074 4830 generic.go:334] "Generic (PLEG): container finished" podID="fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba" containerID="aaacdfb87dea1afdbfb12b6ef8c917df383edc7d71a1e313a10a0b822910ac0b" exitCode=0 Jan 31 09:30:41 crc kubenswrapper[4830]: I0131 09:30:41.541126 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-lmhjl" event={"ID":"fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba","Type":"ContainerDied","Data":"aaacdfb87dea1afdbfb12b6ef8c917df383edc7d71a1e313a10a0b822910ac0b"} Jan 31 09:30:42 crc kubenswrapper[4830]: I0131 09:30:42.997072 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-lmhjl" Jan 31 09:30:43 crc kubenswrapper[4830]: I0131 09:30:43.111830 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba-combined-ca-bundle\") pod \"fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba\" (UID: \"fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba\") " Jan 31 09:30:43 crc kubenswrapper[4830]: I0131 09:30:43.112052 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba-scripts\") pod \"fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba\" (UID: \"fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba\") " Jan 31 09:30:43 crc kubenswrapper[4830]: I0131 09:30:43.112096 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba-config-data\") pod \"fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba\" (UID: \"fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba\") " Jan 31 09:30:43 crc kubenswrapper[4830]: I0131 09:30:43.112291 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnglm\" (UniqueName: \"kubernetes.io/projected/fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba-kube-api-access-bnglm\") pod \"fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba\" (UID: \"fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba\") " Jan 31 09:30:43 crc kubenswrapper[4830]: I0131 09:30:43.124024 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba-scripts" (OuterVolumeSpecName: "scripts") pod "fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba" (UID: "fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:30:43 crc kubenswrapper[4830]: I0131 09:30:43.130651 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba-kube-api-access-bnglm" (OuterVolumeSpecName: "kube-api-access-bnglm") pod "fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba" (UID: "fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba"). InnerVolumeSpecName "kube-api-access-bnglm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:30:43 crc kubenswrapper[4830]: I0131 09:30:43.159606 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba-config-data" (OuterVolumeSpecName: "config-data") pod "fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba" (UID: "fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:30:43 crc kubenswrapper[4830]: I0131 09:30:43.161977 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba" (UID: "fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:30:43 crc kubenswrapper[4830]: I0131 09:30:43.216526 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:43 crc kubenswrapper[4830]: I0131 09:30:43.216960 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnglm\" (UniqueName: \"kubernetes.io/projected/fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba-kube-api-access-bnglm\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:43 crc kubenswrapper[4830]: I0131 09:30:43.216981 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:43 crc kubenswrapper[4830]: I0131 09:30:43.216998 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:43 crc kubenswrapper[4830]: I0131 09:30:43.568316 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-lmhjl" event={"ID":"fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba","Type":"ContainerDied","Data":"399f0eab52a93286fbc3acc1070f23469bcd3ce7bbf43584926e770754b735d3"} Jan 31 09:30:43 crc kubenswrapper[4830]: I0131 09:30:43.568365 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="399f0eab52a93286fbc3acc1070f23469bcd3ce7bbf43584926e770754b735d3" Jan 31 09:30:43 crc kubenswrapper[4830]: I0131 09:30:43.568375 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-lmhjl" Jan 31 09:30:43 crc kubenswrapper[4830]: I0131 09:30:43.927628 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 31 09:30:43 crc kubenswrapper[4830]: I0131 09:30:43.928036 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="04ec026b-cc18-426d-a922-7c1c73939a4a" containerName="aodh-api" containerID="cri-o://50071e6b5d96cd1993d9e05ebdae810927d7cc3669f619276451a68af26fc2ac" gracePeriod=30 Jan 31 09:30:43 crc kubenswrapper[4830]: I0131 09:30:43.928130 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="04ec026b-cc18-426d-a922-7c1c73939a4a" containerName="aodh-listener" containerID="cri-o://66d4968000c3864f1f017f991c368c64d34c57e56b3554b8ec09929e5e851568" gracePeriod=30 Jan 31 09:30:43 crc kubenswrapper[4830]: I0131 09:30:43.928195 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="04ec026b-cc18-426d-a922-7c1c73939a4a" containerName="aodh-evaluator" containerID="cri-o://974e75bfcedc8ce61859794732b9b1f995c1eade9243c45f9113a7de8aa4a053" gracePeriod=30 Jan 31 09:30:43 crc kubenswrapper[4830]: I0131 09:30:43.928438 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="04ec026b-cc18-426d-a922-7c1c73939a4a" containerName="aodh-notifier" containerID="cri-o://3c966326a9cedf072323789da4c1bc61ed1746f57f26f7f7c829c7a2f89d0118" gracePeriod=30 Jan 31 09:30:44 crc kubenswrapper[4830]: I0131 09:30:44.584913 4830 generic.go:334] "Generic (PLEG): container finished" podID="04ec026b-cc18-426d-a922-7c1c73939a4a" containerID="974e75bfcedc8ce61859794732b9b1f995c1eade9243c45f9113a7de8aa4a053" exitCode=0 Jan 31 09:30:44 crc kubenswrapper[4830]: I0131 09:30:44.585289 4830 generic.go:334] "Generic (PLEG): container finished" podID="04ec026b-cc18-426d-a922-7c1c73939a4a" containerID="50071e6b5d96cd1993d9e05ebdae810927d7cc3669f619276451a68af26fc2ac" exitCode=0 Jan 31 09:30:44 crc kubenswrapper[4830]: I0131 09:30:44.585322 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"04ec026b-cc18-426d-a922-7c1c73939a4a","Type":"ContainerDied","Data":"974e75bfcedc8ce61859794732b9b1f995c1eade9243c45f9113a7de8aa4a053"} Jan 31 09:30:44 crc kubenswrapper[4830]: I0131 09:30:44.585358 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"04ec026b-cc18-426d-a922-7c1c73939a4a","Type":"ContainerDied","Data":"50071e6b5d96cd1993d9e05ebdae810927d7cc3669f619276451a68af26fc2ac"} Jan 31 09:30:47 crc kubenswrapper[4830]: I0131 09:30:47.547993 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 31 09:30:51 crc kubenswrapper[4830]: I0131 09:30:51.717566 4830 generic.go:334] "Generic (PLEG): container finished" podID="04ec026b-cc18-426d-a922-7c1c73939a4a" containerID="66d4968000c3864f1f017f991c368c64d34c57e56b3554b8ec09929e5e851568" exitCode=0 Jan 31 09:30:51 crc kubenswrapper[4830]: I0131 09:30:51.718201 4830 generic.go:334] "Generic (PLEG): container finished" podID="04ec026b-cc18-426d-a922-7c1c73939a4a" containerID="3c966326a9cedf072323789da4c1bc61ed1746f57f26f7f7c829c7a2f89d0118" exitCode=0 Jan 31 09:30:51 crc kubenswrapper[4830]: I0131 09:30:51.717700 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"04ec026b-cc18-426d-a922-7c1c73939a4a","Type":"ContainerDied","Data":"66d4968000c3864f1f017f991c368c64d34c57e56b3554b8ec09929e5e851568"} Jan 31 09:30:51 crc kubenswrapper[4830]: I0131 09:30:51.718277 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"04ec026b-cc18-426d-a922-7c1c73939a4a","Type":"ContainerDied","Data":"3c966326a9cedf072323789da4c1bc61ed1746f57f26f7f7c829c7a2f89d0118"} Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.086541 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.176419 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ttfx\" (UniqueName: \"kubernetes.io/projected/04ec026b-cc18-426d-a922-7c1c73939a4a-kube-api-access-4ttfx\") pod \"04ec026b-cc18-426d-a922-7c1c73939a4a\" (UID: \"04ec026b-cc18-426d-a922-7c1c73939a4a\") " Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.176624 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-scripts\") pod \"04ec026b-cc18-426d-a922-7c1c73939a4a\" (UID: \"04ec026b-cc18-426d-a922-7c1c73939a4a\") " Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.176687 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-public-tls-certs\") pod \"04ec026b-cc18-426d-a922-7c1c73939a4a\" (UID: \"04ec026b-cc18-426d-a922-7c1c73939a4a\") " Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.176741 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-internal-tls-certs\") pod \"04ec026b-cc18-426d-a922-7c1c73939a4a\" (UID: \"04ec026b-cc18-426d-a922-7c1c73939a4a\") " Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.176822 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-combined-ca-bundle\") pod \"04ec026b-cc18-426d-a922-7c1c73939a4a\" (UID: \"04ec026b-cc18-426d-a922-7c1c73939a4a\") " Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.176864 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-config-data\") pod \"04ec026b-cc18-426d-a922-7c1c73939a4a\" (UID: \"04ec026b-cc18-426d-a922-7c1c73939a4a\") " Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.186386 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04ec026b-cc18-426d-a922-7c1c73939a4a-kube-api-access-4ttfx" (OuterVolumeSpecName: "kube-api-access-4ttfx") pod "04ec026b-cc18-426d-a922-7c1c73939a4a" (UID: "04ec026b-cc18-426d-a922-7c1c73939a4a"). InnerVolumeSpecName "kube-api-access-4ttfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.207020 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-scripts" (OuterVolumeSpecName: "scripts") pod "04ec026b-cc18-426d-a922-7c1c73939a4a" (UID: "04ec026b-cc18-426d-a922-7c1c73939a4a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.255782 4830 scope.go:117] "RemoveContainer" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.262935 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "04ec026b-cc18-426d-a922-7c1c73939a4a" (UID: "04ec026b-cc18-426d-a922-7c1c73939a4a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.265147 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "04ec026b-cc18-426d-a922-7c1c73939a4a" (UID: "04ec026b-cc18-426d-a922-7c1c73939a4a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.280366 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.280408 4830 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.280421 4830 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.280431 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ttfx\" (UniqueName: \"kubernetes.io/projected/04ec026b-cc18-426d-a922-7c1c73939a4a-kube-api-access-4ttfx\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.333203 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04ec026b-cc18-426d-a922-7c1c73939a4a" (UID: "04ec026b-cc18-426d-a922-7c1c73939a4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.346716 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-config-data" (OuterVolumeSpecName: "config-data") pod "04ec026b-cc18-426d-a922-7c1c73939a4a" (UID: "04ec026b-cc18-426d-a922-7c1c73939a4a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.386718 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.386771 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04ec026b-cc18-426d-a922-7c1c73939a4a-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.738119 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.738325 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"04ec026b-cc18-426d-a922-7c1c73939a4a","Type":"ContainerDied","Data":"64823d46b88ca3668f9bb2f62a82cf4a433add1937021307d25ae889d4277cec"} Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.738777 4830 scope.go:117] "RemoveContainer" containerID="66d4968000c3864f1f017f991c368c64d34c57e56b3554b8ec09929e5e851568" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.742094 4830 generic.go:334] "Generic (PLEG): container finished" podID="b8c10133-0080-4638-a514-b1d8c87873e4" containerID="931bc018d4b2ff5efd1abb7abd684008778530bc03600da3681d215466664648" exitCode=0 Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.742138 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf" event={"ID":"b8c10133-0080-4638-a514-b1d8c87873e4","Type":"ContainerDied","Data":"931bc018d4b2ff5efd1abb7abd684008778530bc03600da3681d215466664648"} Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.788045 4830 scope.go:117] "RemoveContainer" containerID="3c966326a9cedf072323789da4c1bc61ed1746f57f26f7f7c829c7a2f89d0118" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.818702 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.819475 4830 scope.go:117] "RemoveContainer" containerID="974e75bfcedc8ce61859794732b9b1f995c1eade9243c45f9113a7de8aa4a053" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.837036 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.849366 4830 scope.go:117] "RemoveContainer" containerID="50071e6b5d96cd1993d9e05ebdae810927d7cc3669f619276451a68af26fc2ac" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.850684 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 31 09:30:52 crc kubenswrapper[4830]: E0131 09:30:52.851368 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ec026b-cc18-426d-a922-7c1c73939a4a" containerName="aodh-evaluator" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.851387 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ec026b-cc18-426d-a922-7c1c73939a4a" containerName="aodh-evaluator" Jan 31 09:30:52 crc kubenswrapper[4830]: E0131 09:30:52.851408 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ec026b-cc18-426d-a922-7c1c73939a4a" containerName="aodh-api" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.851418 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ec026b-cc18-426d-a922-7c1c73939a4a" containerName="aodh-api" 
Jan 31 09:30:52 crc kubenswrapper[4830]: E0131 09:30:52.851439 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba" containerName="aodh-db-sync" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.851445 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba" containerName="aodh-db-sync" Jan 31 09:30:52 crc kubenswrapper[4830]: E0131 09:30:52.851460 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ec026b-cc18-426d-a922-7c1c73939a4a" containerName="aodh-listener" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.851467 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ec026b-cc18-426d-a922-7c1c73939a4a" containerName="aodh-listener" Jan 31 09:30:52 crc kubenswrapper[4830]: E0131 09:30:52.851484 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5627b4b-982b-41c1-8ff9-8ca07513680d" containerName="heat-engine" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.851490 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5627b4b-982b-41c1-8ff9-8ca07513680d" containerName="heat-engine" Jan 31 09:30:52 crc kubenswrapper[4830]: E0131 09:30:52.851517 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ec026b-cc18-426d-a922-7c1c73939a4a" containerName="aodh-notifier" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.851524 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ec026b-cc18-426d-a922-7c1c73939a4a" containerName="aodh-notifier" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.851816 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="04ec026b-cc18-426d-a922-7c1c73939a4a" containerName="aodh-notifier" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.851863 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="04ec026b-cc18-426d-a922-7c1c73939a4a" containerName="aodh-listener" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.851873 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="04ec026b-cc18-426d-a922-7c1c73939a4a" containerName="aodh-api" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.851889 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba" containerName="aodh-db-sync" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.851904 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="04ec026b-cc18-426d-a922-7c1c73939a4a" containerName="aodh-evaluator" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.851913 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5627b4b-982b-41c1-8ff9-8ca07513680d" containerName="heat-engine" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.854785 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.864050 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.864395 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-mz4qw" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.864519 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.865499 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.867719 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.872679 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.902079 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11142d9d-4725-4a33-b10e-8fc21e30c6a3-internal-tls-certs\") pod \"aodh-0\" (UID: \"11142d9d-4725-4a33-b10e-8fc21e30c6a3\") " pod="openstack/aodh-0" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.902198 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11142d9d-4725-4a33-b10e-8fc21e30c6a3-scripts\") pod \"aodh-0\" (UID: \"11142d9d-4725-4a33-b10e-8fc21e30c6a3\") " pod="openstack/aodh-0" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.902280 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11142d9d-4725-4a33-b10e-8fc21e30c6a3-config-data\") pod \"aodh-0\" (UID: \"11142d9d-4725-4a33-b10e-8fc21e30c6a3\") " pod="openstack/aodh-0" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.902629 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11142d9d-4725-4a33-b10e-8fc21e30c6a3-public-tls-certs\") pod \"aodh-0\" (UID: \"11142d9d-4725-4a33-b10e-8fc21e30c6a3\") " pod="openstack/aodh-0" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.902933 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x24qt\" (UniqueName: \"kubernetes.io/projected/11142d9d-4725-4a33-b10e-8fc21e30c6a3-kube-api-access-x24qt\") pod \"aodh-0\" (UID: \"11142d9d-4725-4a33-b10e-8fc21e30c6a3\") " pod="openstack/aodh-0" Jan 31 09:30:52 crc kubenswrapper[4830]: I0131 09:30:52.903073 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11142d9d-4725-4a33-b10e-8fc21e30c6a3-combined-ca-bundle\") pod \"aodh-0\" (UID: \"11142d9d-4725-4a33-b10e-8fc21e30c6a3\") " pod="openstack/aodh-0" Jan 31 09:30:53 crc kubenswrapper[4830]: I0131 09:30:53.005997 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11142d9d-4725-4a33-b10e-8fc21e30c6a3-combined-ca-bundle\") pod \"aodh-0\" (UID: \"11142d9d-4725-4a33-b10e-8fc21e30c6a3\") " pod="openstack/aodh-0" 
Jan 31 09:30:53 crc kubenswrapper[4830]: I0131 09:30:53.006139 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11142d9d-4725-4a33-b10e-8fc21e30c6a3-internal-tls-certs\") pod \"aodh-0\" (UID: \"11142d9d-4725-4a33-b10e-8fc21e30c6a3\") " pod="openstack/aodh-0" Jan 31 09:30:53 crc kubenswrapper[4830]: I0131 09:30:53.006183 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11142d9d-4725-4a33-b10e-8fc21e30c6a3-scripts\") pod \"aodh-0\" (UID: \"11142d9d-4725-4a33-b10e-8fc21e30c6a3\") " pod="openstack/aodh-0" Jan 31 09:30:53 crc kubenswrapper[4830]: I0131 09:30:53.006216 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11142d9d-4725-4a33-b10e-8fc21e30c6a3-config-data\") pod \"aodh-0\" (UID: \"11142d9d-4725-4a33-b10e-8fc21e30c6a3\") " pod="openstack/aodh-0" Jan 31 09:30:53 crc kubenswrapper[4830]: I0131 09:30:53.006273 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11142d9d-4725-4a33-b10e-8fc21e30c6a3-public-tls-certs\") pod \"aodh-0\" (UID: \"11142d9d-4725-4a33-b10e-8fc21e30c6a3\") " pod="openstack/aodh-0" Jan 31 09:30:53 crc kubenswrapper[4830]: I0131 09:30:53.006327 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x24qt\" (UniqueName: \"kubernetes.io/projected/11142d9d-4725-4a33-b10e-8fc21e30c6a3-kube-api-access-x24qt\") pod \"aodh-0\" (UID: \"11142d9d-4725-4a33-b10e-8fc21e30c6a3\") " pod="openstack/aodh-0" Jan 31 09:30:53 crc kubenswrapper[4830]: I0131 09:30:53.012263 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11142d9d-4725-4a33-b10e-8fc21e30c6a3-scripts\") pod \"aodh-0\" (UID: \"11142d9d-4725-4a33-b10e-8fc21e30c6a3\") " pod="openstack/aodh-0" Jan 31 09:30:53 crc kubenswrapper[4830]: I0131 09:30:53.012436 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11142d9d-4725-4a33-b10e-8fc21e30c6a3-public-tls-certs\") pod \"aodh-0\" (UID: \"11142d9d-4725-4a33-b10e-8fc21e30c6a3\") " pod="openstack/aodh-0" Jan 31 09:30:53 crc kubenswrapper[4830]: I0131 09:30:53.012676 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11142d9d-4725-4a33-b10e-8fc21e30c6a3-combined-ca-bundle\") pod \"aodh-0\" (UID: \"11142d9d-4725-4a33-b10e-8fc21e30c6a3\") " pod="openstack/aodh-0" Jan 31 09:30:53 crc kubenswrapper[4830]: I0131 09:30:53.015769 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11142d9d-4725-4a33-b10e-8fc21e30c6a3-config-data\") pod \"aodh-0\" (UID: \"11142d9d-4725-4a33-b10e-8fc21e30c6a3\") " pod="openstack/aodh-0" Jan 31 09:30:53 crc kubenswrapper[4830]: I0131 09:30:53.028717 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11142d9d-4725-4a33-b10e-8fc21e30c6a3-internal-tls-certs\") pod \"aodh-0\" (UID: \"11142d9d-4725-4a33-b10e-8fc21e30c6a3\") " pod="openstack/aodh-0" Jan 31 09:30:53 crc kubenswrapper[4830]: I0131 09:30:53.030316 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x24qt\" 
(UniqueName: \"kubernetes.io/projected/11142d9d-4725-4a33-b10e-8fc21e30c6a3-kube-api-access-x24qt\") pod \"aodh-0\" (UID: \"11142d9d-4725-4a33-b10e-8fc21e30c6a3\") " pod="openstack/aodh-0" Jan 31 09:30:53 crc kubenswrapper[4830]: I0131 09:30:53.177015 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 31 09:30:53 crc kubenswrapper[4830]: I0131 09:30:53.763754 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerStarted","Data":"1bae58408ac9eb8b3a90089200da7949f59de4442790857fb510d734d497929e"} Jan 31 09:30:53 crc kubenswrapper[4830]: I0131 09:30:53.866239 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.273100 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04ec026b-cc18-426d-a922-7c1c73939a4a" path="/var/lib/kubelet/pods/04ec026b-cc18-426d-a922-7c1c73939a4a/volumes" Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.444238 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf" Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.588134 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8c10133-0080-4638-a514-b1d8c87873e4-inventory\") pod \"b8c10133-0080-4638-a514-b1d8c87873e4\" (UID: \"b8c10133-0080-4638-a514-b1d8c87873e4\") " Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.588783 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-726ch\" (UniqueName: \"kubernetes.io/projected/b8c10133-0080-4638-a514-b1d8c87873e4-kube-api-access-726ch\") pod \"b8c10133-0080-4638-a514-b1d8c87873e4\" (UID: \"b8c10133-0080-4638-a514-b1d8c87873e4\") " Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.588875 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b8c10133-0080-4638-a514-b1d8c87873e4-ssh-key-openstack-edpm-ipam\") pod \"b8c10133-0080-4638-a514-b1d8c87873e4\" (UID: \"b8c10133-0080-4638-a514-b1d8c87873e4\") " Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.589107 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c10133-0080-4638-a514-b1d8c87873e4-repo-setup-combined-ca-bundle\") pod \"b8c10133-0080-4638-a514-b1d8c87873e4\" (UID: \"b8c10133-0080-4638-a514-b1d8c87873e4\") " Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.595296 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8c10133-0080-4638-a514-b1d8c87873e4-kube-api-access-726ch" (OuterVolumeSpecName: "kube-api-access-726ch") pod "b8c10133-0080-4638-a514-b1d8c87873e4" (UID: "b8c10133-0080-4638-a514-b1d8c87873e4"). InnerVolumeSpecName "kube-api-access-726ch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.599565 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8c10133-0080-4638-a514-b1d8c87873e4-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "b8c10133-0080-4638-a514-b1d8c87873e4" (UID: "b8c10133-0080-4638-a514-b1d8c87873e4"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.646237 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8c10133-0080-4638-a514-b1d8c87873e4-inventory" (OuterVolumeSpecName: "inventory") pod "b8c10133-0080-4638-a514-b1d8c87873e4" (UID: "b8c10133-0080-4638-a514-b1d8c87873e4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.648503 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8c10133-0080-4638-a514-b1d8c87873e4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b8c10133-0080-4638-a514-b1d8c87873e4" (UID: "b8c10133-0080-4638-a514-b1d8c87873e4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.692445 4830 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8c10133-0080-4638-a514-b1d8c87873e4-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.692487 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-726ch\" (UniqueName: \"kubernetes.io/projected/b8c10133-0080-4638-a514-b1d8c87873e4-kube-api-access-726ch\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.692501 4830 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b8c10133-0080-4638-a514-b1d8c87873e4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.692511 4830 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c10133-0080-4638-a514-b1d8c87873e4-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.786402 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf" event={"ID":"b8c10133-0080-4638-a514-b1d8c87873e4","Type":"ContainerDied","Data":"7c3ff02e0269fe869b34464ed0539b3ac9638de0389dabc46291bb675d23841a"} Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.786843 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c3ff02e0269fe869b34464ed0539b3ac9638de0389dabc46291bb675d23841a" Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.786443 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf" Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.789320 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"11142d9d-4725-4a33-b10e-8fc21e30c6a3","Type":"ContainerStarted","Data":"70e5de62960d78d52232ee8852a1a45739456c1da3a18aef21958a2855f9b2ed"} Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.789391 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"11142d9d-4725-4a33-b10e-8fc21e30c6a3","Type":"ContainerStarted","Data":"c3ba52c6c482a79957266021a91a4b59c0e7a99ff5f3c9d393c120c8e663fff4"} Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.962025 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-z7ztp"] Jan 31 09:30:54 crc kubenswrapper[4830]: E0131 09:30:54.964144 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8c10133-0080-4638-a514-b1d8c87873e4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.964171 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8c10133-0080-4638-a514-b1d8c87873e4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.964639 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8c10133-0080-4638-a514-b1d8c87873e4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.974718 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z7ztp" Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.978431 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.978892 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.979053 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vd24j" Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.979220 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 09:30:54 crc kubenswrapper[4830]: I0131 09:30:54.994787 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-z7ztp"] Jan 31 09:30:55 crc kubenswrapper[4830]: I0131 09:30:55.112645 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/795ae09a-4f64-42d2-ad54-45bf5b5f8954-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-z7ztp\" (UID: \"795ae09a-4f64-42d2-ad54-45bf5b5f8954\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z7ztp" Jan 31 09:30:55 crc kubenswrapper[4830]: I0131 09:30:55.113044 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/795ae09a-4f64-42d2-ad54-45bf5b5f8954-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-z7ztp\" (UID: \"795ae09a-4f64-42d2-ad54-45bf5b5f8954\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z7ztp" Jan 31 09:30:55 crc kubenswrapper[4830]: I0131 09:30:55.113200 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh64h\" (UniqueName: \"kubernetes.io/projected/795ae09a-4f64-42d2-ad54-45bf5b5f8954-kube-api-access-zh64h\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-z7ztp\" (UID: \"795ae09a-4f64-42d2-ad54-45bf5b5f8954\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z7ztp" Jan 31 09:30:55 crc kubenswrapper[4830]: I0131 09:30:55.216504 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/795ae09a-4f64-42d2-ad54-45bf5b5f8954-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-z7ztp\" (UID: \"795ae09a-4f64-42d2-ad54-45bf5b5f8954\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z7ztp" Jan 31 09:30:55 crc kubenswrapper[4830]: I0131 09:30:55.216576 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/795ae09a-4f64-42d2-ad54-45bf5b5f8954-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-z7ztp\" (UID: \"795ae09a-4f64-42d2-ad54-45bf5b5f8954\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z7ztp" Jan 31 09:30:55 crc kubenswrapper[4830]: I0131 09:30:55.217003 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh64h\" (UniqueName: \"kubernetes.io/projected/795ae09a-4f64-42d2-ad54-45bf5b5f8954-kube-api-access-zh64h\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-z7ztp\" (UID: \"795ae09a-4f64-42d2-ad54-45bf5b5f8954\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z7ztp" Jan 31 09:30:55 crc kubenswrapper[4830]: I0131 09:30:55.223453 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/795ae09a-4f64-42d2-ad54-45bf5b5f8954-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-z7ztp\" (UID: \"795ae09a-4f64-42d2-ad54-45bf5b5f8954\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z7ztp" Jan 31 09:30:55 crc kubenswrapper[4830]: I0131 09:30:55.224710 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/795ae09a-4f64-42d2-ad54-45bf5b5f8954-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-z7ztp\" (UID: \"795ae09a-4f64-42d2-ad54-45bf5b5f8954\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z7ztp" Jan 31 09:30:55 crc kubenswrapper[4830]: I0131 09:30:55.237018 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh64h\" (UniqueName: \"kubernetes.io/projected/795ae09a-4f64-42d2-ad54-45bf5b5f8954-kube-api-access-zh64h\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-z7ztp\" (UID: \"795ae09a-4f64-42d2-ad54-45bf5b5f8954\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z7ztp" Jan 31 09:30:55 crc kubenswrapper[4830]: I0131 09:30:55.316414 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z7ztp" Jan 31 09:30:56 crc kubenswrapper[4830]: I0131 09:30:56.196113 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-z7ztp"] Jan 31 09:30:56 crc kubenswrapper[4830]: I0131 09:30:56.836699 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"11142d9d-4725-4a33-b10e-8fc21e30c6a3","Type":"ContainerStarted","Data":"d0030c37738303001bb5b73e4d53e68f558294f011c74bd432a37db7ecf790c0"} Jan 31 09:30:56 crc kubenswrapper[4830]: I0131 09:30:56.838745 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z7ztp" event={"ID":"795ae09a-4f64-42d2-ad54-45bf5b5f8954","Type":"ContainerStarted","Data":"818e29c513840d5a9f81518c88c1b999a77b0ea50212c0e0797c4e1354276428"} Jan 31 09:30:56 crc kubenswrapper[4830]: I0131 09:30:56.974170 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 09:30:57 crc kubenswrapper[4830]: I0131 09:30:57.855844 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z7ztp" event={"ID":"795ae09a-4f64-42d2-ad54-45bf5b5f8954","Type":"ContainerStarted","Data":"641e89b2f0a22ae26bc9d36850ae01f96c6e56e7b7992140704527b107afb593"} Jan 31 09:30:57 crc kubenswrapper[4830]: I0131 09:30:57.860516 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"11142d9d-4725-4a33-b10e-8fc21e30c6a3","Type":"ContainerStarted","Data":"3e82c0da826ef8b914e76bc5e06647be6197213ccf73e385f61fc62d267c8ade"} Jan 31 09:30:57 crc kubenswrapper[4830]: I0131 09:30:57.888133 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z7ztp" podStartSLOduration=3.132412076 podStartE2EDuration="3.888103543s" podCreationTimestamp="2026-01-31 09:30:54 +0000 UTC" firstStartedPulling="2026-01-31 09:30:56.211649395 +0000 UTC m=+1800.705011837" lastFinishedPulling="2026-01-31 09:30:56.967340862 +0000 UTC m=+1801.460703304" observedRunningTime="2026-01-31 09:30:57.881651908 +0000 UTC m=+1802.375014360" watchObservedRunningTime="2026-01-31 09:30:57.888103543 +0000 UTC m=+1802.381465985" Jan 31 09:30:58 crc kubenswrapper[4830]: I0131 09:30:58.882006 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"11142d9d-4725-4a33-b10e-8fc21e30c6a3","Type":"ContainerStarted","Data":"f3723fe1c13095dd1c4d1819bf92a2e64be83988efb1fcdd12406c9349ef8b60"} Jan 31 09:30:58 crc kubenswrapper[4830]: I0131 09:30:58.928866 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.493642699 podStartE2EDuration="6.927686154s" podCreationTimestamp="2026-01-31 09:30:52 +0000 UTC" firstStartedPulling="2026-01-31 09:30:53.882479998 +0000 UTC m=+1798.375842440" lastFinishedPulling="2026-01-31 09:30:58.316523453 +0000 UTC m=+1802.809885895" observedRunningTime="2026-01-31 09:30:58.907431634 +0000 UTC m=+1803.400794076" watchObservedRunningTime="2026-01-31 09:30:58.927686154 +0000 UTC m=+1803.421048586" Jan 31 09:31:00 crc kubenswrapper[4830]: I0131 09:31:00.907230 4830 generic.go:334] "Generic (PLEG): container finished" podID="795ae09a-4f64-42d2-ad54-45bf5b5f8954" containerID="641e89b2f0a22ae26bc9d36850ae01f96c6e56e7b7992140704527b107afb593" exitCode=0 Jan 31 09:31:00 crc kubenswrapper[4830]: I0131 
09:31:00.907361 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z7ztp" event={"ID":"795ae09a-4f64-42d2-ad54-45bf5b5f8954","Type":"ContainerDied","Data":"641e89b2f0a22ae26bc9d36850ae01f96c6e56e7b7992140704527b107afb593"} Jan 31 09:31:02 crc kubenswrapper[4830]: I0131 09:31:02.421117 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z7ztp" Jan 31 09:31:02 crc kubenswrapper[4830]: I0131 09:31:02.550236 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/795ae09a-4f64-42d2-ad54-45bf5b5f8954-ssh-key-openstack-edpm-ipam\") pod \"795ae09a-4f64-42d2-ad54-45bf5b5f8954\" (UID: \"795ae09a-4f64-42d2-ad54-45bf5b5f8954\") " Jan 31 09:31:02 crc kubenswrapper[4830]: I0131 09:31:02.550576 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/795ae09a-4f64-42d2-ad54-45bf5b5f8954-inventory\") pod \"795ae09a-4f64-42d2-ad54-45bf5b5f8954\" (UID: \"795ae09a-4f64-42d2-ad54-45bf5b5f8954\") " Jan 31 09:31:02 crc kubenswrapper[4830]: I0131 09:31:02.550679 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh64h\" (UniqueName: \"kubernetes.io/projected/795ae09a-4f64-42d2-ad54-45bf5b5f8954-kube-api-access-zh64h\") pod \"795ae09a-4f64-42d2-ad54-45bf5b5f8954\" (UID: \"795ae09a-4f64-42d2-ad54-45bf5b5f8954\") " Jan 31 09:31:02 crc kubenswrapper[4830]: I0131 09:31:02.558883 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/795ae09a-4f64-42d2-ad54-45bf5b5f8954-kube-api-access-zh64h" (OuterVolumeSpecName: "kube-api-access-zh64h") pod "795ae09a-4f64-42d2-ad54-45bf5b5f8954" (UID: "795ae09a-4f64-42d2-ad54-45bf5b5f8954"). InnerVolumeSpecName "kube-api-access-zh64h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:31:02 crc kubenswrapper[4830]: I0131 09:31:02.592776 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/795ae09a-4f64-42d2-ad54-45bf5b5f8954-inventory" (OuterVolumeSpecName: "inventory") pod "795ae09a-4f64-42d2-ad54-45bf5b5f8954" (UID: "795ae09a-4f64-42d2-ad54-45bf5b5f8954"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:31:02 crc kubenswrapper[4830]: I0131 09:31:02.612144 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/795ae09a-4f64-42d2-ad54-45bf5b5f8954-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "795ae09a-4f64-42d2-ad54-45bf5b5f8954" (UID: "795ae09a-4f64-42d2-ad54-45bf5b5f8954"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:31:02 crc kubenswrapper[4830]: I0131 09:31:02.654532 4830 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/795ae09a-4f64-42d2-ad54-45bf5b5f8954-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 09:31:02 crc kubenswrapper[4830]: I0131 09:31:02.654583 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh64h\" (UniqueName: \"kubernetes.io/projected/795ae09a-4f64-42d2-ad54-45bf5b5f8954-kube-api-access-zh64h\") on node \"crc\" DevicePath \"\"" Jan 31 09:31:02 crc kubenswrapper[4830]: I0131 09:31:02.654597 4830 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/795ae09a-4f64-42d2-ad54-45bf5b5f8954-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 09:31:02 crc kubenswrapper[4830]: I0131 09:31:02.934577 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z7ztp" event={"ID":"795ae09a-4f64-42d2-ad54-45bf5b5f8954","Type":"ContainerDied","Data":"818e29c513840d5a9f81518c88c1b999a77b0ea50212c0e0797c4e1354276428"} Jan 31 09:31:02 crc kubenswrapper[4830]: I0131 09:31:02.934691 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z7ztp" Jan 31 09:31:02 crc kubenswrapper[4830]: I0131 09:31:02.934871 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="818e29c513840d5a9f81518c88c1b999a77b0ea50212c0e0797c4e1354276428" Jan 31 09:31:03 crc kubenswrapper[4830]: I0131 09:31:03.045385 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2"] Jan 31 09:31:03 crc kubenswrapper[4830]: E0131 09:31:03.046608 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="795ae09a-4f64-42d2-ad54-45bf5b5f8954" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 31 09:31:03 crc kubenswrapper[4830]: I0131 09:31:03.046641 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="795ae09a-4f64-42d2-ad54-45bf5b5f8954" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 31 09:31:03 crc kubenswrapper[4830]: I0131 09:31:03.047015 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="795ae09a-4f64-42d2-ad54-45bf5b5f8954" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 31 09:31:03 crc kubenswrapper[4830]: I0131 09:31:03.048278 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2" Jan 31 09:31:03 crc kubenswrapper[4830]: I0131 09:31:03.052811 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vd24j" Jan 31 09:31:03 crc kubenswrapper[4830]: I0131 09:31:03.052877 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 09:31:03 crc kubenswrapper[4830]: I0131 09:31:03.052958 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 09:31:03 crc kubenswrapper[4830]: I0131 09:31:03.053404 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 09:31:03 crc kubenswrapper[4830]: I0131 09:31:03.064889 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2"] Jan 31 09:31:03 crc kubenswrapper[4830]: I0131 09:31:03.176693 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2drj4\" (UniqueName: \"kubernetes.io/projected/45dd1e1a-bac5-460f-9c7e-df3f8e11aa52-kube-api-access-2drj4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2\" (UID: \"45dd1e1a-bac5-460f-9c7e-df3f8e11aa52\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2" Jan 31 09:31:03 crc kubenswrapper[4830]: I0131 09:31:03.176868 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45dd1e1a-bac5-460f-9c7e-df3f8e11aa52-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2\" (UID: \"45dd1e1a-bac5-460f-9c7e-df3f8e11aa52\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2" Jan 31 09:31:03 crc kubenswrapper[4830]: I0131 09:31:03.176966 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45dd1e1a-bac5-460f-9c7e-df3f8e11aa52-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2\" (UID: \"45dd1e1a-bac5-460f-9c7e-df3f8e11aa52\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2" Jan 31 09:31:03 crc kubenswrapper[4830]: I0131 09:31:03.177113 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45dd1e1a-bac5-460f-9c7e-df3f8e11aa52-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2\" (UID: \"45dd1e1a-bac5-460f-9c7e-df3f8e11aa52\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2" Jan 31 09:31:03 crc kubenswrapper[4830]: I0131 09:31:03.279649 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45dd1e1a-bac5-460f-9c7e-df3f8e11aa52-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2\" (UID: \"45dd1e1a-bac5-460f-9c7e-df3f8e11aa52\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2" Jan 31 09:31:03 crc kubenswrapper[4830]: I0131 09:31:03.279803 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2drj4\" (UniqueName: 
\"kubernetes.io/projected/45dd1e1a-bac5-460f-9c7e-df3f8e11aa52-kube-api-access-2drj4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2\" (UID: \"45dd1e1a-bac5-460f-9c7e-df3f8e11aa52\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2" Jan 31 09:31:03 crc kubenswrapper[4830]: I0131 09:31:03.279876 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45dd1e1a-bac5-460f-9c7e-df3f8e11aa52-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2\" (UID: \"45dd1e1a-bac5-460f-9c7e-df3f8e11aa52\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2" Jan 31 09:31:03 crc kubenswrapper[4830]: I0131 09:31:03.279964 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45dd1e1a-bac5-460f-9c7e-df3f8e11aa52-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2\" (UID: \"45dd1e1a-bac5-460f-9c7e-df3f8e11aa52\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2" Jan 31 09:31:03 crc kubenswrapper[4830]: I0131 09:31:03.290250 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45dd1e1a-bac5-460f-9c7e-df3f8e11aa52-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2\" (UID: \"45dd1e1a-bac5-460f-9c7e-df3f8e11aa52\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2" Jan 31 09:31:03 crc kubenswrapper[4830]: I0131 09:31:03.290259 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45dd1e1a-bac5-460f-9c7e-df3f8e11aa52-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2\" (UID: \"45dd1e1a-bac5-460f-9c7e-df3f8e11aa52\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2" Jan 31 09:31:03 crc kubenswrapper[4830]: I0131 09:31:03.295418 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45dd1e1a-bac5-460f-9c7e-df3f8e11aa52-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2\" (UID: \"45dd1e1a-bac5-460f-9c7e-df3f8e11aa52\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2" Jan 31 09:31:03 crc kubenswrapper[4830]: I0131 09:31:03.299055 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2drj4\" (UniqueName: \"kubernetes.io/projected/45dd1e1a-bac5-460f-9c7e-df3f8e11aa52-kube-api-access-2drj4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2\" (UID: \"45dd1e1a-bac5-460f-9c7e-df3f8e11aa52\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2" Jan 31 09:31:03 crc kubenswrapper[4830]: I0131 09:31:03.376770 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2" Jan 31 09:31:04 crc kubenswrapper[4830]: I0131 09:31:04.008001 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2"] Jan 31 09:31:04 crc kubenswrapper[4830]: I0131 09:31:04.961771 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2" event={"ID":"45dd1e1a-bac5-460f-9c7e-df3f8e11aa52","Type":"ContainerStarted","Data":"09a30a36459d23a3bf1c95b5431b60063f6f40f00a77ad968bccb725c174b8d9"} Jan 31 09:31:04 crc kubenswrapper[4830]: I0131 09:31:04.962380 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2" event={"ID":"45dd1e1a-bac5-460f-9c7e-df3f8e11aa52","Type":"ContainerStarted","Data":"6ac87c2f5cf8156bc6c5052f448d7785f17cc9c32c2cf9fea511360b8523b31a"} Jan 31 09:31:04 crc kubenswrapper[4830]: I0131 09:31:04.995356 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2" podStartSLOduration=1.537995232 podStartE2EDuration="1.99532811s" podCreationTimestamp="2026-01-31 09:31:03 +0000 UTC" firstStartedPulling="2026-01-31 09:31:04.014627144 +0000 UTC m=+1808.507989586" lastFinishedPulling="2026-01-31 09:31:04.471960022 +0000 UTC m=+1808.965322464" observedRunningTime="2026-01-31 09:31:04.982443502 +0000 UTC m=+1809.475805944" watchObservedRunningTime="2026-01-31 09:31:04.99532811 +0000 UTC m=+1809.488690552" Jan 31 09:31:09 crc kubenswrapper[4830]: I0131 09:31:09.621897 4830 scope.go:117] "RemoveContainer" containerID="68201a955abf6cbbb906e8ea8b01b1ed9f44aea832f898360508559e3d2781fe" Jan 31 09:31:09 crc kubenswrapper[4830]: I0131 09:31:09.657377 4830 scope.go:117] "RemoveContainer" containerID="eabcd1235c056e6d23ed658dd38e5f2e72bac2c473103f6f3e4acd1aa0dacec8" Jan 31 09:31:09 crc kubenswrapper[4830]: I0131 09:31:09.743501 4830 scope.go:117] "RemoveContainer" containerID="7c6922b39c4dd9c7624db328248b385cabff90417731eff072afdbd30b6ab102" Jan 31 09:32:09 crc kubenswrapper[4830]: I0131 09:32:09.914711 4830 scope.go:117] "RemoveContainer" containerID="432786d33d3771aab5e5d32e3cafd9b8a281299a22963e8a340e9dc5bdc1494a" Jan 31 09:32:10 crc kubenswrapper[4830]: I0131 09:32:10.563528 4830 scope.go:117] "RemoveContainer" containerID="cb4a05ac9302c7356f4830d38a00f8f941d688e43deecb9bdbf3ea14257b5c5e" Jan 31 09:32:10 crc kubenswrapper[4830]: I0131 09:32:10.596040 4830 scope.go:117] "RemoveContainer" containerID="410d3fa387ba52fc900df14a4ccefea9f4c22babba4e0a3efb0d6b88d925adb6" Jan 31 09:32:10 crc kubenswrapper[4830]: I0131 09:32:10.623053 4830 scope.go:117] "RemoveContainer" containerID="00d9abf46523e252c342902e9571685e6008daa60278da5c785f45f9d550fc4b" Jan 31 09:33:00 crc kubenswrapper[4830]: I0131 09:33:00.051851 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-571a-account-create-update-95fgz"] Jan 31 09:33:00 crc kubenswrapper[4830]: I0131 09:33:00.064133 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-571a-account-create-update-95fgz"] Jan 31 09:33:00 crc kubenswrapper[4830]: I0131 09:33:00.075500 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-xvrr7"] Jan 31 09:33:00 crc kubenswrapper[4830]: I0131 09:33:00.088439 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/mysqld-exporter-openstack-db-create-xvrr7"] Jan 31 09:33:00 crc kubenswrapper[4830]: I0131 09:33:00.265903 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c5176e5-9abf-4ae4-b4da-4b50704cb0a4" path="/var/lib/kubelet/pods/8c5176e5-9abf-4ae4-b4da-4b50704cb0a4/volumes" Jan 31 09:33:00 crc kubenswrapper[4830]: I0131 09:33:00.266694 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c9dab0c-38e9-435b-8d48-9dfdaa4af87b" path="/var/lib/kubelet/pods/8c9dab0c-38e9-435b-8d48-9dfdaa4af87b/volumes" Jan 31 09:33:04 crc kubenswrapper[4830]: I0131 09:33:04.037222 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-56987"] Jan 31 09:33:04 crc kubenswrapper[4830]: I0131 09:33:04.052604 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-56987"] Jan 31 09:33:04 crc kubenswrapper[4830]: I0131 09:33:04.282289 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5" path="/var/lib/kubelet/pods/cf11bdf9-7bbe-4713-9711-e6aff7e0c0c5/volumes" Jan 31 09:33:05 crc kubenswrapper[4830]: I0131 09:33:05.041701 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-sqm2h"] Jan 31 09:33:05 crc kubenswrapper[4830]: I0131 09:33:05.059200 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-04a9-account-create-update-xswbl"] Jan 31 09:33:05 crc kubenswrapper[4830]: I0131 09:33:05.090067 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-1d60-account-create-update-vbk6p"] Jan 31 09:33:05 crc kubenswrapper[4830]: I0131 09:33:05.105778 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-wfp8z"] Jan 31 09:33:05 crc kubenswrapper[4830]: I0131 09:33:05.121076 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-04a9-account-create-update-xswbl"] Jan 31 09:33:05 crc kubenswrapper[4830]: I0131 09:33:05.131158 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-sqm2h"] Jan 31 09:33:05 crc kubenswrapper[4830]: I0131 09:33:05.142741 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-wfp8z"] Jan 31 09:33:05 crc kubenswrapper[4830]: I0131 09:33:05.156501 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a9ea-account-create-update-cvwgh"] Jan 31 09:33:05 crc kubenswrapper[4830]: I0131 09:33:05.170053 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-1d60-account-create-update-vbk6p"] Jan 31 09:33:05 crc kubenswrapper[4830]: I0131 09:33:05.182878 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a9ea-account-create-update-cvwgh"] Jan 31 09:33:06 crc kubenswrapper[4830]: I0131 09:33:06.272516 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2830665c-1d23-4c36-8324-7362068ae08f" path="/var/lib/kubelet/pods/2830665c-1d23-4c36-8324-7362068ae08f/volumes" Jan 31 09:33:06 crc kubenswrapper[4830]: I0131 09:33:06.274519 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c" path="/var/lib/kubelet/pods/2d4e564d-bb74-4fb4-b180-bdd6c81a3d6c/volumes" Jan 31 09:33:06 crc kubenswrapper[4830]: I0131 09:33:06.281197 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8af6a38f-c8ba-464d-acd5-417848530657" 
path="/var/lib/kubelet/pods/8af6a38f-c8ba-464d-acd5-417848530657/volumes" Jan 31 09:33:06 crc kubenswrapper[4830]: I0131 09:33:06.282814 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4141b8b-513a-4210-9abd-bfba363d6986" path="/var/lib/kubelet/pods/b4141b8b-513a-4210-9abd-bfba363d6986/volumes" Jan 31 09:33:06 crc kubenswrapper[4830]: I0131 09:33:06.284219 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca85abf4-a6ba-4080-a544-fcce2de88b2b" path="/var/lib/kubelet/pods/ca85abf4-a6ba-4080-a544-fcce2de88b2b/volumes" Jan 31 09:33:07 crc kubenswrapper[4830]: I0131 09:33:07.043059 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-8sjj4"] Jan 31 09:33:07 crc kubenswrapper[4830]: I0131 09:33:07.053920 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-c662-account-create-update-rwxdv"] Jan 31 09:33:07 crc kubenswrapper[4830]: I0131 09:33:07.071831 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-8sjj4"] Jan 31 09:33:07 crc kubenswrapper[4830]: I0131 09:33:07.087877 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-c662-account-create-update-rwxdv"] Jan 31 09:33:08 crc kubenswrapper[4830]: I0131 09:33:08.293738 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="035a3263-c7af-45d8-a14c-5b86e594c818" path="/var/lib/kubelet/pods/035a3263-c7af-45d8-a14c-5b86e594c818/volumes" Jan 31 09:33:08 crc kubenswrapper[4830]: I0131 09:33:08.298307 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73de016b-d1c0-45cf-b3a6-fe6d3138f630" path="/var/lib/kubelet/pods/73de016b-d1c0-45cf-b3a6-fe6d3138f630/volumes" Jan 31 09:33:10 crc kubenswrapper[4830]: I0131 09:33:10.683803 4830 scope.go:117] "RemoveContainer" containerID="19ba30438c2d9de9341594c603a0643b6578f2d4736c1e125ccc4407bbf7a309" Jan 31 09:33:10 crc kubenswrapper[4830]: I0131 09:33:10.719932 4830 scope.go:117] "RemoveContainer" containerID="1edad15e2be8ae446fa14322acc01be55a42b8ced2bea799c4d280293645b05a" Jan 31 09:33:10 crc kubenswrapper[4830]: I0131 09:33:10.794926 4830 scope.go:117] "RemoveContainer" containerID="8d1c4eee78341f44f679b500f0d207a5e474b317db60cd624ef0f401abd5b231" Jan 31 09:33:10 crc kubenswrapper[4830]: I0131 09:33:10.850908 4830 scope.go:117] "RemoveContainer" containerID="9496b9bef2552761732cd1b337753278cf6c9a5d77c6293d395c3002a513b34c" Jan 31 09:33:10 crc kubenswrapper[4830]: I0131 09:33:10.913977 4830 scope.go:117] "RemoveContainer" containerID="1a540976789812b4e3da15c2e7ea712bb4f6a503080de42ca1fa2374180f34fd" Jan 31 09:33:10 crc kubenswrapper[4830]: I0131 09:33:10.973562 4830 scope.go:117] "RemoveContainer" containerID="08726d443e962f560c4c7a00bb3fbc90b8bf85df9df11a78cc3cff705f1ab571" Jan 31 09:33:11 crc kubenswrapper[4830]: I0131 09:33:11.037501 4830 scope.go:117] "RemoveContainer" containerID="279ba1094f266cb5d4dae197ff642f8341a0402b74dcfa2eace5939eb69a0e7d" Jan 31 09:33:11 crc kubenswrapper[4830]: I0131 09:33:11.071296 4830 scope.go:117] "RemoveContainer" containerID="1f7f3d7c70997ea5294afc049daa565ad1580abd8255e828795e85adf50aada4" Jan 31 09:33:11 crc kubenswrapper[4830]: I0131 09:33:11.095128 4830 scope.go:117] "RemoveContainer" containerID="01206d5478a26bc2285e3b5be49ac89f5002949ad540ee4794b6867baaa5d0fd" Jan 31 09:33:11 crc kubenswrapper[4830]: I0131 09:33:11.120008 4830 scope.go:117] "RemoveContainer" 
containerID="5084136b971b4112f29d18b0dfab4a73eb35cf3507d380929ff5ba9bb1967c39" Jan 31 09:33:11 crc kubenswrapper[4830]: I0131 09:33:11.145049 4830 scope.go:117] "RemoveContainer" containerID="e3131aa63899f34c9258a9403856ddb9e084db8fda9d9677b7fef5eeb6a7b503" Jan 31 09:33:11 crc kubenswrapper[4830]: I0131 09:33:11.173942 4830 scope.go:117] "RemoveContainer" containerID="ba6dfcf1e350a5368d73f6940d4dd5634d85299c29af73e7b546e020eb54c2d9" Jan 31 09:33:11 crc kubenswrapper[4830]: I0131 09:33:11.228323 4830 scope.go:117] "RemoveContainer" containerID="840c1bade5bb6a14f7f1be80e3774d07a66c9e1933e911852e4d990baa6d6bda" Jan 31 09:33:11 crc kubenswrapper[4830]: I0131 09:33:11.252526 4830 scope.go:117] "RemoveContainer" containerID="988a85f4c50f4c98a103f88f1face3040057a4ba445958e813df2ce5f514b4e1" Jan 31 09:33:14 crc kubenswrapper[4830]: I0131 09:33:14.353371 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:33:14 crc kubenswrapper[4830]: I0131 09:33:14.353702 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:33:17 crc kubenswrapper[4830]: I0131 09:33:17.060039 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-22bk9"] Jan 31 09:33:17 crc kubenswrapper[4830]: I0131 09:33:17.075450 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-22bk9"] Jan 31 09:33:18 crc kubenswrapper[4830]: I0131 09:33:18.267773 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef16ab0e-944c-4b5c-9203-e15202c4a3eb" path="/var/lib/kubelet/pods/ef16ab0e-944c-4b5c-9203-e15202c4a3eb/volumes" Jan 31 09:33:42 crc kubenswrapper[4830]: I0131 09:33:42.690247 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7brtk"] Jan 31 09:33:42 crc kubenswrapper[4830]: I0131 09:33:42.693712 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7brtk" Jan 31 09:33:42 crc kubenswrapper[4830]: I0131 09:33:42.712820 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7brtk"] Jan 31 09:33:42 crc kubenswrapper[4830]: I0131 09:33:42.861790 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wkjh\" (UniqueName: \"kubernetes.io/projected/2de80243-0f12-4bf7-8f32-c7f13fa8a118-kube-api-access-5wkjh\") pod \"redhat-operators-7brtk\" (UID: \"2de80243-0f12-4bf7-8f32-c7f13fa8a118\") " pod="openshift-marketplace/redhat-operators-7brtk" Jan 31 09:33:42 crc kubenswrapper[4830]: I0131 09:33:42.861862 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2de80243-0f12-4bf7-8f32-c7f13fa8a118-catalog-content\") pod \"redhat-operators-7brtk\" (UID: \"2de80243-0f12-4bf7-8f32-c7f13fa8a118\") " pod="openshift-marketplace/redhat-operators-7brtk" Jan 31 09:33:42 crc kubenswrapper[4830]: I0131 09:33:42.861888 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2de80243-0f12-4bf7-8f32-c7f13fa8a118-utilities\") pod \"redhat-operators-7brtk\" (UID: \"2de80243-0f12-4bf7-8f32-c7f13fa8a118\") " pod="openshift-marketplace/redhat-operators-7brtk" Jan 31 09:33:42 crc kubenswrapper[4830]: I0131 09:33:42.964828 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wkjh\" (UniqueName: \"kubernetes.io/projected/2de80243-0f12-4bf7-8f32-c7f13fa8a118-kube-api-access-5wkjh\") pod \"redhat-operators-7brtk\" (UID: \"2de80243-0f12-4bf7-8f32-c7f13fa8a118\") " pod="openshift-marketplace/redhat-operators-7brtk" Jan 31 09:33:42 crc kubenswrapper[4830]: I0131 09:33:42.964890 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2de80243-0f12-4bf7-8f32-c7f13fa8a118-catalog-content\") pod \"redhat-operators-7brtk\" (UID: \"2de80243-0f12-4bf7-8f32-c7f13fa8a118\") " pod="openshift-marketplace/redhat-operators-7brtk" Jan 31 09:33:42 crc kubenswrapper[4830]: I0131 09:33:42.964914 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2de80243-0f12-4bf7-8f32-c7f13fa8a118-utilities\") pod \"redhat-operators-7brtk\" (UID: \"2de80243-0f12-4bf7-8f32-c7f13fa8a118\") " pod="openshift-marketplace/redhat-operators-7brtk" Jan 31 09:33:42 crc kubenswrapper[4830]: I0131 09:33:42.965526 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2de80243-0f12-4bf7-8f32-c7f13fa8a118-utilities\") pod \"redhat-operators-7brtk\" (UID: \"2de80243-0f12-4bf7-8f32-c7f13fa8a118\") " pod="openshift-marketplace/redhat-operators-7brtk" Jan 31 09:33:42 crc kubenswrapper[4830]: I0131 09:33:42.965568 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2de80243-0f12-4bf7-8f32-c7f13fa8a118-catalog-content\") pod \"redhat-operators-7brtk\" (UID: \"2de80243-0f12-4bf7-8f32-c7f13fa8a118\") " pod="openshift-marketplace/redhat-operators-7brtk" Jan 31 09:33:42 crc kubenswrapper[4830]: I0131 09:33:42.987484 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5wkjh\" (UniqueName: \"kubernetes.io/projected/2de80243-0f12-4bf7-8f32-c7f13fa8a118-kube-api-access-5wkjh\") pod \"redhat-operators-7brtk\" (UID: \"2de80243-0f12-4bf7-8f32-c7f13fa8a118\") " pod="openshift-marketplace/redhat-operators-7brtk" Jan 31 09:33:43 crc kubenswrapper[4830]: I0131 09:33:43.023602 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7brtk" Jan 31 09:33:43 crc kubenswrapper[4830]: I0131 09:33:43.577338 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7brtk"] Jan 31 09:33:43 crc kubenswrapper[4830]: I0131 09:33:43.976277 4830 generic.go:334] "Generic (PLEG): container finished" podID="2de80243-0f12-4bf7-8f32-c7f13fa8a118" containerID="6caa06dfe58b3b32c52f8b6d3af154ac9287bd8bcfe6527d4d466354763bb3af" exitCode=0 Jan 31 09:33:43 crc kubenswrapper[4830]: I0131 09:33:43.976638 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7brtk" event={"ID":"2de80243-0f12-4bf7-8f32-c7f13fa8a118","Type":"ContainerDied","Data":"6caa06dfe58b3b32c52f8b6d3af154ac9287bd8bcfe6527d4d466354763bb3af"} Jan 31 09:33:43 crc kubenswrapper[4830]: I0131 09:33:43.976687 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7brtk" event={"ID":"2de80243-0f12-4bf7-8f32-c7f13fa8a118","Type":"ContainerStarted","Data":"a8ad1705ca59b820942597e75314b0050401357dee6375c9b1dc808179ae35d7"} Jan 31 09:33:43 crc kubenswrapper[4830]: I0131 09:33:43.979595 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 09:33:44 crc kubenswrapper[4830]: I0131 09:33:44.356561 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:33:44 crc kubenswrapper[4830]: I0131 09:33:44.356628 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:33:45 crc kubenswrapper[4830]: I0131 09:33:45.062354 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-7jb5m"] Jan 31 09:33:45 crc kubenswrapper[4830]: I0131 09:33:45.085283 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-3e38-account-create-update-4vpgq"] Jan 31 09:33:45 crc kubenswrapper[4830]: I0131 09:33:45.105800 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-7jb5m"] Jan 31 09:33:45 crc kubenswrapper[4830]: I0131 09:33:45.124398 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-3e38-account-create-update-4vpgq"] Jan 31 09:33:46 crc kubenswrapper[4830]: I0131 09:33:46.003826 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7brtk" event={"ID":"2de80243-0f12-4bf7-8f32-c7f13fa8a118","Type":"ContainerStarted","Data":"86131869bb37ff7a26b0f2e97b7b804cf9a2c34dce496445eb5751569810ab99"} Jan 31 09:33:46 crc kubenswrapper[4830]: I0131 09:33:46.059952 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-db-create-w9h2w"] Jan 31 09:33:46 crc kubenswrapper[4830]: I0131 09:33:46.074641 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-w9h2w"] Jan 31 09:33:46 crc kubenswrapper[4830]: I0131 09:33:46.299850 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2af2731d-2c7c-46c2-abcc-4846583de531" path="/var/lib/kubelet/pods/2af2731d-2c7c-46c2-abcc-4846583de531/volumes" Jan 31 09:33:46 crc kubenswrapper[4830]: I0131 09:33:46.306865 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52fdb459-dc6a-4e56-8a6b-379d4c74ce62" path="/var/lib/kubelet/pods/52fdb459-dc6a-4e56-8a6b-379d4c74ce62/volumes" Jan 31 09:33:46 crc kubenswrapper[4830]: I0131 09:33:46.307702 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce165a30-da01-4e57-996c-de05fbe74498" path="/var/lib/kubelet/pods/ce165a30-da01-4e57-996c-de05fbe74498/volumes" Jan 31 09:33:47 crc kubenswrapper[4830]: I0131 09:33:47.053955 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-w7rt2"] Jan 31 09:33:47 crc kubenswrapper[4830]: I0131 09:33:47.068444 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-cn5jd"] Jan 31 09:33:47 crc kubenswrapper[4830]: I0131 09:33:47.088399 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-87e0-account-create-update-6vbtx"] Jan 31 09:33:47 crc kubenswrapper[4830]: I0131 09:33:47.104609 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-541b-account-create-update-pssj9"] Jan 31 09:33:47 crc kubenswrapper[4830]: I0131 09:33:47.119087 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-cn5jd"] Jan 31 09:33:47 crc kubenswrapper[4830]: I0131 09:33:47.133626 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-87e0-account-create-update-6vbtx"] Jan 31 09:33:47 crc kubenswrapper[4830]: I0131 09:33:47.148287 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-541b-account-create-update-pssj9"] Jan 31 09:33:47 crc kubenswrapper[4830]: I0131 09:33:47.161150 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-w7rt2"] Jan 31 09:33:47 crc kubenswrapper[4830]: I0131 09:33:47.175337 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-b610-account-create-update-c8ck9"] Jan 31 09:33:47 crc kubenswrapper[4830]: I0131 09:33:47.186680 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-b610-account-create-update-c8ck9"] Jan 31 09:33:48 crc kubenswrapper[4830]: I0131 09:33:48.273130 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d8e473a-4e99-400b-be95-bd490bd2228b" path="/var/lib/kubelet/pods/1d8e473a-4e99-400b-be95-bd490bd2228b/volumes" Jan 31 09:33:48 crc kubenswrapper[4830]: I0131 09:33:48.274493 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4140bbd2-fcdd-482d-9224-5248d75e4317" path="/var/lib/kubelet/pods/4140bbd2-fcdd-482d-9224-5248d75e4317/volumes" Jan 31 09:33:48 crc kubenswrapper[4830]: I0131 09:33:48.276001 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="528add2c-0e7d-4050-a900-0970487688f3" path="/var/lib/kubelet/pods/528add2c-0e7d-4050-a900-0970487688f3/volumes" Jan 31 09:33:48 crc kubenswrapper[4830]: I0131 09:33:48.277067 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8e66d083-dfc7-41d1-b955-752fdc14a3c2" path="/var/lib/kubelet/pods/8e66d083-dfc7-41d1-b955-752fdc14a3c2/volumes" Jan 31 09:33:48 crc kubenswrapper[4830]: I0131 09:33:48.278475 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1444b15-29b5-4433-8ea5-4b533b54f08a" path="/var/lib/kubelet/pods/e1444b15-29b5-4433-8ea5-4b533b54f08a/volumes" Jan 31 09:33:51 crc kubenswrapper[4830]: I0131 09:33:51.042555 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-bktdp"] Jan 31 09:33:51 crc kubenswrapper[4830]: I0131 09:33:51.057012 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-bktdp"] Jan 31 09:33:51 crc kubenswrapper[4830]: I0131 09:33:51.081437 4830 generic.go:334] "Generic (PLEG): container finished" podID="2de80243-0f12-4bf7-8f32-c7f13fa8a118" containerID="86131869bb37ff7a26b0f2e97b7b804cf9a2c34dce496445eb5751569810ab99" exitCode=0 Jan 31 09:33:51 crc kubenswrapper[4830]: I0131 09:33:51.081837 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7brtk" event={"ID":"2de80243-0f12-4bf7-8f32-c7f13fa8a118","Type":"ContainerDied","Data":"86131869bb37ff7a26b0f2e97b7b804cf9a2c34dce496445eb5751569810ab99"} Jan 31 09:33:52 crc kubenswrapper[4830]: I0131 09:33:52.267289 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42eafeb6-68c0-479b-bc77-62967566390e" path="/var/lib/kubelet/pods/42eafeb6-68c0-479b-bc77-62967566390e/volumes" Jan 31 09:33:53 crc kubenswrapper[4830]: I0131 09:33:53.105970 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7brtk" event={"ID":"2de80243-0f12-4bf7-8f32-c7f13fa8a118","Type":"ContainerStarted","Data":"d31c85e4a1ad13dd1a94eb44c51b404e181072cd1ae8cf20dddf10af376d914d"} Jan 31 09:33:53 crc kubenswrapper[4830]: I0131 09:33:53.128315 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7brtk" podStartSLOduration=3.346755293 podStartE2EDuration="11.12828795s" podCreationTimestamp="2026-01-31 09:33:42 +0000 UTC" firstStartedPulling="2026-01-31 09:33:43.979286714 +0000 UTC m=+1968.472649156" lastFinishedPulling="2026-01-31 09:33:51.760819371 +0000 UTC m=+1976.254181813" observedRunningTime="2026-01-31 09:33:53.124715529 +0000 UTC m=+1977.618077981" watchObservedRunningTime="2026-01-31 09:33:53.12828795 +0000 UTC m=+1977.621650392" Jan 31 09:33:56 crc kubenswrapper[4830]: I0131 09:33:56.035236 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-hccz6"] Jan 31 09:33:56 crc kubenswrapper[4830]: I0131 09:33:56.049826 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-hccz6"] Jan 31 09:33:56 crc kubenswrapper[4830]: I0131 09:33:56.276441 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f97b7b49-f0d6-4f7c-a8ed-792cbfa32504" path="/var/lib/kubelet/pods/f97b7b49-f0d6-4f7c-a8ed-792cbfa32504/volumes" Jan 31 09:34:03 crc kubenswrapper[4830]: I0131 09:34:03.023862 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7brtk" Jan 31 09:34:03 crc kubenswrapper[4830]: I0131 09:34:03.024640 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7brtk" Jan 31 09:34:04 crc kubenswrapper[4830]: I0131 09:34:04.084489 4830 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-7brtk" podUID="2de80243-0f12-4bf7-8f32-c7f13fa8a118" containerName="registry-server" probeResult="failure" output=< Jan 31 09:34:04 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 09:34:04 crc kubenswrapper[4830]: > Jan 31 09:34:11 crc kubenswrapper[4830]: I0131 09:34:11.535436 4830 scope.go:117] "RemoveContainer" containerID="1c3087de53b5cdfe65bc7c2cefe319da0b574ee0938861f2361426e1b163eed1" Jan 31 09:34:11 crc kubenswrapper[4830]: I0131 09:34:11.569868 4830 scope.go:117] "RemoveContainer" containerID="cd09f39c8b606e0207ce294d7fcfea1783f3cddd44a037ff3ed316fd176521a6" Jan 31 09:34:11 crc kubenswrapper[4830]: I0131 09:34:11.646315 4830 scope.go:117] "RemoveContainer" containerID="67905eb9bab90d265771b80fa447edbfdada0d23a9a8ebd6c567074c7d71c248" Jan 31 09:34:11 crc kubenswrapper[4830]: I0131 09:34:11.727644 4830 scope.go:117] "RemoveContainer" containerID="a9fea2162be47a2617675b01ee25975cc0f969ac4085fe30a352a60229108deb" Jan 31 09:34:11 crc kubenswrapper[4830]: I0131 09:34:11.797889 4830 scope.go:117] "RemoveContainer" containerID="6f37797019de65423359308de85954a8c167fc047dac50a8bb217196a6d744b8" Jan 31 09:34:11 crc kubenswrapper[4830]: I0131 09:34:11.865793 4830 scope.go:117] "RemoveContainer" containerID="2425a143d49eee8c420681009c13f58ef81bd146f09c31561f96fc2adad60cab" Jan 31 09:34:11 crc kubenswrapper[4830]: I0131 09:34:11.928439 4830 scope.go:117] "RemoveContainer" containerID="92ba01ee4673a32e19630a1774c6b842ef9680ec14c9c3a47683549e748e3cf0" Jan 31 09:34:11 crc kubenswrapper[4830]: I0131 09:34:11.969555 4830 scope.go:117] "RemoveContainer" containerID="c11a8c154f7fd5ec5bd847b50555ad9c581e8302954c26162b88dd6d00ba2007" Jan 31 09:34:12 crc kubenswrapper[4830]: I0131 09:34:12.000152 4830 scope.go:117] "RemoveContainer" containerID="948d37a0adf9cf63fe9c284791563ef7e989cc67ae703351cf0e15dccc7ba20e" Jan 31 09:34:12 crc kubenswrapper[4830]: I0131 09:34:12.028922 4830 scope.go:117] "RemoveContainer" containerID="cbe92bf9a4067d1c05c9f7af4a36a6499a7ce9ba145c65a23dc54f836a20bb44" Jan 31 09:34:12 crc kubenswrapper[4830]: I0131 09:34:12.055554 4830 scope.go:117] "RemoveContainer" containerID="b44444dadc89a2a407b63173c75f695b101f9bbf2eafcb9d8f114430787f4991" Jan 31 09:34:12 crc kubenswrapper[4830]: I0131 09:34:12.076447 4830 scope.go:117] "RemoveContainer" containerID="81ad4efb4d6ce18fe32a6ba05e07e2b4ad4998b237f01d1422dedc8b209c0f22" Jan 31 09:34:14 crc kubenswrapper[4830]: I0131 09:34:14.084315 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7brtk" podUID="2de80243-0f12-4bf7-8f32-c7f13fa8a118" containerName="registry-server" probeResult="failure" output=< Jan 31 09:34:14 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 09:34:14 crc kubenswrapper[4830]: > Jan 31 09:34:14 crc kubenswrapper[4830]: I0131 09:34:14.352966 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:34:14 crc kubenswrapper[4830]: I0131 09:34:14.353053 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:34:14 crc kubenswrapper[4830]: I0131 09:34:14.353112 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 09:34:14 crc kubenswrapper[4830]: I0131 09:34:14.354460 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1bae58408ac9eb8b3a90089200da7949f59de4442790857fb510d734d497929e"} pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 09:34:14 crc kubenswrapper[4830]: I0131 09:34:14.354553 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" containerID="cri-o://1bae58408ac9eb8b3a90089200da7949f59de4442790857fb510d734d497929e" gracePeriod=600 Jan 31 09:34:15 crc kubenswrapper[4830]: I0131 09:34:15.395489 4830 generic.go:334] "Generic (PLEG): container finished" podID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerID="1bae58408ac9eb8b3a90089200da7949f59de4442790857fb510d734d497929e" exitCode=0 Jan 31 09:34:15 crc kubenswrapper[4830]: I0131 09:34:15.395579 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerDied","Data":"1bae58408ac9eb8b3a90089200da7949f59de4442790857fb510d734d497929e"} Jan 31 09:34:15 crc kubenswrapper[4830]: I0131 09:34:15.396417 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerStarted","Data":"5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8"} Jan 31 09:34:15 crc kubenswrapper[4830]: I0131 09:34:15.396455 4830 scope.go:117] "RemoveContainer" containerID="a04fad3617a9e38076099693ce6bd6f0b7e1a9b845b3b8a22acffddfa772e8f0" Jan 31 09:34:23 crc kubenswrapper[4830]: I0131 09:34:23.081755 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7brtk" Jan 31 09:34:23 crc kubenswrapper[4830]: I0131 09:34:23.141749 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7brtk" Jan 31 09:34:23 crc kubenswrapper[4830]: I0131 09:34:23.329970 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7brtk"] Jan 31 09:34:24 crc kubenswrapper[4830]: I0131 09:34:24.511563 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7brtk" podUID="2de80243-0f12-4bf7-8f32-c7f13fa8a118" containerName="registry-server" containerID="cri-o://d31c85e4a1ad13dd1a94eb44c51b404e181072cd1ae8cf20dddf10af376d914d" gracePeriod=2 Jan 31 09:34:25 crc kubenswrapper[4830]: I0131 09:34:25.157773 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7brtk" Jan 31 09:34:25 crc kubenswrapper[4830]: I0131 09:34:25.261977 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2de80243-0f12-4bf7-8f32-c7f13fa8a118-utilities\") pod \"2de80243-0f12-4bf7-8f32-c7f13fa8a118\" (UID: \"2de80243-0f12-4bf7-8f32-c7f13fa8a118\") " Jan 31 09:34:25 crc kubenswrapper[4830]: I0131 09:34:25.262435 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wkjh\" (UniqueName: \"kubernetes.io/projected/2de80243-0f12-4bf7-8f32-c7f13fa8a118-kube-api-access-5wkjh\") pod \"2de80243-0f12-4bf7-8f32-c7f13fa8a118\" (UID: \"2de80243-0f12-4bf7-8f32-c7f13fa8a118\") " Jan 31 09:34:25 crc kubenswrapper[4830]: I0131 09:34:25.262484 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2de80243-0f12-4bf7-8f32-c7f13fa8a118-catalog-content\") pod \"2de80243-0f12-4bf7-8f32-c7f13fa8a118\" (UID: \"2de80243-0f12-4bf7-8f32-c7f13fa8a118\") " Jan 31 09:34:25 crc kubenswrapper[4830]: I0131 09:34:25.262930 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2de80243-0f12-4bf7-8f32-c7f13fa8a118-utilities" (OuterVolumeSpecName: "utilities") pod "2de80243-0f12-4bf7-8f32-c7f13fa8a118" (UID: "2de80243-0f12-4bf7-8f32-c7f13fa8a118"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:34:25 crc kubenswrapper[4830]: I0131 09:34:25.263478 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2de80243-0f12-4bf7-8f32-c7f13fa8a118-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:34:25 crc kubenswrapper[4830]: I0131 09:34:25.271255 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2de80243-0f12-4bf7-8f32-c7f13fa8a118-kube-api-access-5wkjh" (OuterVolumeSpecName: "kube-api-access-5wkjh") pod "2de80243-0f12-4bf7-8f32-c7f13fa8a118" (UID: "2de80243-0f12-4bf7-8f32-c7f13fa8a118"). InnerVolumeSpecName "kube-api-access-5wkjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:34:25 crc kubenswrapper[4830]: I0131 09:34:25.368020 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wkjh\" (UniqueName: \"kubernetes.io/projected/2de80243-0f12-4bf7-8f32-c7f13fa8a118-kube-api-access-5wkjh\") on node \"crc\" DevicePath \"\"" Jan 31 09:34:25 crc kubenswrapper[4830]: I0131 09:34:25.390284 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2de80243-0f12-4bf7-8f32-c7f13fa8a118-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2de80243-0f12-4bf7-8f32-c7f13fa8a118" (UID: "2de80243-0f12-4bf7-8f32-c7f13fa8a118"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:34:25 crc kubenswrapper[4830]: I0131 09:34:25.470941 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2de80243-0f12-4bf7-8f32-c7f13fa8a118-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:34:25 crc kubenswrapper[4830]: I0131 09:34:25.526716 4830 generic.go:334] "Generic (PLEG): container finished" podID="2de80243-0f12-4bf7-8f32-c7f13fa8a118" containerID="d31c85e4a1ad13dd1a94eb44c51b404e181072cd1ae8cf20dddf10af376d914d" exitCode=0 Jan 31 09:34:25 crc kubenswrapper[4830]: I0131 09:34:25.526766 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7brtk" event={"ID":"2de80243-0f12-4bf7-8f32-c7f13fa8a118","Type":"ContainerDied","Data":"d31c85e4a1ad13dd1a94eb44c51b404e181072cd1ae8cf20dddf10af376d914d"} Jan 31 09:34:25 crc kubenswrapper[4830]: I0131 09:34:25.526820 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7brtk" Jan 31 09:34:25 crc kubenswrapper[4830]: I0131 09:34:25.526833 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7brtk" event={"ID":"2de80243-0f12-4bf7-8f32-c7f13fa8a118","Type":"ContainerDied","Data":"a8ad1705ca59b820942597e75314b0050401357dee6375c9b1dc808179ae35d7"} Jan 31 09:34:25 crc kubenswrapper[4830]: I0131 09:34:25.526855 4830 scope.go:117] "RemoveContainer" containerID="d31c85e4a1ad13dd1a94eb44c51b404e181072cd1ae8cf20dddf10af376d914d" Jan 31 09:34:25 crc kubenswrapper[4830]: I0131 09:34:25.554573 4830 scope.go:117] "RemoveContainer" containerID="86131869bb37ff7a26b0f2e97b7b804cf9a2c34dce496445eb5751569810ab99" Jan 31 09:34:25 crc kubenswrapper[4830]: I0131 09:34:25.575593 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7brtk"] Jan 31 09:34:25 crc kubenswrapper[4830]: I0131 09:34:25.589758 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7brtk"] Jan 31 09:34:25 crc kubenswrapper[4830]: I0131 09:34:25.598956 4830 scope.go:117] "RemoveContainer" containerID="6caa06dfe58b3b32c52f8b6d3af154ac9287bd8bcfe6527d4d466354763bb3af" Jan 31 09:34:25 crc kubenswrapper[4830]: I0131 09:34:25.656091 4830 scope.go:117] "RemoveContainer" containerID="d31c85e4a1ad13dd1a94eb44c51b404e181072cd1ae8cf20dddf10af376d914d" Jan 31 09:34:25 crc kubenswrapper[4830]: E0131 09:34:25.656569 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d31c85e4a1ad13dd1a94eb44c51b404e181072cd1ae8cf20dddf10af376d914d\": container with ID starting with d31c85e4a1ad13dd1a94eb44c51b404e181072cd1ae8cf20dddf10af376d914d not found: ID does not exist" containerID="d31c85e4a1ad13dd1a94eb44c51b404e181072cd1ae8cf20dddf10af376d914d" Jan 31 09:34:25 crc kubenswrapper[4830]: I0131 09:34:25.656604 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d31c85e4a1ad13dd1a94eb44c51b404e181072cd1ae8cf20dddf10af376d914d"} err="failed to get container status \"d31c85e4a1ad13dd1a94eb44c51b404e181072cd1ae8cf20dddf10af376d914d\": rpc error: code = NotFound desc = could not find container \"d31c85e4a1ad13dd1a94eb44c51b404e181072cd1ae8cf20dddf10af376d914d\": container with ID starting with d31c85e4a1ad13dd1a94eb44c51b404e181072cd1ae8cf20dddf10af376d914d not found: ID does not exist" Jan 31 09:34:25 crc 
kubenswrapper[4830]: I0131 09:34:25.656631 4830 scope.go:117] "RemoveContainer" containerID="86131869bb37ff7a26b0f2e97b7b804cf9a2c34dce496445eb5751569810ab99" Jan 31 09:34:25 crc kubenswrapper[4830]: E0131 09:34:25.657294 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86131869bb37ff7a26b0f2e97b7b804cf9a2c34dce496445eb5751569810ab99\": container with ID starting with 86131869bb37ff7a26b0f2e97b7b804cf9a2c34dce496445eb5751569810ab99 not found: ID does not exist" containerID="86131869bb37ff7a26b0f2e97b7b804cf9a2c34dce496445eb5751569810ab99" Jan 31 09:34:25 crc kubenswrapper[4830]: I0131 09:34:25.657348 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86131869bb37ff7a26b0f2e97b7b804cf9a2c34dce496445eb5751569810ab99"} err="failed to get container status \"86131869bb37ff7a26b0f2e97b7b804cf9a2c34dce496445eb5751569810ab99\": rpc error: code = NotFound desc = could not find container \"86131869bb37ff7a26b0f2e97b7b804cf9a2c34dce496445eb5751569810ab99\": container with ID starting with 86131869bb37ff7a26b0f2e97b7b804cf9a2c34dce496445eb5751569810ab99 not found: ID does not exist" Jan 31 09:34:25 crc kubenswrapper[4830]: I0131 09:34:25.657391 4830 scope.go:117] "RemoveContainer" containerID="6caa06dfe58b3b32c52f8b6d3af154ac9287bd8bcfe6527d4d466354763bb3af" Jan 31 09:34:25 crc kubenswrapper[4830]: E0131 09:34:25.657847 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6caa06dfe58b3b32c52f8b6d3af154ac9287bd8bcfe6527d4d466354763bb3af\": container with ID starting with 6caa06dfe58b3b32c52f8b6d3af154ac9287bd8bcfe6527d4d466354763bb3af not found: ID does not exist" containerID="6caa06dfe58b3b32c52f8b6d3af154ac9287bd8bcfe6527d4d466354763bb3af" Jan 31 09:34:25 crc kubenswrapper[4830]: I0131 09:34:25.657884 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6caa06dfe58b3b32c52f8b6d3af154ac9287bd8bcfe6527d4d466354763bb3af"} err="failed to get container status \"6caa06dfe58b3b32c52f8b6d3af154ac9287bd8bcfe6527d4d466354763bb3af\": rpc error: code = NotFound desc = could not find container \"6caa06dfe58b3b32c52f8b6d3af154ac9287bd8bcfe6527d4d466354763bb3af\": container with ID starting with 6caa06dfe58b3b32c52f8b6d3af154ac9287bd8bcfe6527d4d466354763bb3af not found: ID does not exist" Jan 31 09:34:26 crc kubenswrapper[4830]: I0131 09:34:26.327967 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2de80243-0f12-4bf7-8f32-c7f13fa8a118" path="/var/lib/kubelet/pods/2de80243-0f12-4bf7-8f32-c7f13fa8a118/volumes" Jan 31 09:34:35 crc kubenswrapper[4830]: I0131 09:34:35.083986 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-ztgnf"] Jan 31 09:34:35 crc kubenswrapper[4830]: I0131 09:34:35.102024 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-ztgnf"] Jan 31 09:34:36 crc kubenswrapper[4830]: I0131 09:34:36.273813 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce550202-087a-49b1-8796-10f03f0ab9be" path="/var/lib/kubelet/pods/ce550202-087a-49b1-8796-10f03f0ab9be/volumes" Jan 31 09:34:41 crc kubenswrapper[4830]: I0131 09:34:41.043875 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-t2klw"] Jan 31 09:34:41 crc kubenswrapper[4830]: I0131 09:34:41.058925 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/placement-db-sync-t2klw"] Jan 31 09:34:42 crc kubenswrapper[4830]: I0131 09:34:42.266821 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8de8318-1eda-43cc-b522-86d6492c6376" path="/var/lib/kubelet/pods/b8de8318-1eda-43cc-b522-86d6492c6376/volumes" Jan 31 09:34:53 crc kubenswrapper[4830]: I0131 09:34:53.060433 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-pr2kp"] Jan 31 09:34:53 crc kubenswrapper[4830]: I0131 09:34:53.082103 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-pr2kp"] Jan 31 09:34:53 crc kubenswrapper[4830]: I0131 09:34:53.099194 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-t2pjz"] Jan 31 09:34:53 crc kubenswrapper[4830]: I0131 09:34:53.112447 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-t2pjz"] Jan 31 09:34:54 crc kubenswrapper[4830]: I0131 09:34:54.267903 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9974259-ec4c-411a-ba74-95664c116f34" path="/var/lib/kubelet/pods/a9974259-ec4c-411a-ba74-95664c116f34/volumes" Jan 31 09:34:54 crc kubenswrapper[4830]: I0131 09:34:54.269983 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb9aed03-7e56-43de-92fc-3ac6352194af" path="/var/lib/kubelet/pods/bb9aed03-7e56-43de-92fc-3ac6352194af/volumes" Jan 31 09:34:59 crc kubenswrapper[4830]: I0131 09:34:59.936416 4830 generic.go:334] "Generic (PLEG): container finished" podID="45dd1e1a-bac5-460f-9c7e-df3f8e11aa52" containerID="09a30a36459d23a3bf1c95b5431b60063f6f40f00a77ad968bccb725c174b8d9" exitCode=0 Jan 31 09:34:59 crc kubenswrapper[4830]: I0131 09:34:59.936996 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2" event={"ID":"45dd1e1a-bac5-460f-9c7e-df3f8e11aa52","Type":"ContainerDied","Data":"09a30a36459d23a3bf1c95b5431b60063f6f40f00a77ad968bccb725c174b8d9"} Jan 31 09:35:01 crc kubenswrapper[4830]: I0131 09:35:01.502118 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2" Jan 31 09:35:01 crc kubenswrapper[4830]: I0131 09:35:01.632479 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45dd1e1a-bac5-460f-9c7e-df3f8e11aa52-inventory\") pod \"45dd1e1a-bac5-460f-9c7e-df3f8e11aa52\" (UID: \"45dd1e1a-bac5-460f-9c7e-df3f8e11aa52\") " Jan 31 09:35:01 crc kubenswrapper[4830]: I0131 09:35:01.632708 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45dd1e1a-bac5-460f-9c7e-df3f8e11aa52-ssh-key-openstack-edpm-ipam\") pod \"45dd1e1a-bac5-460f-9c7e-df3f8e11aa52\" (UID: \"45dd1e1a-bac5-460f-9c7e-df3f8e11aa52\") " Jan 31 09:35:01 crc kubenswrapper[4830]: I0131 09:35:01.632815 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2drj4\" (UniqueName: \"kubernetes.io/projected/45dd1e1a-bac5-460f-9c7e-df3f8e11aa52-kube-api-access-2drj4\") pod \"45dd1e1a-bac5-460f-9c7e-df3f8e11aa52\" (UID: \"45dd1e1a-bac5-460f-9c7e-df3f8e11aa52\") " Jan 31 09:35:01 crc kubenswrapper[4830]: I0131 09:35:01.632931 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45dd1e1a-bac5-460f-9c7e-df3f8e11aa52-bootstrap-combined-ca-bundle\") pod \"45dd1e1a-bac5-460f-9c7e-df3f8e11aa52\" (UID: \"45dd1e1a-bac5-460f-9c7e-df3f8e11aa52\") " Jan 31 09:35:01 crc kubenswrapper[4830]: I0131 09:35:01.640453 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45dd1e1a-bac5-460f-9c7e-df3f8e11aa52-kube-api-access-2drj4" (OuterVolumeSpecName: "kube-api-access-2drj4") pod "45dd1e1a-bac5-460f-9c7e-df3f8e11aa52" (UID: "45dd1e1a-bac5-460f-9c7e-df3f8e11aa52"). InnerVolumeSpecName "kube-api-access-2drj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:35:01 crc kubenswrapper[4830]: I0131 09:35:01.640812 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45dd1e1a-bac5-460f-9c7e-df3f8e11aa52-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "45dd1e1a-bac5-460f-9c7e-df3f8e11aa52" (UID: "45dd1e1a-bac5-460f-9c7e-df3f8e11aa52"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:35:01 crc kubenswrapper[4830]: I0131 09:35:01.681994 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45dd1e1a-bac5-460f-9c7e-df3f8e11aa52-inventory" (OuterVolumeSpecName: "inventory") pod "45dd1e1a-bac5-460f-9c7e-df3f8e11aa52" (UID: "45dd1e1a-bac5-460f-9c7e-df3f8e11aa52"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:35:01 crc kubenswrapper[4830]: I0131 09:35:01.692117 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45dd1e1a-bac5-460f-9c7e-df3f8e11aa52-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "45dd1e1a-bac5-460f-9c7e-df3f8e11aa52" (UID: "45dd1e1a-bac5-460f-9c7e-df3f8e11aa52"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:35:01 crc kubenswrapper[4830]: I0131 09:35:01.736365 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2drj4\" (UniqueName: \"kubernetes.io/projected/45dd1e1a-bac5-460f-9c7e-df3f8e11aa52-kube-api-access-2drj4\") on node \"crc\" DevicePath \"\"" Jan 31 09:35:01 crc kubenswrapper[4830]: I0131 09:35:01.736814 4830 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45dd1e1a-bac5-460f-9c7e-df3f8e11aa52-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:35:01 crc kubenswrapper[4830]: I0131 09:35:01.736832 4830 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45dd1e1a-bac5-460f-9c7e-df3f8e11aa52-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 09:35:01 crc kubenswrapper[4830]: I0131 09:35:01.736845 4830 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45dd1e1a-bac5-460f-9c7e-df3f8e11aa52-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 09:35:01 crc kubenswrapper[4830]: I0131 09:35:01.959261 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2" event={"ID":"45dd1e1a-bac5-460f-9c7e-df3f8e11aa52","Type":"ContainerDied","Data":"6ac87c2f5cf8156bc6c5052f448d7785f17cc9c32c2cf9fea511360b8523b31a"} Jan 31 09:35:01 crc kubenswrapper[4830]: I0131 09:35:01.959314 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2" Jan 31 09:35:01 crc kubenswrapper[4830]: I0131 09:35:01.959316 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ac87c2f5cf8156bc6c5052f448d7785f17cc9c32c2cf9fea511360b8523b31a" Jan 31 09:35:02 crc kubenswrapper[4830]: I0131 09:35:02.067038 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m"] Jan 31 09:35:02 crc kubenswrapper[4830]: E0131 09:35:02.075777 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2de80243-0f12-4bf7-8f32-c7f13fa8a118" containerName="extract-content" Jan 31 09:35:02 crc kubenswrapper[4830]: I0131 09:35:02.075831 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2de80243-0f12-4bf7-8f32-c7f13fa8a118" containerName="extract-content" Jan 31 09:35:02 crc kubenswrapper[4830]: E0131 09:35:02.075883 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2de80243-0f12-4bf7-8f32-c7f13fa8a118" containerName="registry-server" Jan 31 09:35:02 crc kubenswrapper[4830]: I0131 09:35:02.075897 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2de80243-0f12-4bf7-8f32-c7f13fa8a118" containerName="registry-server" Jan 31 09:35:02 crc kubenswrapper[4830]: E0131 09:35:02.075922 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2de80243-0f12-4bf7-8f32-c7f13fa8a118" containerName="extract-utilities" Jan 31 09:35:02 crc kubenswrapper[4830]: I0131 09:35:02.075930 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2de80243-0f12-4bf7-8f32-c7f13fa8a118" containerName="extract-utilities" Jan 31 09:35:02 crc kubenswrapper[4830]: E0131 09:35:02.075949 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dd1e1a-bac5-460f-9c7e-df3f8e11aa52" 
containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 09:35:02 crc kubenswrapper[4830]: I0131 09:35:02.075958 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dd1e1a-bac5-460f-9c7e-df3f8e11aa52" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 09:35:02 crc kubenswrapper[4830]: I0131 09:35:02.076224 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dd1e1a-bac5-460f-9c7e-df3f8e11aa52" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 09:35:02 crc kubenswrapper[4830]: I0131 09:35:02.076241 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2de80243-0f12-4bf7-8f32-c7f13fa8a118" containerName="registry-server" Jan 31 09:35:02 crc kubenswrapper[4830]: I0131 09:35:02.077325 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m" Jan 31 09:35:02 crc kubenswrapper[4830]: I0131 09:35:02.079795 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 09:35:02 crc kubenswrapper[4830]: I0131 09:35:02.080005 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m"] Jan 31 09:35:02 crc kubenswrapper[4830]: I0131 09:35:02.081314 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 09:35:02 crc kubenswrapper[4830]: I0131 09:35:02.081513 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 09:35:02 crc kubenswrapper[4830]: I0131 09:35:02.086361 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vd24j" Jan 31 09:35:02 crc kubenswrapper[4830]: I0131 09:35:02.148444 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88f0db8e-690d-4b60-8eb5-473a1ab51029-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m\" (UID: \"88f0db8e-690d-4b60-8eb5-473a1ab51029\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m" Jan 31 09:35:02 crc kubenswrapper[4830]: I0131 09:35:02.148827 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjmn6\" (UniqueName: \"kubernetes.io/projected/88f0db8e-690d-4b60-8eb5-473a1ab51029-kube-api-access-fjmn6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m\" (UID: \"88f0db8e-690d-4b60-8eb5-473a1ab51029\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m" Jan 31 09:35:02 crc kubenswrapper[4830]: I0131 09:35:02.148932 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/88f0db8e-690d-4b60-8eb5-473a1ab51029-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m\" (UID: \"88f0db8e-690d-4b60-8eb5-473a1ab51029\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m" Jan 31 09:35:02 crc kubenswrapper[4830]: I0131 09:35:02.251440 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjmn6\" (UniqueName: \"kubernetes.io/projected/88f0db8e-690d-4b60-8eb5-473a1ab51029-kube-api-access-fjmn6\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m\" (UID: \"88f0db8e-690d-4b60-8eb5-473a1ab51029\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m" Jan 31 09:35:02 crc kubenswrapper[4830]: I0131 09:35:02.251585 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/88f0db8e-690d-4b60-8eb5-473a1ab51029-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m\" (UID: \"88f0db8e-690d-4b60-8eb5-473a1ab51029\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m" Jan 31 09:35:02 crc kubenswrapper[4830]: I0131 09:35:02.251665 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88f0db8e-690d-4b60-8eb5-473a1ab51029-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m\" (UID: \"88f0db8e-690d-4b60-8eb5-473a1ab51029\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m" Jan 31 09:35:02 crc kubenswrapper[4830]: I0131 09:35:02.260549 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/88f0db8e-690d-4b60-8eb5-473a1ab51029-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m\" (UID: \"88f0db8e-690d-4b60-8eb5-473a1ab51029\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m" Jan 31 09:35:02 crc kubenswrapper[4830]: I0131 09:35:02.265248 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88f0db8e-690d-4b60-8eb5-473a1ab51029-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m\" (UID: \"88f0db8e-690d-4b60-8eb5-473a1ab51029\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m" Jan 31 09:35:02 crc kubenswrapper[4830]: I0131 09:35:02.288051 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjmn6\" (UniqueName: \"kubernetes.io/projected/88f0db8e-690d-4b60-8eb5-473a1ab51029-kube-api-access-fjmn6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m\" (UID: \"88f0db8e-690d-4b60-8eb5-473a1ab51029\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m" Jan 31 09:35:02 crc kubenswrapper[4830]: I0131 09:35:02.408579 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m" Jan 31 09:35:03 crc kubenswrapper[4830]: I0131 09:35:03.280099 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m"] Jan 31 09:35:03 crc kubenswrapper[4830]: I0131 09:35:03.986037 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m" event={"ID":"88f0db8e-690d-4b60-8eb5-473a1ab51029","Type":"ContainerStarted","Data":"0ca05769d5b12b5fa76574e40d9ee4ce5f575a50b9072e4ce090323ba366445e"} Jan 31 09:35:05 crc kubenswrapper[4830]: I0131 09:35:05.000117 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m" event={"ID":"88f0db8e-690d-4b60-8eb5-473a1ab51029","Type":"ContainerStarted","Data":"ff37456af2816cfe594fcb8ed2b14aac802b3dc3adb9b2251349f5c2478cbf28"} Jan 31 09:35:05 crc kubenswrapper[4830]: I0131 09:35:05.027807 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m" podStartSLOduration=2.532502568 podStartE2EDuration="3.0277833s" podCreationTimestamp="2026-01-31 09:35:02 +0000 UTC" firstStartedPulling="2026-01-31 09:35:03.284589828 +0000 UTC m=+2047.777952270" lastFinishedPulling="2026-01-31 09:35:03.77987056 +0000 UTC m=+2048.273233002" observedRunningTime="2026-01-31 09:35:05.017787786 +0000 UTC m=+2049.511150228" watchObservedRunningTime="2026-01-31 09:35:05.0277833 +0000 UTC m=+2049.521145742" Jan 31 09:35:12 crc kubenswrapper[4830]: I0131 09:35:12.346503 4830 scope.go:117] "RemoveContainer" containerID="296c2ca9ab37f0c52f42055edffd7d00fb9c21ff74b16a10dc1b092b5ca95878" Jan 31 09:35:12 crc kubenswrapper[4830]: I0131 09:35:12.389305 4830 scope.go:117] "RemoveContainer" containerID="c9fb7d799f1d6dd9a5876fd3363ab7922287e7e766c564c9787e2b2952eb9668" Jan 31 09:35:12 crc kubenswrapper[4830]: I0131 09:35:12.453252 4830 scope.go:117] "RemoveContainer" containerID="e10402784224c0a8f234f5cbb87b74b8d71f6b4e4370aa3a10b8d8b1768b3e70" Jan 31 09:35:12 crc kubenswrapper[4830]: I0131 09:35:12.518880 4830 scope.go:117] "RemoveContainer" containerID="322baa3e2f6855883d523ac77dd9bbb2e8423fc4f8a4b4da22034d570a99227b" Jan 31 09:35:19 crc kubenswrapper[4830]: I0131 09:35:19.071789 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-w6kxz"] Jan 31 09:35:19 crc kubenswrapper[4830]: I0131 09:35:19.087850 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-w6kxz"] Jan 31 09:35:20 crc kubenswrapper[4830]: I0131 09:35:20.269299 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0617092f-40a9-4d3d-b472-f284a2b24000" path="/var/lib/kubelet/pods/0617092f-40a9-4d3d-b472-f284a2b24000/volumes" Jan 31 09:35:56 crc kubenswrapper[4830]: I0131 09:35:56.124865 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dssrq"] Jan 31 09:35:56 crc kubenswrapper[4830]: I0131 09:35:56.129654 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dssrq" Jan 31 09:35:56 crc kubenswrapper[4830]: I0131 09:35:56.148879 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dssrq"] Jan 31 09:35:56 crc kubenswrapper[4830]: I0131 09:35:56.285684 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e731b9b0-d261-4533-b8c3-92af24d06c58-utilities\") pod \"redhat-marketplace-dssrq\" (UID: \"e731b9b0-d261-4533-b8c3-92af24d06c58\") " pod="openshift-marketplace/redhat-marketplace-dssrq" Jan 31 09:35:56 crc kubenswrapper[4830]: I0131 09:35:56.286422 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e731b9b0-d261-4533-b8c3-92af24d06c58-catalog-content\") pod \"redhat-marketplace-dssrq\" (UID: \"e731b9b0-d261-4533-b8c3-92af24d06c58\") " pod="openshift-marketplace/redhat-marketplace-dssrq" Jan 31 09:35:56 crc kubenswrapper[4830]: I0131 09:35:56.286518 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7br8z\" (UniqueName: \"kubernetes.io/projected/e731b9b0-d261-4533-b8c3-92af24d06c58-kube-api-access-7br8z\") pod \"redhat-marketplace-dssrq\" (UID: \"e731b9b0-d261-4533-b8c3-92af24d06c58\") " pod="openshift-marketplace/redhat-marketplace-dssrq" Jan 31 09:35:56 crc kubenswrapper[4830]: I0131 09:35:56.393371 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7br8z\" (UniqueName: \"kubernetes.io/projected/e731b9b0-d261-4533-b8c3-92af24d06c58-kube-api-access-7br8z\") pod \"redhat-marketplace-dssrq\" (UID: \"e731b9b0-d261-4533-b8c3-92af24d06c58\") " pod="openshift-marketplace/redhat-marketplace-dssrq" Jan 31 09:35:56 crc kubenswrapper[4830]: I0131 09:35:56.394132 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e731b9b0-d261-4533-b8c3-92af24d06c58-utilities\") pod \"redhat-marketplace-dssrq\" (UID: \"e731b9b0-d261-4533-b8c3-92af24d06c58\") " pod="openshift-marketplace/redhat-marketplace-dssrq" Jan 31 09:35:56 crc kubenswrapper[4830]: I0131 09:35:56.394291 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e731b9b0-d261-4533-b8c3-92af24d06c58-catalog-content\") pod \"redhat-marketplace-dssrq\" (UID: \"e731b9b0-d261-4533-b8c3-92af24d06c58\") " pod="openshift-marketplace/redhat-marketplace-dssrq" Jan 31 09:35:56 crc kubenswrapper[4830]: I0131 09:35:56.394797 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e731b9b0-d261-4533-b8c3-92af24d06c58-catalog-content\") pod \"redhat-marketplace-dssrq\" (UID: \"e731b9b0-d261-4533-b8c3-92af24d06c58\") " pod="openshift-marketplace/redhat-marketplace-dssrq" Jan 31 09:35:56 crc kubenswrapper[4830]: I0131 09:35:56.395752 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e731b9b0-d261-4533-b8c3-92af24d06c58-utilities\") pod \"redhat-marketplace-dssrq\" (UID: \"e731b9b0-d261-4533-b8c3-92af24d06c58\") " pod="openshift-marketplace/redhat-marketplace-dssrq" Jan 31 09:35:56 crc kubenswrapper[4830]: I0131 09:35:56.416524 4830 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7br8z\" (UniqueName: \"kubernetes.io/projected/e731b9b0-d261-4533-b8c3-92af24d06c58-kube-api-access-7br8z\") pod \"redhat-marketplace-dssrq\" (UID: \"e731b9b0-d261-4533-b8c3-92af24d06c58\") " pod="openshift-marketplace/redhat-marketplace-dssrq" Jan 31 09:35:56 crc kubenswrapper[4830]: I0131 09:35:56.455213 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dssrq" Jan 31 09:35:57 crc kubenswrapper[4830]: I0131 09:35:57.039219 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dssrq"] Jan 31 09:35:57 crc kubenswrapper[4830]: I0131 09:35:57.677670 4830 generic.go:334] "Generic (PLEG): container finished" podID="e731b9b0-d261-4533-b8c3-92af24d06c58" containerID="10e86d29ff85301674a2f5420e8f904672902a3fe35cf2f35bc9fa2510454974" exitCode=0 Jan 31 09:35:57 crc kubenswrapper[4830]: I0131 09:35:57.677787 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dssrq" event={"ID":"e731b9b0-d261-4533-b8c3-92af24d06c58","Type":"ContainerDied","Data":"10e86d29ff85301674a2f5420e8f904672902a3fe35cf2f35bc9fa2510454974"} Jan 31 09:35:57 crc kubenswrapper[4830]: I0131 09:35:57.678062 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dssrq" event={"ID":"e731b9b0-d261-4533-b8c3-92af24d06c58","Type":"ContainerStarted","Data":"b8acb0633bc673aabb951ec191290055a4b4540b193f5c5a958c10ff06784623"} Jan 31 09:35:58 crc kubenswrapper[4830]: I0131 09:35:58.691615 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dssrq" event={"ID":"e731b9b0-d261-4533-b8c3-92af24d06c58","Type":"ContainerStarted","Data":"4b7374e002fe770a89d721b77222c34a3aab4415229bf8d4c4409dbc265ac9ab"} Jan 31 09:36:00 crc kubenswrapper[4830]: I0131 09:36:00.713912 4830 generic.go:334] "Generic (PLEG): container finished" podID="e731b9b0-d261-4533-b8c3-92af24d06c58" containerID="4b7374e002fe770a89d721b77222c34a3aab4415229bf8d4c4409dbc265ac9ab" exitCode=0 Jan 31 09:36:00 crc kubenswrapper[4830]: I0131 09:36:00.713964 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dssrq" event={"ID":"e731b9b0-d261-4533-b8c3-92af24d06c58","Type":"ContainerDied","Data":"4b7374e002fe770a89d721b77222c34a3aab4415229bf8d4c4409dbc265ac9ab"} Jan 31 09:36:02 crc kubenswrapper[4830]: I0131 09:36:02.740599 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dssrq" event={"ID":"e731b9b0-d261-4533-b8c3-92af24d06c58","Type":"ContainerStarted","Data":"242d541eb2be0ae9456f2451cfc8228571af3fc6d76224c65705501fa9f61896"} Jan 31 09:36:02 crc kubenswrapper[4830]: I0131 09:36:02.779754 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dssrq" podStartSLOduration=2.468271889 podStartE2EDuration="6.779716589s" podCreationTimestamp="2026-01-31 09:35:56 +0000 UTC" firstStartedPulling="2026-01-31 09:35:57.679960307 +0000 UTC m=+2102.173322749" lastFinishedPulling="2026-01-31 09:36:01.991405007 +0000 UTC m=+2106.484767449" observedRunningTime="2026-01-31 09:36:02.763480597 +0000 UTC m=+2107.256843039" watchObservedRunningTime="2026-01-31 09:36:02.779716589 +0000 UTC m=+2107.273079031" Jan 31 09:36:06 crc kubenswrapper[4830]: I0131 09:36:06.456335 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-dssrq" Jan 31 09:36:06 crc kubenswrapper[4830]: I0131 09:36:06.457074 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dssrq" Jan 31 09:36:06 crc kubenswrapper[4830]: I0131 09:36:06.526077 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dssrq" Jan 31 09:36:12 crc kubenswrapper[4830]: I0131 09:36:12.714044 4830 scope.go:117] "RemoveContainer" containerID="9bec4c50fcc4d62de1378f26906fe163c84640b76bf85c39066f6aa600cbcc69" Jan 31 09:36:14 crc kubenswrapper[4830]: I0131 09:36:14.353574 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:36:14 crc kubenswrapper[4830]: I0131 09:36:14.354601 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:36:16 crc kubenswrapper[4830]: I0131 09:36:16.047141 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-sl874"] Jan 31 09:36:16 crc kubenswrapper[4830]: I0131 09:36:16.061061 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-v2w79"] Jan 31 09:36:16 crc kubenswrapper[4830]: I0131 09:36:16.076715 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-sl874"] Jan 31 09:36:16 crc kubenswrapper[4830]: I0131 09:36:16.087794 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-v2w79"] Jan 31 09:36:16 crc kubenswrapper[4830]: I0131 09:36:16.270623 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="631e221b-b504-4f59-8848-c9427f67c0df" path="/var/lib/kubelet/pods/631e221b-b504-4f59-8848-c9427f67c0df/volumes" Jan 31 09:36:16 crc kubenswrapper[4830]: I0131 09:36:16.272638 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ced551a8-224d-488b-aa58-c424e387ccca" path="/var/lib/kubelet/pods/ced551a8-224d-488b-aa58-c424e387ccca/volumes" Jan 31 09:36:16 crc kubenswrapper[4830]: I0131 09:36:16.508207 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dssrq" Jan 31 09:36:16 crc kubenswrapper[4830]: I0131 09:36:16.565381 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dssrq"] Jan 31 09:36:16 crc kubenswrapper[4830]: I0131 09:36:16.918034 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dssrq" podUID="e731b9b0-d261-4533-b8c3-92af24d06c58" containerName="registry-server" containerID="cri-o://242d541eb2be0ae9456f2451cfc8228571af3fc6d76224c65705501fa9f61896" gracePeriod=2 Jan 31 09:36:17 crc kubenswrapper[4830]: I0131 09:36:17.059378 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-184b-account-create-update-gwvpg"] Jan 31 09:36:17 crc kubenswrapper[4830]: I0131 09:36:17.074623 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-api-184b-account-create-update-gwvpg"] Jan 31 09:36:17 crc kubenswrapper[4830]: I0131 09:36:17.477439 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dssrq" Jan 31 09:36:17 crc kubenswrapper[4830]: I0131 09:36:17.548244 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e731b9b0-d261-4533-b8c3-92af24d06c58-utilities\") pod \"e731b9b0-d261-4533-b8c3-92af24d06c58\" (UID: \"e731b9b0-d261-4533-b8c3-92af24d06c58\") " Jan 31 09:36:17 crc kubenswrapper[4830]: I0131 09:36:17.548746 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7br8z\" (UniqueName: \"kubernetes.io/projected/e731b9b0-d261-4533-b8c3-92af24d06c58-kube-api-access-7br8z\") pod \"e731b9b0-d261-4533-b8c3-92af24d06c58\" (UID: \"e731b9b0-d261-4533-b8c3-92af24d06c58\") " Jan 31 09:36:17 crc kubenswrapper[4830]: I0131 09:36:17.549180 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e731b9b0-d261-4533-b8c3-92af24d06c58-utilities" (OuterVolumeSpecName: "utilities") pod "e731b9b0-d261-4533-b8c3-92af24d06c58" (UID: "e731b9b0-d261-4533-b8c3-92af24d06c58"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:36:17 crc kubenswrapper[4830]: I0131 09:36:17.549382 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e731b9b0-d261-4533-b8c3-92af24d06c58-catalog-content\") pod \"e731b9b0-d261-4533-b8c3-92af24d06c58\" (UID: \"e731b9b0-d261-4533-b8c3-92af24d06c58\") " Jan 31 09:36:17 crc kubenswrapper[4830]: I0131 09:36:17.550504 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e731b9b0-d261-4533-b8c3-92af24d06c58-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:36:17 crc kubenswrapper[4830]: I0131 09:36:17.556148 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e731b9b0-d261-4533-b8c3-92af24d06c58-kube-api-access-7br8z" (OuterVolumeSpecName: "kube-api-access-7br8z") pod "e731b9b0-d261-4533-b8c3-92af24d06c58" (UID: "e731b9b0-d261-4533-b8c3-92af24d06c58"). InnerVolumeSpecName "kube-api-access-7br8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:36:17 crc kubenswrapper[4830]: I0131 09:36:17.573153 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e731b9b0-d261-4533-b8c3-92af24d06c58-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e731b9b0-d261-4533-b8c3-92af24d06c58" (UID: "e731b9b0-d261-4533-b8c3-92af24d06c58"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:36:17 crc kubenswrapper[4830]: I0131 09:36:17.653097 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7br8z\" (UniqueName: \"kubernetes.io/projected/e731b9b0-d261-4533-b8c3-92af24d06c58-kube-api-access-7br8z\") on node \"crc\" DevicePath \"\"" Jan 31 09:36:17 crc kubenswrapper[4830]: I0131 09:36:17.653445 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e731b9b0-d261-4533-b8c3-92af24d06c58-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:36:17 crc kubenswrapper[4830]: I0131 09:36:17.932380 4830 generic.go:334] "Generic (PLEG): container finished" podID="e731b9b0-d261-4533-b8c3-92af24d06c58" containerID="242d541eb2be0ae9456f2451cfc8228571af3fc6d76224c65705501fa9f61896" exitCode=0 Jan 31 09:36:17 crc kubenswrapper[4830]: I0131 09:36:17.932442 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dssrq" event={"ID":"e731b9b0-d261-4533-b8c3-92af24d06c58","Type":"ContainerDied","Data":"242d541eb2be0ae9456f2451cfc8228571af3fc6d76224c65705501fa9f61896"} Jan 31 09:36:17 crc kubenswrapper[4830]: I0131 09:36:17.932445 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dssrq" Jan 31 09:36:17 crc kubenswrapper[4830]: I0131 09:36:17.932504 4830 scope.go:117] "RemoveContainer" containerID="242d541eb2be0ae9456f2451cfc8228571af3fc6d76224c65705501fa9f61896" Jan 31 09:36:17 crc kubenswrapper[4830]: I0131 09:36:17.932489 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dssrq" event={"ID":"e731b9b0-d261-4533-b8c3-92af24d06c58","Type":"ContainerDied","Data":"b8acb0633bc673aabb951ec191290055a4b4540b193f5c5a958c10ff06784623"} Jan 31 09:36:17 crc kubenswrapper[4830]: I0131 09:36:17.966516 4830 scope.go:117] "RemoveContainer" containerID="4b7374e002fe770a89d721b77222c34a3aab4415229bf8d4c4409dbc265ac9ab" Jan 31 09:36:17 crc kubenswrapper[4830]: I0131 09:36:17.973746 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dssrq"] Jan 31 09:36:17 crc kubenswrapper[4830]: I0131 09:36:17.986082 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dssrq"] Jan 31 09:36:18 crc kubenswrapper[4830]: I0131 09:36:18.007392 4830 scope.go:117] "RemoveContainer" containerID="10e86d29ff85301674a2f5420e8f904672902a3fe35cf2f35bc9fa2510454974" Jan 31 09:36:18 crc kubenswrapper[4830]: I0131 09:36:18.046001 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-6858-account-create-update-mxmt7"] Jan 31 09:36:18 crc kubenswrapper[4830]: I0131 09:36:18.060455 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-6858-account-create-update-mxmt7"] Jan 31 09:36:18 crc kubenswrapper[4830]: I0131 09:36:18.071669 4830 scope.go:117] "RemoveContainer" containerID="242d541eb2be0ae9456f2451cfc8228571af3fc6d76224c65705501fa9f61896" Jan 31 09:36:18 crc kubenswrapper[4830]: E0131 09:36:18.072353 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"242d541eb2be0ae9456f2451cfc8228571af3fc6d76224c65705501fa9f61896\": container with ID starting with 242d541eb2be0ae9456f2451cfc8228571af3fc6d76224c65705501fa9f61896 not found: ID does not exist" 
containerID="242d541eb2be0ae9456f2451cfc8228571af3fc6d76224c65705501fa9f61896" Jan 31 09:36:18 crc kubenswrapper[4830]: I0131 09:36:18.072386 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"242d541eb2be0ae9456f2451cfc8228571af3fc6d76224c65705501fa9f61896"} err="failed to get container status \"242d541eb2be0ae9456f2451cfc8228571af3fc6d76224c65705501fa9f61896\": rpc error: code = NotFound desc = could not find container \"242d541eb2be0ae9456f2451cfc8228571af3fc6d76224c65705501fa9f61896\": container with ID starting with 242d541eb2be0ae9456f2451cfc8228571af3fc6d76224c65705501fa9f61896 not found: ID does not exist" Jan 31 09:36:18 crc kubenswrapper[4830]: I0131 09:36:18.073107 4830 scope.go:117] "RemoveContainer" containerID="4b7374e002fe770a89d721b77222c34a3aab4415229bf8d4c4409dbc265ac9ab" Jan 31 09:36:18 crc kubenswrapper[4830]: E0131 09:36:18.073640 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b7374e002fe770a89d721b77222c34a3aab4415229bf8d4c4409dbc265ac9ab\": container with ID starting with 4b7374e002fe770a89d721b77222c34a3aab4415229bf8d4c4409dbc265ac9ab not found: ID does not exist" containerID="4b7374e002fe770a89d721b77222c34a3aab4415229bf8d4c4409dbc265ac9ab" Jan 31 09:36:18 crc kubenswrapper[4830]: I0131 09:36:18.073699 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b7374e002fe770a89d721b77222c34a3aab4415229bf8d4c4409dbc265ac9ab"} err="failed to get container status \"4b7374e002fe770a89d721b77222c34a3aab4415229bf8d4c4409dbc265ac9ab\": rpc error: code = NotFound desc = could not find container \"4b7374e002fe770a89d721b77222c34a3aab4415229bf8d4c4409dbc265ac9ab\": container with ID starting with 4b7374e002fe770a89d721b77222c34a3aab4415229bf8d4c4409dbc265ac9ab not found: ID does not exist" Jan 31 09:36:18 crc kubenswrapper[4830]: I0131 09:36:18.073781 4830 scope.go:117] "RemoveContainer" containerID="10e86d29ff85301674a2f5420e8f904672902a3fe35cf2f35bc9fa2510454974" Jan 31 09:36:18 crc kubenswrapper[4830]: E0131 09:36:18.074172 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10e86d29ff85301674a2f5420e8f904672902a3fe35cf2f35bc9fa2510454974\": container with ID starting with 10e86d29ff85301674a2f5420e8f904672902a3fe35cf2f35bc9fa2510454974 not found: ID does not exist" containerID="10e86d29ff85301674a2f5420e8f904672902a3fe35cf2f35bc9fa2510454974" Jan 31 09:36:18 crc kubenswrapper[4830]: I0131 09:36:18.074221 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10e86d29ff85301674a2f5420e8f904672902a3fe35cf2f35bc9fa2510454974"} err="failed to get container status \"10e86d29ff85301674a2f5420e8f904672902a3fe35cf2f35bc9fa2510454974\": rpc error: code = NotFound desc = could not find container \"10e86d29ff85301674a2f5420e8f904672902a3fe35cf2f35bc9fa2510454974\": container with ID starting with 10e86d29ff85301674a2f5420e8f904672902a3fe35cf2f35bc9fa2510454974 not found: ID does not exist" Jan 31 09:36:18 crc kubenswrapper[4830]: I0131 09:36:18.266104 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6090d149-6116-4ccf-981f-67ad48e42a1f" path="/var/lib/kubelet/pods/6090d149-6116-4ccf-981f-67ad48e42a1f/volumes" Jan 31 09:36:18 crc kubenswrapper[4830]: I0131 09:36:18.267231 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="9378cf4e-8ab3-4e97-8955-158a9b0c4c26" path="/var/lib/kubelet/pods/9378cf4e-8ab3-4e97-8955-158a9b0c4c26/volumes" Jan 31 09:36:18 crc kubenswrapper[4830]: I0131 09:36:18.268020 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e731b9b0-d261-4533-b8c3-92af24d06c58" path="/var/lib/kubelet/pods/e731b9b0-d261-4533-b8c3-92af24d06c58/volumes" Jan 31 09:36:19 crc kubenswrapper[4830]: I0131 09:36:19.039872 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-ttdxz"] Jan 31 09:36:19 crc kubenswrapper[4830]: I0131 09:36:19.057654 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-61ea-account-create-update-8wzt7"] Jan 31 09:36:19 crc kubenswrapper[4830]: I0131 09:36:19.071671 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-61ea-account-create-update-8wzt7"] Jan 31 09:36:19 crc kubenswrapper[4830]: I0131 09:36:19.115813 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-ttdxz"] Jan 31 09:36:20 crc kubenswrapper[4830]: I0131 09:36:20.266375 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89ec8e55-13ac-45e1-b5a6-b38ee34a1702" path="/var/lib/kubelet/pods/89ec8e55-13ac-45e1-b5a6-b38ee34a1702/volumes" Jan 31 09:36:20 crc kubenswrapper[4830]: I0131 09:36:20.267207 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb75ec26-4fae-4778-b520-828660b869cb" path="/var/lib/kubelet/pods/fb75ec26-4fae-4778-b520-828660b869cb/volumes" Jan 31 09:36:44 crc kubenswrapper[4830]: I0131 09:36:44.353007 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:36:44 crc kubenswrapper[4830]: I0131 09:36:44.353629 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:36:56 crc kubenswrapper[4830]: I0131 09:36:56.062954 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tgdwk"] Jan 31 09:36:56 crc kubenswrapper[4830]: I0131 09:36:56.078411 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tgdwk"] Jan 31 09:36:56 crc kubenswrapper[4830]: I0131 09:36:56.268066 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5da52f5f-a3fa-4dbc-8089-bf0dac06c78f" path="/var/lib/kubelet/pods/5da52f5f-a3fa-4dbc-8089-bf0dac06c78f/volumes" Jan 31 09:37:12 crc kubenswrapper[4830]: I0131 09:37:12.587884 4830 generic.go:334] "Generic (PLEG): container finished" podID="88f0db8e-690d-4b60-8eb5-473a1ab51029" containerID="ff37456af2816cfe594fcb8ed2b14aac802b3dc3adb9b2251349f5c2478cbf28" exitCode=0 Jan 31 09:37:12 crc kubenswrapper[4830]: I0131 09:37:12.588516 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m" event={"ID":"88f0db8e-690d-4b60-8eb5-473a1ab51029","Type":"ContainerDied","Data":"ff37456af2816cfe594fcb8ed2b14aac802b3dc3adb9b2251349f5c2478cbf28"} Jan 31 09:37:12 crc kubenswrapper[4830]: I0131 09:37:12.797922 4830 
scope.go:117] "RemoveContainer" containerID="33ce36b1ce77eaeff55c653e8e8346ca8a6889b2299dbcca7791a0a92d4139ed" Jan 31 09:37:12 crc kubenswrapper[4830]: I0131 09:37:12.834146 4830 scope.go:117] "RemoveContainer" containerID="d6d4cf01fce114709d28b91bcb628afcf2a464d89cd0dada7c17fb07e07a31f8" Jan 31 09:37:12 crc kubenswrapper[4830]: I0131 09:37:12.888060 4830 scope.go:117] "RemoveContainer" containerID="565dce176c45178d9047506d1a941b39571d032c601b0b7aa5a9b05eb3d88775" Jan 31 09:37:12 crc kubenswrapper[4830]: I0131 09:37:12.962619 4830 scope.go:117] "RemoveContainer" containerID="33e9246aab8d9009071735890e4a4e0d3d6ac097623378334c317c3ff4076293" Jan 31 09:37:13 crc kubenswrapper[4830]: I0131 09:37:13.032104 4830 scope.go:117] "RemoveContainer" containerID="f4affda6e7f7c44cfc96fba7823dea3f1025d50b218f98323e79ade3447ba6f9" Jan 31 09:37:13 crc kubenswrapper[4830]: I0131 09:37:13.071299 4830 scope.go:117] "RemoveContainer" containerID="4516c190e9bc838a21d47cc181276a28f31ec0ad1385177a11bce69c5cdfacfa" Jan 31 09:37:13 crc kubenswrapper[4830]: I0131 09:37:13.143408 4830 scope.go:117] "RemoveContainer" containerID="8d18b4c06c2bf2ed5e89d4d11232dd02eb2d60da09b3d362bf9e195fa7b1ee30" Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.312892 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m" Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.353355 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.353450 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.353514 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.354882 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8"} pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.354955 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" containerID="cri-o://5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8" gracePeriod=600 Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.429706 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88f0db8e-690d-4b60-8eb5-473a1ab51029-inventory\") pod \"88f0db8e-690d-4b60-8eb5-473a1ab51029\" (UID: \"88f0db8e-690d-4b60-8eb5-473a1ab51029\") " Jan 31 09:37:14 crc 
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.430149 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/88f0db8e-690d-4b60-8eb5-473a1ab51029-ssh-key-openstack-edpm-ipam\") pod \"88f0db8e-690d-4b60-8eb5-473a1ab51029\" (UID: \"88f0db8e-690d-4b60-8eb5-473a1ab51029\") "
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.442305 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88f0db8e-690d-4b60-8eb5-473a1ab51029-kube-api-access-fjmn6" (OuterVolumeSpecName: "kube-api-access-fjmn6") pod "88f0db8e-690d-4b60-8eb5-473a1ab51029" (UID: "88f0db8e-690d-4b60-8eb5-473a1ab51029"). InnerVolumeSpecName "kube-api-access-fjmn6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.476157 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88f0db8e-690d-4b60-8eb5-473a1ab51029-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "88f0db8e-690d-4b60-8eb5-473a1ab51029" (UID: "88f0db8e-690d-4b60-8eb5-473a1ab51029"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.481887 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88f0db8e-690d-4b60-8eb5-473a1ab51029-inventory" (OuterVolumeSpecName: "inventory") pod "88f0db8e-690d-4b60-8eb5-473a1ab51029" (UID: "88f0db8e-690d-4b60-8eb5-473a1ab51029"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.540097 4830 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88f0db8e-690d-4b60-8eb5-473a1ab51029-inventory\") on node \"crc\" DevicePath \"\""
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.540146 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjmn6\" (UniqueName: \"kubernetes.io/projected/88f0db8e-690d-4b60-8eb5-473a1ab51029-kube-api-access-fjmn6\") on node \"crc\" DevicePath \"\""
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.540161 4830 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/88f0db8e-690d-4b60-8eb5-473a1ab51029-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.613257 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m" event={"ID":"88f0db8e-690d-4b60-8eb5-473a1ab51029","Type":"ContainerDied","Data":"0ca05769d5b12b5fa76574e40d9ee4ce5f575a50b9072e4ce090323ba366445e"}
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.613303 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ca05769d5b12b5fa76574e40d9ee4ce5f575a50b9072e4ce090323ba366445e"
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.613335 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m"
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.617153 4830 generic.go:334] "Generic (PLEG): container finished" podID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8" exitCode=0
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.617190 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerDied","Data":"5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8"}
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.617285 4830 scope.go:117] "RemoveContainer" containerID="1bae58408ac9eb8b3a90089200da7949f59de4442790857fb510d734d497929e"
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.717563 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wcht"]
Jan 31 09:37:14 crc kubenswrapper[4830]: E0131 09:37:14.718446 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e731b9b0-d261-4533-b8c3-92af24d06c58" containerName="extract-content"
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.718470 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e731b9b0-d261-4533-b8c3-92af24d06c58" containerName="extract-content"
Jan 31 09:37:14 crc kubenswrapper[4830]: E0131 09:37:14.718491 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f0db8e-690d-4b60-8eb5-473a1ab51029" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.718501 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="88f0db8e-690d-4b60-8eb5-473a1ab51029" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 31 09:37:14 crc kubenswrapper[4830]: E0131 09:37:14.718518 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e731b9b0-d261-4533-b8c3-92af24d06c58" containerName="extract-utilities"
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.718527 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e731b9b0-d261-4533-b8c3-92af24d06c58" containerName="extract-utilities"
Jan 31 09:37:14 crc kubenswrapper[4830]: E0131 09:37:14.718565 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e731b9b0-d261-4533-b8c3-92af24d06c58" containerName="registry-server"
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.718573 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e731b9b0-d261-4533-b8c3-92af24d06c58" containerName="registry-server"
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.718895 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="e731b9b0-d261-4533-b8c3-92af24d06c58" containerName="registry-server"
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.718950 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="88f0db8e-690d-4b60-8eb5-473a1ab51029" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.720163 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wcht"
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.722947 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.723108 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vd24j"
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.723175 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.723439 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.735786 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wcht"]
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.849992 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbfcb990-512f-4840-b83b-32279cec5a26-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4wcht\" (UID: \"dbfcb990-512f-4840-b83b-32279cec5a26\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wcht"
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.850630 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbfcb990-512f-4840-b83b-32279cec5a26-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4wcht\" (UID: \"dbfcb990-512f-4840-b83b-32279cec5a26\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wcht"
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.850716 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw4cw\" (UniqueName: \"kubernetes.io/projected/dbfcb990-512f-4840-b83b-32279cec5a26-kube-api-access-cw4cw\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4wcht\" (UID: \"dbfcb990-512f-4840-b83b-32279cec5a26\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wcht"
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.953690 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbfcb990-512f-4840-b83b-32279cec5a26-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4wcht\" (UID: \"dbfcb990-512f-4840-b83b-32279cec5a26\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wcht"
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.953778 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw4cw\" (UniqueName: \"kubernetes.io/projected/dbfcb990-512f-4840-b83b-32279cec5a26-kube-api-access-cw4cw\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4wcht\" (UID: \"dbfcb990-512f-4840-b83b-32279cec5a26\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wcht"
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.953912 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbfcb990-512f-4840-b83b-32279cec5a26-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4wcht\" (UID: \"dbfcb990-512f-4840-b83b-32279cec5a26\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wcht"
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.963467 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbfcb990-512f-4840-b83b-32279cec5a26-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4wcht\" (UID: \"dbfcb990-512f-4840-b83b-32279cec5a26\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wcht"
Jan 31 09:37:14 crc kubenswrapper[4830]: I0131 09:37:14.996765 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbfcb990-512f-4840-b83b-32279cec5a26-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4wcht\" (UID: \"dbfcb990-512f-4840-b83b-32279cec5a26\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wcht"
Jan 31 09:37:15 crc kubenswrapper[4830]: I0131 09:37:15.007543 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw4cw\" (UniqueName: \"kubernetes.io/projected/dbfcb990-512f-4840-b83b-32279cec5a26-kube-api-access-cw4cw\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4wcht\" (UID: \"dbfcb990-512f-4840-b83b-32279cec5a26\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wcht"
Jan 31 09:37:15 crc kubenswrapper[4830]: I0131 09:37:15.045495 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wcht"
Jan 31 09:37:15 crc kubenswrapper[4830]: E0131 09:37:15.108110 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:37:15 crc kubenswrapper[4830]: I0131 09:37:15.623741 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wcht"]
Jan 31 09:37:15 crc kubenswrapper[4830]: W0131 09:37:15.630953 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbfcb990_512f_4840_b83b_32279cec5a26.slice/crio-15e4c96d034c85f30b1ad666e275c1cb189cb042477b214ee175575135a3d928 WatchSource:0}: Error finding container 15e4c96d034c85f30b1ad666e275c1cb189cb042477b214ee175575135a3d928: Status 404 returned error can't find the container with id 15e4c96d034c85f30b1ad666e275c1cb189cb042477b214ee175575135a3d928
Jan 31 09:37:15 crc kubenswrapper[4830]: I0131 09:37:15.655601 4830 scope.go:117] "RemoveContainer" containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8"
Jan 31 09:37:15 crc kubenswrapper[4830]: E0131 09:37:15.656072 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:37:16 crc kubenswrapper[4830]: I0131 09:37:16.665089 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wcht" event={"ID":"dbfcb990-512f-4840-b83b-32279cec5a26","Type":"ContainerStarted","Data":"15e4c96d034c85f30b1ad666e275c1cb189cb042477b214ee175575135a3d928"}
Jan 31 09:37:17 crc kubenswrapper[4830]: I0131 09:37:17.678655 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wcht" event={"ID":"dbfcb990-512f-4840-b83b-32279cec5a26","Type":"ContainerStarted","Data":"f7eedc3ea29fe8b96a9f66e7a9698136cf957e585f4f1231bf476e4864fb8ba3"}
Jan 31 09:37:17 crc kubenswrapper[4830]: I0131 09:37:17.712870 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wcht" podStartSLOduration=2.360864553 podStartE2EDuration="3.712843574s" podCreationTimestamp="2026-01-31 09:37:14 +0000 UTC" firstStartedPulling="2026-01-31 09:37:15.635808655 +0000 UTC m=+2180.129171087" lastFinishedPulling="2026-01-31 09:37:16.987787646 +0000 UTC m=+2181.481150108" observedRunningTime="2026-01-31 09:37:17.699938077 +0000 UTC m=+2182.193300539" watchObservedRunningTime="2026-01-31 09:37:17.712843574 +0000 UTC m=+2182.206206016"
Jan 31 09:37:26 crc kubenswrapper[4830]: I0131 09:37:26.056659 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-8ce7-account-create-update-hb68c"]
Jan 31 09:37:26 crc kubenswrapper[4830]: I0131 09:37:26.068030 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-gqmmk"]
Jan 31 09:37:26 crc kubenswrapper[4830]: I0131 09:37:26.080661 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-svqlf"]
Jan 31 09:37:26 crc kubenswrapper[4830]: I0131 09:37:26.090552 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-gqmmk"]
Jan 31 09:37:26 crc kubenswrapper[4830]: I0131 09:37:26.101373 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-svqlf"]
Jan 31 09:37:26 crc kubenswrapper[4830]: I0131 09:37:26.111258 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-8ce7-account-create-update-hb68c"]
Jan 31 09:37:26 crc kubenswrapper[4830]: I0131 09:37:26.288828 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1efaf577-ce46-4e44-a842-1d283d170872" path="/var/lib/kubelet/pods/1efaf577-ce46-4e44-a842-1d283d170872/volumes"
Jan 31 09:37:26 crc kubenswrapper[4830]: I0131 09:37:26.291094 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d6a5be9-79bf-46d1-a45e-999d7bc615c0" path="/var/lib/kubelet/pods/2d6a5be9-79bf-46d1-a45e-999d7bc615c0/volumes"
Jan 31 09:37:26 crc kubenswrapper[4830]: I0131 09:37:26.292280 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="714acb03-29b5-4da1-8f14-9587cabcd207" path="/var/lib/kubelet/pods/714acb03-29b5-4da1-8f14-9587cabcd207/volumes"
Jan 31 09:37:27 crc kubenswrapper[4830]: I0131 09:37:27.036987 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-shj89"]
Jan 31 09:37:27 crc kubenswrapper[4830]: I0131 09:37:27.054247 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-shj89"]
Jan 31 09:37:28 crc kubenswrapper[4830]: I0131 09:37:28.265344 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1533bfd-c9f9-4c8d-9cb2-085f694b1f45" path="/var/lib/kubelet/pods/c1533bfd-c9f9-4c8d-9cb2-085f694b1f45/volumes"
Jan 31 09:37:31 crc kubenswrapper[4830]: I0131 09:37:31.252747 4830 scope.go:117] "RemoveContainer" containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8"
Jan 31 09:37:31 crc kubenswrapper[4830]: E0131 09:37:31.253691 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:37:35 crc kubenswrapper[4830]: I0131 09:37:35.227485 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f49kg"]
Jan 31 09:37:35 crc kubenswrapper[4830]: I0131 09:37:35.230629 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f49kg"
Jan 31 09:37:35 crc kubenswrapper[4830]: I0131 09:37:35.247668 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f49kg"]
Jan 31 09:37:35 crc kubenswrapper[4830]: I0131 09:37:35.304799 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d77cf759-ea9a-4728-9a69-b6bc353b1568-utilities\") pod \"certified-operators-f49kg\" (UID: \"d77cf759-ea9a-4728-9a69-b6bc353b1568\") " pod="openshift-marketplace/certified-operators-f49kg"
Jan 31 09:37:35 crc kubenswrapper[4830]: I0131 09:37:35.304874 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d77cf759-ea9a-4728-9a69-b6bc353b1568-catalog-content\") pod \"certified-operators-f49kg\" (UID: \"d77cf759-ea9a-4728-9a69-b6bc353b1568\") " pod="openshift-marketplace/certified-operators-f49kg"
Jan 31 09:37:35 crc kubenswrapper[4830]: I0131 09:37:35.305145 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6blb\" (UniqueName: \"kubernetes.io/projected/d77cf759-ea9a-4728-9a69-b6bc353b1568-kube-api-access-x6blb\") pod \"certified-operators-f49kg\" (UID: \"d77cf759-ea9a-4728-9a69-b6bc353b1568\") " pod="openshift-marketplace/certified-operators-f49kg"
Jan 31 09:37:35 crc kubenswrapper[4830]: I0131 09:37:35.408210 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d77cf759-ea9a-4728-9a69-b6bc353b1568-utilities\") pod \"certified-operators-f49kg\" (UID: \"d77cf759-ea9a-4728-9a69-b6bc353b1568\") " pod="openshift-marketplace/certified-operators-f49kg"
Jan 31 09:37:35 crc kubenswrapper[4830]: I0131 09:37:35.408294 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d77cf759-ea9a-4728-9a69-b6bc353b1568-catalog-content\") pod \"certified-operators-f49kg\" (UID: \"d77cf759-ea9a-4728-9a69-b6bc353b1568\") " pod="openshift-marketplace/certified-operators-f49kg"
Jan 31 09:37:35 crc kubenswrapper[4830]: I0131 09:37:35.408480 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6blb\" (UniqueName: \"kubernetes.io/projected/d77cf759-ea9a-4728-9a69-b6bc353b1568-kube-api-access-x6blb\") pod \"certified-operators-f49kg\" (UID: \"d77cf759-ea9a-4728-9a69-b6bc353b1568\") " pod="openshift-marketplace/certified-operators-f49kg"
Jan 31 09:37:35 crc kubenswrapper[4830]: I0131 09:37:35.408956 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d77cf759-ea9a-4728-9a69-b6bc353b1568-utilities\") pod \"certified-operators-f49kg\" (UID: \"d77cf759-ea9a-4728-9a69-b6bc353b1568\") " pod="openshift-marketplace/certified-operators-f49kg"
Jan 31 09:37:35 crc kubenswrapper[4830]: I0131 09:37:35.409071 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d77cf759-ea9a-4728-9a69-b6bc353b1568-catalog-content\") pod \"certified-operators-f49kg\" (UID: \"d77cf759-ea9a-4728-9a69-b6bc353b1568\") " pod="openshift-marketplace/certified-operators-f49kg"
Jan 31 09:37:35 crc kubenswrapper[4830]: I0131 09:37:35.428073 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sltlp"]
Jan 31 09:37:35 crc kubenswrapper[4830]: I0131 09:37:35.431698 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sltlp"
Jan 31 09:37:35 crc kubenswrapper[4830]: I0131 09:37:35.444941 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6blb\" (UniqueName: \"kubernetes.io/projected/d77cf759-ea9a-4728-9a69-b6bc353b1568-kube-api-access-x6blb\") pod \"certified-operators-f49kg\" (UID: \"d77cf759-ea9a-4728-9a69-b6bc353b1568\") " pod="openshift-marketplace/certified-operators-f49kg"
Jan 31 09:37:35 crc kubenswrapper[4830]: I0131 09:37:35.453760 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sltlp"]
Jan 31 09:37:35 crc kubenswrapper[4830]: I0131 09:37:35.511586 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/633e616f-83bc-419c-8362-3adc0bc0970c-catalog-content\") pod \"community-operators-sltlp\" (UID: \"633e616f-83bc-419c-8362-3adc0bc0970c\") " pod="openshift-marketplace/community-operators-sltlp"
Jan 31 09:37:35 crc kubenswrapper[4830]: I0131 09:37:35.511941 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g7v2\" (UniqueName: \"kubernetes.io/projected/633e616f-83bc-419c-8362-3adc0bc0970c-kube-api-access-6g7v2\") pod \"community-operators-sltlp\" (UID: \"633e616f-83bc-419c-8362-3adc0bc0970c\") " pod="openshift-marketplace/community-operators-sltlp"
Jan 31 09:37:35 crc kubenswrapper[4830]: I0131 09:37:35.512793 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/633e616f-83bc-419c-8362-3adc0bc0970c-utilities\") pod \"community-operators-sltlp\" (UID: \"633e616f-83bc-419c-8362-3adc0bc0970c\") " pod="openshift-marketplace/community-operators-sltlp"
Jan 31 09:37:35 crc kubenswrapper[4830]: I0131 09:37:35.557302 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f49kg"
Jan 31 09:37:35 crc kubenswrapper[4830]: I0131 09:37:35.616449 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/633e616f-83bc-419c-8362-3adc0bc0970c-utilities\") pod \"community-operators-sltlp\" (UID: \"633e616f-83bc-419c-8362-3adc0bc0970c\") " pod="openshift-marketplace/community-operators-sltlp"
Jan 31 09:37:35 crc kubenswrapper[4830]: I0131 09:37:35.617202 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/633e616f-83bc-419c-8362-3adc0bc0970c-catalog-content\") pod \"community-operators-sltlp\" (UID: \"633e616f-83bc-419c-8362-3adc0bc0970c\") " pod="openshift-marketplace/community-operators-sltlp"
Jan 31 09:37:35 crc kubenswrapper[4830]: I0131 09:37:35.617231 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/633e616f-83bc-419c-8362-3adc0bc0970c-utilities\") pod \"community-operators-sltlp\" (UID: \"633e616f-83bc-419c-8362-3adc0bc0970c\") " pod="openshift-marketplace/community-operators-sltlp"
Jan 31 09:37:35 crc kubenswrapper[4830]: I0131 09:37:35.617345 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g7v2\" (UniqueName: \"kubernetes.io/projected/633e616f-83bc-419c-8362-3adc0bc0970c-kube-api-access-6g7v2\") pod \"community-operators-sltlp\" (UID: \"633e616f-83bc-419c-8362-3adc0bc0970c\") " pod="openshift-marketplace/community-operators-sltlp"
Jan 31 09:37:35 crc kubenswrapper[4830]: I0131 09:37:35.618316 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/633e616f-83bc-419c-8362-3adc0bc0970c-catalog-content\") pod \"community-operators-sltlp\" (UID: \"633e616f-83bc-419c-8362-3adc0bc0970c\") " pod="openshift-marketplace/community-operators-sltlp"
Jan 31 09:37:35 crc kubenswrapper[4830]: I0131 09:37:35.645416 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g7v2\" (UniqueName: \"kubernetes.io/projected/633e616f-83bc-419c-8362-3adc0bc0970c-kube-api-access-6g7v2\") pod \"community-operators-sltlp\" (UID: \"633e616f-83bc-419c-8362-3adc0bc0970c\") " pod="openshift-marketplace/community-operators-sltlp"
Jan 31 09:37:35 crc kubenswrapper[4830]: I0131 09:37:35.830580 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sltlp"
Jan 31 09:37:36 crc kubenswrapper[4830]: I0131 09:37:36.216435 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f49kg"]
Jan 31 09:37:36 crc kubenswrapper[4830]: I0131 09:37:36.636127 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sltlp"]
Jan 31 09:37:36 crc kubenswrapper[4830]: W0131 09:37:36.653328 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod633e616f_83bc_419c_8362_3adc0bc0970c.slice/crio-e98208274f9d4d0a37590cc5ea29ff5e2eff2740b72160386d340fd64209fd25 WatchSource:0}: Error finding container e98208274f9d4d0a37590cc5ea29ff5e2eff2740b72160386d340fd64209fd25: Status 404 returned error can't find the container with id e98208274f9d4d0a37590cc5ea29ff5e2eff2740b72160386d340fd64209fd25
Jan 31 09:37:36 crc kubenswrapper[4830]: I0131 09:37:36.928346 4830 generic.go:334] "Generic (PLEG): container finished" podID="d77cf759-ea9a-4728-9a69-b6bc353b1568" containerID="88742af130405a504ff91a3480f38410a3731b0a17da1895799e40d62ec51076" exitCode=0
Jan 31 09:37:36 crc kubenswrapper[4830]: I0131 09:37:36.928571 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f49kg" event={"ID":"d77cf759-ea9a-4728-9a69-b6bc353b1568","Type":"ContainerDied","Data":"88742af130405a504ff91a3480f38410a3731b0a17da1895799e40d62ec51076"}
Jan 31 09:37:36 crc kubenswrapper[4830]: I0131 09:37:36.928656 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f49kg" event={"ID":"d77cf759-ea9a-4728-9a69-b6bc353b1568","Type":"ContainerStarted","Data":"373b5d18fc5b3d847c59022381a5a777745d5053699e3b5d0495b8739bce15e5"}
Jan 31 09:37:36 crc kubenswrapper[4830]: I0131 09:37:36.931529 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sltlp" event={"ID":"633e616f-83bc-419c-8362-3adc0bc0970c","Type":"ContainerStarted","Data":"16dc5a2d6126932b62ba658cae688bff1b43a7b706d983cdc41b280c279c4ec2"}
Jan 31 09:37:36 crc kubenswrapper[4830]: I0131 09:37:36.931582 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sltlp" event={"ID":"633e616f-83bc-419c-8362-3adc0bc0970c","Type":"ContainerStarted","Data":"e98208274f9d4d0a37590cc5ea29ff5e2eff2740b72160386d340fd64209fd25"}
Jan 31 09:37:37 crc kubenswrapper[4830]: I0131 09:37:37.960960 4830 generic.go:334] "Generic (PLEG): container finished" podID="633e616f-83bc-419c-8362-3adc0bc0970c" containerID="16dc5a2d6126932b62ba658cae688bff1b43a7b706d983cdc41b280c279c4ec2" exitCode=0
Jan 31 09:37:37 crc kubenswrapper[4830]: I0131 09:37:37.961089 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sltlp" event={"ID":"633e616f-83bc-419c-8362-3adc0bc0970c","Type":"ContainerDied","Data":"16dc5a2d6126932b62ba658cae688bff1b43a7b706d983cdc41b280c279c4ec2"}
Jan 31 09:37:37 crc kubenswrapper[4830]: I0131 09:37:37.965752 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f49kg" event={"ID":"d77cf759-ea9a-4728-9a69-b6bc353b1568","Type":"ContainerStarted","Data":"c76e45c2825d44e25fc58904ce9e416694f28c3099e14f646fbfdd23b0a8041f"}
Jan 31 09:37:38 crc kubenswrapper[4830]: I0131 09:37:38.981421 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sltlp" event={"ID":"633e616f-83bc-419c-8362-3adc0bc0970c","Type":"ContainerStarted","Data":"2559aa2051561f726168898ba10775b386b4b0aae72e1dff52f8fbf64954ba83"}
pod="openshift-marketplace/community-operators-sltlp" event={"ID":"633e616f-83bc-419c-8362-3adc0bc0970c","Type":"ContainerStarted","Data":"2559aa2051561f726168898ba10775b386b4b0aae72e1dff52f8fbf64954ba83"} Jan 31 09:37:42 crc kubenswrapper[4830]: I0131 09:37:42.034950 4830 generic.go:334] "Generic (PLEG): container finished" podID="633e616f-83bc-419c-8362-3adc0bc0970c" containerID="2559aa2051561f726168898ba10775b386b4b0aae72e1dff52f8fbf64954ba83" exitCode=0 Jan 31 09:37:42 crc kubenswrapper[4830]: I0131 09:37:42.035036 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sltlp" event={"ID":"633e616f-83bc-419c-8362-3adc0bc0970c","Type":"ContainerDied","Data":"2559aa2051561f726168898ba10775b386b4b0aae72e1dff52f8fbf64954ba83"} Jan 31 09:37:42 crc kubenswrapper[4830]: I0131 09:37:42.041225 4830 generic.go:334] "Generic (PLEG): container finished" podID="d77cf759-ea9a-4728-9a69-b6bc353b1568" containerID="c76e45c2825d44e25fc58904ce9e416694f28c3099e14f646fbfdd23b0a8041f" exitCode=0 Jan 31 09:37:42 crc kubenswrapper[4830]: I0131 09:37:42.041277 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f49kg" event={"ID":"d77cf759-ea9a-4728-9a69-b6bc353b1568","Type":"ContainerDied","Data":"c76e45c2825d44e25fc58904ce9e416694f28c3099e14f646fbfdd23b0a8041f"} Jan 31 09:37:43 crc kubenswrapper[4830]: I0131 09:37:43.057407 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sltlp" event={"ID":"633e616f-83bc-419c-8362-3adc0bc0970c","Type":"ContainerStarted","Data":"fba4151c87e84bbd68338f10d96404dacd92fe40c2f6264df33450fc573fc93f"} Jan 31 09:37:43 crc kubenswrapper[4830]: I0131 09:37:43.064472 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f49kg" event={"ID":"d77cf759-ea9a-4728-9a69-b6bc353b1568","Type":"ContainerStarted","Data":"b8256c808af6edd5459141c9c4fa0470a84f3773a5dfa63cc88a59f0ac3c108d"} Jan 31 09:37:43 crc kubenswrapper[4830]: I0131 09:37:43.080995 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sltlp" podStartSLOduration=2.395025287 podStartE2EDuration="8.080965779s" podCreationTimestamp="2026-01-31 09:37:35 +0000 UTC" firstStartedPulling="2026-01-31 09:37:36.934078515 +0000 UTC m=+2201.427440957" lastFinishedPulling="2026-01-31 09:37:42.620018997 +0000 UTC m=+2207.113381449" observedRunningTime="2026-01-31 09:37:43.079440197 +0000 UTC m=+2207.572802639" watchObservedRunningTime="2026-01-31 09:37:43.080965779 +0000 UTC m=+2207.574328221" Jan 31 09:37:43 crc kubenswrapper[4830]: I0131 09:37:43.123103 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f49kg" podStartSLOduration=2.3870693259999998 podStartE2EDuration="8.123079436s" podCreationTimestamp="2026-01-31 09:37:35 +0000 UTC" firstStartedPulling="2026-01-31 09:37:36.931092822 +0000 UTC m=+2201.424455274" lastFinishedPulling="2026-01-31 09:37:42.667102942 +0000 UTC m=+2207.160465384" observedRunningTime="2026-01-31 09:37:43.106143127 +0000 UTC m=+2207.599505569" watchObservedRunningTime="2026-01-31 09:37:43.123079436 +0000 UTC m=+2207.616441878" Jan 31 09:37:45 crc kubenswrapper[4830]: I0131 09:37:45.252900 4830 scope.go:117] "RemoveContainer" containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8" Jan 31 09:37:45 crc kubenswrapper[4830]: E0131 09:37:45.254067 4830 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:37:45 crc kubenswrapper[4830]: I0131 09:37:45.557714 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f49kg" Jan 31 09:37:45 crc kubenswrapper[4830]: I0131 09:37:45.557814 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f49kg" Jan 31 09:37:45 crc kubenswrapper[4830]: I0131 09:37:45.834648 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sltlp" Jan 31 09:37:45 crc kubenswrapper[4830]: I0131 09:37:45.836087 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sltlp" Jan 31 09:37:46 crc kubenswrapper[4830]: I0131 09:37:46.620465 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-f49kg" podUID="d77cf759-ea9a-4728-9a69-b6bc353b1568" containerName="registry-server" probeResult="failure" output=< Jan 31 09:37:46 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 09:37:46 crc kubenswrapper[4830]: > Jan 31 09:37:46 crc kubenswrapper[4830]: I0131 09:37:46.886638 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-sltlp" podUID="633e616f-83bc-419c-8362-3adc0bc0970c" containerName="registry-server" probeResult="failure" output=< Jan 31 09:37:46 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 09:37:46 crc kubenswrapper[4830]: > Jan 31 09:37:56 crc kubenswrapper[4830]: I0131 09:37:56.649608 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-f49kg" podUID="d77cf759-ea9a-4728-9a69-b6bc353b1568" containerName="registry-server" probeResult="failure" output=< Jan 31 09:37:56 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 09:37:56 crc kubenswrapper[4830]: > Jan 31 09:37:56 crc kubenswrapper[4830]: I0131 09:37:56.890006 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-sltlp" podUID="633e616f-83bc-419c-8362-3adc0bc0970c" containerName="registry-server" probeResult="failure" output=< Jan 31 09:37:56 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 09:37:56 crc kubenswrapper[4830]: > Jan 31 09:38:00 crc kubenswrapper[4830]: I0131 09:38:00.252563 4830 scope.go:117] "RemoveContainer" containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8" Jan 31 09:38:00 crc kubenswrapper[4830]: E0131 09:38:00.253948 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:38:05 crc kubenswrapper[4830]: 
Jan 31 09:38:05 crc kubenswrapper[4830]: I0131 09:38:05.673974 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f49kg"
Jan 31 09:38:05 crc kubenswrapper[4830]: I0131 09:38:05.883736 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sltlp"
Jan 31 09:38:05 crc kubenswrapper[4830]: I0131 09:38:05.938164 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sltlp"
Jan 31 09:38:06 crc kubenswrapper[4830]: I0131 09:38:06.823280 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f49kg"]
Jan 31 09:38:07 crc kubenswrapper[4830]: I0131 09:38:07.354860 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f49kg" podUID="d77cf759-ea9a-4728-9a69-b6bc353b1568" containerName="registry-server" containerID="cri-o://b8256c808af6edd5459141c9c4fa0470a84f3773a5dfa63cc88a59f0ac3c108d" gracePeriod=2
Jan 31 09:38:07 crc kubenswrapper[4830]: I0131 09:38:07.940368 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f49kg"
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.017903 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d77cf759-ea9a-4728-9a69-b6bc353b1568-utilities\") pod \"d77cf759-ea9a-4728-9a69-b6bc353b1568\" (UID: \"d77cf759-ea9a-4728-9a69-b6bc353b1568\") "
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.018184 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d77cf759-ea9a-4728-9a69-b6bc353b1568-catalog-content\") pod \"d77cf759-ea9a-4728-9a69-b6bc353b1568\" (UID: \"d77cf759-ea9a-4728-9a69-b6bc353b1568\") "
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.018289 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6blb\" (UniqueName: \"kubernetes.io/projected/d77cf759-ea9a-4728-9a69-b6bc353b1568-kube-api-access-x6blb\") pod \"d77cf759-ea9a-4728-9a69-b6bc353b1568\" (UID: \"d77cf759-ea9a-4728-9a69-b6bc353b1568\") "
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.019000 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d77cf759-ea9a-4728-9a69-b6bc353b1568-utilities" (OuterVolumeSpecName: "utilities") pod "d77cf759-ea9a-4728-9a69-b6bc353b1568" (UID: "d77cf759-ea9a-4728-9a69-b6bc353b1568"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.019288 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d77cf759-ea9a-4728-9a69-b6bc353b1568-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.024658 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d77cf759-ea9a-4728-9a69-b6bc353b1568-kube-api-access-x6blb" (OuterVolumeSpecName: "kube-api-access-x6blb") pod "d77cf759-ea9a-4728-9a69-b6bc353b1568" (UID: "d77cf759-ea9a-4728-9a69-b6bc353b1568"). InnerVolumeSpecName "kube-api-access-x6blb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.075675 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d77cf759-ea9a-4728-9a69-b6bc353b1568-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d77cf759-ea9a-4728-9a69-b6bc353b1568" (UID: "d77cf759-ea9a-4728-9a69-b6bc353b1568"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.122102 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6blb\" (UniqueName: \"kubernetes.io/projected/d77cf759-ea9a-4728-9a69-b6bc353b1568-kube-api-access-x6blb\") on node \"crc\" DevicePath \"\""
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.122147 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d77cf759-ea9a-4728-9a69-b6bc353b1568-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.227664 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sltlp"]
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.228075 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sltlp" podUID="633e616f-83bc-419c-8362-3adc0bc0970c" containerName="registry-server" containerID="cri-o://fba4151c87e84bbd68338f10d96404dacd92fe40c2f6264df33450fc573fc93f" gracePeriod=2
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.376990 4830 generic.go:334] "Generic (PLEG): container finished" podID="d77cf759-ea9a-4728-9a69-b6bc353b1568" containerID="b8256c808af6edd5459141c9c4fa0470a84f3773a5dfa63cc88a59f0ac3c108d" exitCode=0
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.377133 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f49kg"
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.377528 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f49kg" event={"ID":"d77cf759-ea9a-4728-9a69-b6bc353b1568","Type":"ContainerDied","Data":"b8256c808af6edd5459141c9c4fa0470a84f3773a5dfa63cc88a59f0ac3c108d"}
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.377678 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f49kg" event={"ID":"d77cf759-ea9a-4728-9a69-b6bc353b1568","Type":"ContainerDied","Data":"373b5d18fc5b3d847c59022381a5a777745d5053699e3b5d0495b8739bce15e5"}
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.377819 4830 scope.go:117] "RemoveContainer" containerID="b8256c808af6edd5459141c9c4fa0470a84f3773a5dfa63cc88a59f0ac3c108d"
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.383096 4830 generic.go:334] "Generic (PLEG): container finished" podID="633e616f-83bc-419c-8362-3adc0bc0970c" containerID="fba4151c87e84bbd68338f10d96404dacd92fe40c2f6264df33450fc573fc93f" exitCode=0
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.383188 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sltlp" event={"ID":"633e616f-83bc-419c-8362-3adc0bc0970c","Type":"ContainerDied","Data":"fba4151c87e84bbd68338f10d96404dacd92fe40c2f6264df33450fc573fc93f"}
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.432836 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f49kg"]
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.433039 4830 scope.go:117] "RemoveContainer" containerID="c76e45c2825d44e25fc58904ce9e416694f28c3099e14f646fbfdd23b0a8041f"
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.443232 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f49kg"]
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.499524 4830 scope.go:117] "RemoveContainer" containerID="88742af130405a504ff91a3480f38410a3731b0a17da1895799e40d62ec51076"
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.568918 4830 scope.go:117] "RemoveContainer" containerID="b8256c808af6edd5459141c9c4fa0470a84f3773a5dfa63cc88a59f0ac3c108d"
Jan 31 09:38:08 crc kubenswrapper[4830]: E0131 09:38:08.572865 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8256c808af6edd5459141c9c4fa0470a84f3773a5dfa63cc88a59f0ac3c108d\": container with ID starting with b8256c808af6edd5459141c9c4fa0470a84f3773a5dfa63cc88a59f0ac3c108d not found: ID does not exist" containerID="b8256c808af6edd5459141c9c4fa0470a84f3773a5dfa63cc88a59f0ac3c108d"
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.572926 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8256c808af6edd5459141c9c4fa0470a84f3773a5dfa63cc88a59f0ac3c108d"} err="failed to get container status \"b8256c808af6edd5459141c9c4fa0470a84f3773a5dfa63cc88a59f0ac3c108d\": rpc error: code = NotFound desc = could not find container \"b8256c808af6edd5459141c9c4fa0470a84f3773a5dfa63cc88a59f0ac3c108d\": container with ID starting with b8256c808af6edd5459141c9c4fa0470a84f3773a5dfa63cc88a59f0ac3c108d not found: ID does not exist"
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.572961 4830 scope.go:117] "RemoveContainer" containerID="c76e45c2825d44e25fc58904ce9e416694f28c3099e14f646fbfdd23b0a8041f"
Jan 31 09:38:08 crc kubenswrapper[4830]: E0131 09:38:08.573938 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c76e45c2825d44e25fc58904ce9e416694f28c3099e14f646fbfdd23b0a8041f\": container with ID starting with c76e45c2825d44e25fc58904ce9e416694f28c3099e14f646fbfdd23b0a8041f not found: ID does not exist" containerID="c76e45c2825d44e25fc58904ce9e416694f28c3099e14f646fbfdd23b0a8041f"
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.573996 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c76e45c2825d44e25fc58904ce9e416694f28c3099e14f646fbfdd23b0a8041f"} err="failed to get container status \"c76e45c2825d44e25fc58904ce9e416694f28c3099e14f646fbfdd23b0a8041f\": rpc error: code = NotFound desc = could not find container \"c76e45c2825d44e25fc58904ce9e416694f28c3099e14f646fbfdd23b0a8041f\": container with ID starting with c76e45c2825d44e25fc58904ce9e416694f28c3099e14f646fbfdd23b0a8041f not found: ID does not exist"
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.574029 4830 scope.go:117] "RemoveContainer" containerID="88742af130405a504ff91a3480f38410a3731b0a17da1895799e40d62ec51076"
Jan 31 09:38:08 crc kubenswrapper[4830]: E0131 09:38:08.577137 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88742af130405a504ff91a3480f38410a3731b0a17da1895799e40d62ec51076\": container with ID starting with 88742af130405a504ff91a3480f38410a3731b0a17da1895799e40d62ec51076 not found: ID does not exist" containerID="88742af130405a504ff91a3480f38410a3731b0a17da1895799e40d62ec51076"
Jan 31 09:38:08 crc kubenswrapper[4830]: I0131 09:38:08.577215 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88742af130405a504ff91a3480f38410a3731b0a17da1895799e40d62ec51076"} err="failed to get container status \"88742af130405a504ff91a3480f38410a3731b0a17da1895799e40d62ec51076\": rpc error: code = NotFound desc = could not find container \"88742af130405a504ff91a3480f38410a3731b0a17da1895799e40d62ec51076\": container with ID starting with 88742af130405a504ff91a3480f38410a3731b0a17da1895799e40d62ec51076 not found: ID does not exist"
Jan 31 09:38:09 crc kubenswrapper[4830]: I0131 09:38:09.391639 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sltlp"
Jan 31 09:38:09 crc kubenswrapper[4830]: I0131 09:38:09.404766 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sltlp"
Jan 31 09:38:09 crc kubenswrapper[4830]: I0131 09:38:09.404716 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sltlp" event={"ID":"633e616f-83bc-419c-8362-3adc0bc0970c","Type":"ContainerDied","Data":"e98208274f9d4d0a37590cc5ea29ff5e2eff2740b72160386d340fd64209fd25"}
Jan 31 09:38:09 crc kubenswrapper[4830]: I0131 09:38:09.404903 4830 scope.go:117] "RemoveContainer" containerID="fba4151c87e84bbd68338f10d96404dacd92fe40c2f6264df33450fc573fc93f"
Jan 31 09:38:09 crc kubenswrapper[4830]: I0131 09:38:09.444595 4830 scope.go:117] "RemoveContainer" containerID="2559aa2051561f726168898ba10775b386b4b0aae72e1dff52f8fbf64954ba83"
Jan 31 09:38:09 crc kubenswrapper[4830]: I0131 09:38:09.476028 4830 scope.go:117] "RemoveContainer" containerID="16dc5a2d6126932b62ba658cae688bff1b43a7b706d983cdc41b280c279c4ec2"
Jan 31 09:38:09 crc kubenswrapper[4830]: I0131 09:38:09.552629 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g7v2\" (UniqueName: \"kubernetes.io/projected/633e616f-83bc-419c-8362-3adc0bc0970c-kube-api-access-6g7v2\") pod \"633e616f-83bc-419c-8362-3adc0bc0970c\" (UID: \"633e616f-83bc-419c-8362-3adc0bc0970c\") "
Jan 31 09:38:09 crc kubenswrapper[4830]: I0131 09:38:09.552787 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/633e616f-83bc-419c-8362-3adc0bc0970c-utilities\") pod \"633e616f-83bc-419c-8362-3adc0bc0970c\" (UID: \"633e616f-83bc-419c-8362-3adc0bc0970c\") "
Jan 31 09:38:09 crc kubenswrapper[4830]: I0131 09:38:09.552874 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/633e616f-83bc-419c-8362-3adc0bc0970c-catalog-content\") pod \"633e616f-83bc-419c-8362-3adc0bc0970c\" (UID: \"633e616f-83bc-419c-8362-3adc0bc0970c\") "
Jan 31 09:38:09 crc kubenswrapper[4830]: I0131 09:38:09.553836 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/633e616f-83bc-419c-8362-3adc0bc0970c-utilities" (OuterVolumeSpecName: "utilities") pod "633e616f-83bc-419c-8362-3adc0bc0970c" (UID: "633e616f-83bc-419c-8362-3adc0bc0970c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:38:09 crc kubenswrapper[4830]: I0131 09:38:09.560149 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/633e616f-83bc-419c-8362-3adc0bc0970c-kube-api-access-6g7v2" (OuterVolumeSpecName: "kube-api-access-6g7v2") pod "633e616f-83bc-419c-8362-3adc0bc0970c" (UID: "633e616f-83bc-419c-8362-3adc0bc0970c"). InnerVolumeSpecName "kube-api-access-6g7v2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:38:09 crc kubenswrapper[4830]: I0131 09:38:09.616755 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/633e616f-83bc-419c-8362-3adc0bc0970c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "633e616f-83bc-419c-8362-3adc0bc0970c" (UID: "633e616f-83bc-419c-8362-3adc0bc0970c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:38:09 crc kubenswrapper[4830]: I0131 09:38:09.657301 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g7v2\" (UniqueName: \"kubernetes.io/projected/633e616f-83bc-419c-8362-3adc0bc0970c-kube-api-access-6g7v2\") on node \"crc\" DevicePath \"\""
Jan 31 09:38:09 crc kubenswrapper[4830]: I0131 09:38:09.657343 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/633e616f-83bc-419c-8362-3adc0bc0970c-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 09:38:09 crc kubenswrapper[4830]: I0131 09:38:09.657355 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/633e616f-83bc-419c-8362-3adc0bc0970c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 09:38:09 crc kubenswrapper[4830]: I0131 09:38:09.744180 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sltlp"]
Jan 31 09:38:09 crc kubenswrapper[4830]: I0131 09:38:09.754104 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sltlp"]
Jan 31 09:38:10 crc kubenswrapper[4830]: I0131 09:38:10.265029 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="633e616f-83bc-419c-8362-3adc0bc0970c" path="/var/lib/kubelet/pods/633e616f-83bc-419c-8362-3adc0bc0970c/volumes"
Jan 31 09:38:10 crc kubenswrapper[4830]: I0131 09:38:10.265711 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d77cf759-ea9a-4728-9a69-b6bc353b1568" path="/var/lib/kubelet/pods/d77cf759-ea9a-4728-9a69-b6bc353b1568/volumes"
Jan 31 09:38:13 crc kubenswrapper[4830]: I0131 09:38:13.336915 4830 scope.go:117] "RemoveContainer" containerID="96b58b789b29643ae16bc92e02e2044a4014e4f976d66272b70a7302d5729868"
Jan 31 09:38:13 crc kubenswrapper[4830]: I0131 09:38:13.361166 4830 scope.go:117] "RemoveContainer" containerID="b107efaccf2b98b8561f3fc2480bbd19ae235859bc35b0e5ac1cbd07d9dadcb3"
Jan 31 09:38:13 crc kubenswrapper[4830]: I0131 09:38:13.435484 4830 scope.go:117] "RemoveContainer" containerID="f0b45e6d646475b426502e17ae647c2de02a19d7fbebfbe7cacdbfcc6685fbf5"
Jan 31 09:38:13 crc kubenswrapper[4830]: I0131 09:38:13.506097 4830 scope.go:117] "RemoveContainer" containerID="75c6e469317753e2f9b505e7dd6253dbbfdea1e9bd69ef61b34bc05ceb7c1481"
Jan 31 09:38:14 crc kubenswrapper[4830]: I0131 09:38:14.252002 4830 scope.go:117] "RemoveContainer" containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8"
Jan 31 09:38:14 crc kubenswrapper[4830]: E0131 09:38:14.252826 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:38:16 crc kubenswrapper[4830]: I0131 09:38:16.082870 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-7ft4j"]
Jan 31 09:38:16 crc kubenswrapper[4830]: I0131 09:38:16.105921 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-7ft4j"]
Jan 31 09:38:16 crc kubenswrapper[4830]: I0131 09:38:16.273669 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5adada53-61e1-406d-b9ac-0c004999b351" path="/var/lib/kubelet/pods/5adada53-61e1-406d-b9ac-0c004999b351/volumes"
dir" podUID="5adada53-61e1-406d-b9ac-0c004999b351" path="/var/lib/kubelet/pods/5adada53-61e1-406d-b9ac-0c004999b351/volumes" Jan 31 09:38:20 crc kubenswrapper[4830]: I0131 09:38:20.548247 4830 generic.go:334] "Generic (PLEG): container finished" podID="dbfcb990-512f-4840-b83b-32279cec5a26" containerID="f7eedc3ea29fe8b96a9f66e7a9698136cf957e585f4f1231bf476e4864fb8ba3" exitCode=0 Jan 31 09:38:20 crc kubenswrapper[4830]: I0131 09:38:20.548325 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wcht" event={"ID":"dbfcb990-512f-4840-b83b-32279cec5a26","Type":"ContainerDied","Data":"f7eedc3ea29fe8b96a9f66e7a9698136cf957e585f4f1231bf476e4864fb8ba3"} Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.113420 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wcht" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.226076 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbfcb990-512f-4840-b83b-32279cec5a26-inventory\") pod \"dbfcb990-512f-4840-b83b-32279cec5a26\" (UID: \"dbfcb990-512f-4840-b83b-32279cec5a26\") " Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.226147 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw4cw\" (UniqueName: \"kubernetes.io/projected/dbfcb990-512f-4840-b83b-32279cec5a26-kube-api-access-cw4cw\") pod \"dbfcb990-512f-4840-b83b-32279cec5a26\" (UID: \"dbfcb990-512f-4840-b83b-32279cec5a26\") " Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.226380 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbfcb990-512f-4840-b83b-32279cec5a26-ssh-key-openstack-edpm-ipam\") pod \"dbfcb990-512f-4840-b83b-32279cec5a26\" (UID: \"dbfcb990-512f-4840-b83b-32279cec5a26\") " Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.232358 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbfcb990-512f-4840-b83b-32279cec5a26-kube-api-access-cw4cw" (OuterVolumeSpecName: "kube-api-access-cw4cw") pod "dbfcb990-512f-4840-b83b-32279cec5a26" (UID: "dbfcb990-512f-4840-b83b-32279cec5a26"). InnerVolumeSpecName "kube-api-access-cw4cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.266648 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbfcb990-512f-4840-b83b-32279cec5a26-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dbfcb990-512f-4840-b83b-32279cec5a26" (UID: "dbfcb990-512f-4840-b83b-32279cec5a26"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.275116 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbfcb990-512f-4840-b83b-32279cec5a26-inventory" (OuterVolumeSpecName: "inventory") pod "dbfcb990-512f-4840-b83b-32279cec5a26" (UID: "dbfcb990-512f-4840-b83b-32279cec5a26"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.329717 4830 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbfcb990-512f-4840-b83b-32279cec5a26-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.329782 4830 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbfcb990-512f-4840-b83b-32279cec5a26-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.329796 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cw4cw\" (UniqueName: \"kubernetes.io/projected/dbfcb990-512f-4840-b83b-32279cec5a26-kube-api-access-cw4cw\") on node \"crc\" DevicePath \"\"" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.571262 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wcht" event={"ID":"dbfcb990-512f-4840-b83b-32279cec5a26","Type":"ContainerDied","Data":"15e4c96d034c85f30b1ad666e275c1cb189cb042477b214ee175575135a3d928"} Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.571320 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15e4c96d034c85f30b1ad666e275c1cb189cb042477b214ee175575135a3d928" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.571391 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wcht" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.704485 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl"] Jan 31 09:38:22 crc kubenswrapper[4830]: E0131 09:38:22.705492 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="633e616f-83bc-419c-8362-3adc0bc0970c" containerName="extract-content" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.705517 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="633e616f-83bc-419c-8362-3adc0bc0970c" containerName="extract-content" Jan 31 09:38:22 crc kubenswrapper[4830]: E0131 09:38:22.705540 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d77cf759-ea9a-4728-9a69-b6bc353b1568" containerName="extract-content" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.705548 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d77cf759-ea9a-4728-9a69-b6bc353b1568" containerName="extract-content" Jan 31 09:38:22 crc kubenswrapper[4830]: E0131 09:38:22.705570 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="633e616f-83bc-419c-8362-3adc0bc0970c" containerName="extract-utilities" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.705580 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="633e616f-83bc-419c-8362-3adc0bc0970c" containerName="extract-utilities" Jan 31 09:38:22 crc kubenswrapper[4830]: E0131 09:38:22.705612 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d77cf759-ea9a-4728-9a69-b6bc353b1568" containerName="registry-server" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.705619 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d77cf759-ea9a-4728-9a69-b6bc353b1568" containerName="registry-server" Jan 31 09:38:22 crc kubenswrapper[4830]: E0131 09:38:22.705638 4830 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="d77cf759-ea9a-4728-9a69-b6bc353b1568" containerName="extract-utilities" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.705646 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d77cf759-ea9a-4728-9a69-b6bc353b1568" containerName="extract-utilities" Jan 31 09:38:22 crc kubenswrapper[4830]: E0131 09:38:22.705654 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="633e616f-83bc-419c-8362-3adc0bc0970c" containerName="registry-server" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.705660 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="633e616f-83bc-419c-8362-3adc0bc0970c" containerName="registry-server" Jan 31 09:38:22 crc kubenswrapper[4830]: E0131 09:38:22.705698 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbfcb990-512f-4840-b83b-32279cec5a26" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.705707 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbfcb990-512f-4840-b83b-32279cec5a26" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.706020 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbfcb990-512f-4840-b83b-32279cec5a26" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.706065 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="633e616f-83bc-419c-8362-3adc0bc0970c" containerName="registry-server" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.706086 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="d77cf759-ea9a-4728-9a69-b6bc353b1568" containerName="registry-server" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.707326 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.710561 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.710895 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.711316 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.724953 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vd24j" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.724991 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl"] Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.841487 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/402466f0-5362-40ba-830b-698e51883c01-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl\" (UID: \"402466f0-5362-40ba-830b-698e51883c01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.841590 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/402466f0-5362-40ba-830b-698e51883c01-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl\" (UID: \"402466f0-5362-40ba-830b-698e51883c01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.841669 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9m8r\" (UniqueName: \"kubernetes.io/projected/402466f0-5362-40ba-830b-698e51883c01-kube-api-access-b9m8r\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl\" (UID: \"402466f0-5362-40ba-830b-698e51883c01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.944128 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9m8r\" (UniqueName: \"kubernetes.io/projected/402466f0-5362-40ba-830b-698e51883c01-kube-api-access-b9m8r\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl\" (UID: \"402466f0-5362-40ba-830b-698e51883c01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.944777 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/402466f0-5362-40ba-830b-698e51883c01-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl\" (UID: \"402466f0-5362-40ba-830b-698e51883c01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.945004 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/402466f0-5362-40ba-830b-698e51883c01-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl\" (UID: \"402466f0-5362-40ba-830b-698e51883c01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.949922 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/402466f0-5362-40ba-830b-698e51883c01-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl\" (UID: \"402466f0-5362-40ba-830b-698e51883c01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.952600 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/402466f0-5362-40ba-830b-698e51883c01-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl\" (UID: \"402466f0-5362-40ba-830b-698e51883c01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl" Jan 31 09:38:22 crc kubenswrapper[4830]: I0131 09:38:22.976344 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9m8r\" (UniqueName: \"kubernetes.io/projected/402466f0-5362-40ba-830b-698e51883c01-kube-api-access-b9m8r\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl\" (UID: \"402466f0-5362-40ba-830b-698e51883c01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl" Jan 31 09:38:23 crc kubenswrapper[4830]: I0131 09:38:23.063142 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl" Jan 31 09:38:23 crc kubenswrapper[4830]: I0131 09:38:23.749410 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl"] Jan 31 09:38:24 crc kubenswrapper[4830]: I0131 09:38:24.598049 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl" event={"ID":"402466f0-5362-40ba-830b-698e51883c01","Type":"ContainerStarted","Data":"07bf8b99f0d5e8c44de9ed9c2064ca51f18fc29a9d2c5fea182dddd6a316c015"} Jan 31 09:38:24 crc kubenswrapper[4830]: I0131 09:38:24.598360 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl" event={"ID":"402466f0-5362-40ba-830b-698e51883c01","Type":"ContainerStarted","Data":"9e8c7f50d32ae57c771989698df84fbc171cebdbc8826d1fe931aa4ad9640ad6"} Jan 31 09:38:24 crc kubenswrapper[4830]: I0131 09:38:24.634325 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl" podStartSLOduration=2.184268686 podStartE2EDuration="2.634294165s" podCreationTimestamp="2026-01-31 09:38:22 +0000 UTC" firstStartedPulling="2026-01-31 09:38:23.752439411 +0000 UTC m=+2248.245801863" lastFinishedPulling="2026-01-31 09:38:24.2024649 +0000 UTC m=+2248.695827342" observedRunningTime="2026-01-31 09:38:24.615398032 +0000 UTC m=+2249.108760474" watchObservedRunningTime="2026-01-31 09:38:24.634294165 +0000 UTC m=+2249.127656617" Jan 31 09:38:28 crc kubenswrapper[4830]: I0131 09:38:28.253281 4830 scope.go:117] "RemoveContainer" containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8" Jan 31 
Jan 31 09:38:28 crc kubenswrapper[4830]: I0131 09:38:28.253281 4830 scope.go:117] "RemoveContainer" containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8"
Jan 31 09:38:28 crc kubenswrapper[4830]: E0131 09:38:28.254176 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:38:29 crc kubenswrapper[4830]: I0131 09:38:29.651087 4830 generic.go:334] "Generic (PLEG): container finished" podID="402466f0-5362-40ba-830b-698e51883c01" containerID="07bf8b99f0d5e8c44de9ed9c2064ca51f18fc29a9d2c5fea182dddd6a316c015" exitCode=0
Jan 31 09:38:29 crc kubenswrapper[4830]: I0131 09:38:29.651154 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl" event={"ID":"402466f0-5362-40ba-830b-698e51883c01","Type":"ContainerDied","Data":"07bf8b99f0d5e8c44de9ed9c2064ca51f18fc29a9d2c5fea182dddd6a316c015"}
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.248413 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl"
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.274321 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/402466f0-5362-40ba-830b-698e51883c01-inventory\") pod \"402466f0-5362-40ba-830b-698e51883c01\" (UID: \"402466f0-5362-40ba-830b-698e51883c01\") "
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.274659 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9m8r\" (UniqueName: \"kubernetes.io/projected/402466f0-5362-40ba-830b-698e51883c01-kube-api-access-b9m8r\") pod \"402466f0-5362-40ba-830b-698e51883c01\" (UID: \"402466f0-5362-40ba-830b-698e51883c01\") "
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.274763 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/402466f0-5362-40ba-830b-698e51883c01-ssh-key-openstack-edpm-ipam\") pod \"402466f0-5362-40ba-830b-698e51883c01\" (UID: \"402466f0-5362-40ba-830b-698e51883c01\") "
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.291142 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/402466f0-5362-40ba-830b-698e51883c01-kube-api-access-b9m8r" (OuterVolumeSpecName: "kube-api-access-b9m8r") pod "402466f0-5362-40ba-830b-698e51883c01" (UID: "402466f0-5362-40ba-830b-698e51883c01"). InnerVolumeSpecName "kube-api-access-b9m8r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.328966 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/402466f0-5362-40ba-830b-698e51883c01-inventory" (OuterVolumeSpecName: "inventory") pod "402466f0-5362-40ba-830b-698e51883c01" (UID: "402466f0-5362-40ba-830b-698e51883c01"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.341282 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/402466f0-5362-40ba-830b-698e51883c01-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "402466f0-5362-40ba-830b-698e51883c01" (UID: "402466f0-5362-40ba-830b-698e51883c01"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.380356 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9m8r\" (UniqueName: \"kubernetes.io/projected/402466f0-5362-40ba-830b-698e51883c01-kube-api-access-b9m8r\") on node \"crc\" DevicePath \"\""
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.380449 4830 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/402466f0-5362-40ba-830b-698e51883c01-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.380462 4830 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/402466f0-5362-40ba-830b-698e51883c01-inventory\") on node \"crc\" DevicePath \"\""
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.675950 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl" event={"ID":"402466f0-5362-40ba-830b-698e51883c01","Type":"ContainerDied","Data":"9e8c7f50d32ae57c771989698df84fbc171cebdbc8826d1fe931aa4ad9640ad6"}
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.676013 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e8c7f50d32ae57c771989698df84fbc171cebdbc8826d1fe931aa4ad9640ad6"
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.676123 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl"
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.763538 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxplq"]
Jan 31 09:38:31 crc kubenswrapper[4830]: E0131 09:38:31.764560 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402466f0-5362-40ba-830b-698e51883c01" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.764591 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="402466f0-5362-40ba-830b-698e51883c01" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.765099 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="402466f0-5362-40ba-830b-698e51883c01" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.766180 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxplq"
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.772054 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.772287 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.772498 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.772714 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vd24j"
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.782885 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxplq"]
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.791191 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52cc6156-9fe9-433a-a363-8aa0197a9bac-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vxplq\" (UID: \"52cc6156-9fe9-433a-a363-8aa0197a9bac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxplq"
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.791297 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6rwj\" (UniqueName: \"kubernetes.io/projected/52cc6156-9fe9-433a-a363-8aa0197a9bac-kube-api-access-c6rwj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vxplq\" (UID: \"52cc6156-9fe9-433a-a363-8aa0197a9bac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxplq"
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.791436 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/52cc6156-9fe9-433a-a363-8aa0197a9bac-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vxplq\" (UID: \"52cc6156-9fe9-433a-a363-8aa0197a9bac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxplq"
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.894478 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/52cc6156-9fe9-433a-a363-8aa0197a9bac-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vxplq\" (UID: \"52cc6156-9fe9-433a-a363-8aa0197a9bac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxplq"
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.894803 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52cc6156-9fe9-433a-a363-8aa0197a9bac-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vxplq\" (UID: \"52cc6156-9fe9-433a-a363-8aa0197a9bac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxplq"
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.894859 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6rwj\" (UniqueName: \"kubernetes.io/projected/52cc6156-9fe9-433a-a363-8aa0197a9bac-kube-api-access-c6rwj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vxplq\" (UID: \"52cc6156-9fe9-433a-a363-8aa0197a9bac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxplq"
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.900204 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/52cc6156-9fe9-433a-a363-8aa0197a9bac-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vxplq\" (UID: \"52cc6156-9fe9-433a-a363-8aa0197a9bac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxplq"
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.900640 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52cc6156-9fe9-433a-a363-8aa0197a9bac-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vxplq\" (UID: \"52cc6156-9fe9-433a-a363-8aa0197a9bac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxplq"
Jan 31 09:38:31 crc kubenswrapper[4830]: I0131 09:38:31.914457 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6rwj\" (UniqueName: \"kubernetes.io/projected/52cc6156-9fe9-433a-a363-8aa0197a9bac-kube-api-access-c6rwj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vxplq\" (UID: \"52cc6156-9fe9-433a-a363-8aa0197a9bac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxplq"
Jan 31 09:38:32 crc kubenswrapper[4830]: I0131 09:38:32.097590 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxplq"
Jan 31 09:38:32 crc kubenswrapper[4830]: I0131 09:38:32.710943 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxplq"]
Jan 31 09:38:33 crc kubenswrapper[4830]: I0131 09:38:33.706112 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxplq" event={"ID":"52cc6156-9fe9-433a-a363-8aa0197a9bac","Type":"ContainerStarted","Data":"9f24993272bfe2a90b1ce88bdb65e7ef0ba29bd42bdcae797a78d1819dce8803"}
Jan 31 09:38:33 crc kubenswrapper[4830]: I0131 09:38:33.707274 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxplq" event={"ID":"52cc6156-9fe9-433a-a363-8aa0197a9bac","Type":"ContainerStarted","Data":"bfe7bc2ad17b2e5655cac9acf5a25eb4754274fa5d1195b9eb6c29b777036032"}
Jan 31 09:38:33 crc kubenswrapper[4830]: I0131 09:38:33.742805 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxplq" podStartSLOduration=2.290926838 podStartE2EDuration="2.742779778s" podCreationTimestamp="2026-01-31 09:38:31 +0000 UTC" firstStartedPulling="2026-01-31 09:38:32.707448241 +0000 UTC m=+2257.200810673" lastFinishedPulling="2026-01-31 09:38:33.159301171 +0000 UTC m=+2257.652663613" observedRunningTime="2026-01-31 09:38:33.732203445 +0000 UTC m=+2258.225565887" watchObservedRunningTime="2026-01-31 09:38:33.742779778 +0000 UTC m=+2258.236142220"
Jan 31 09:38:38 crc kubenswrapper[4830]: I0131 09:38:38.341200 4830 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","podd77cf759-ea9a-4728-9a69-b6bc353b1568"] err="unable to destroy cgroup paths for cgroup [kubepods burstable podd77cf759-ea9a-4728-9a69-b6bc353b1568] : Timed out while waiting for systemd to remove kubepods-burstable-podd77cf759_ea9a_4728_9a69_b6bc353b1568.slice"
Jan 31 09:38:43 crc kubenswrapper[4830]: I0131 09:38:43.251537 4830 scope.go:117] "RemoveContainer" containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8"
Jan 31 09:38:43 crc kubenswrapper[4830]: E0131 09:38:43.252474 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:38:54 crc kubenswrapper[4830]: I0131 09:38:54.252604 4830 scope.go:117] "RemoveContainer" containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8"
Jan 31 09:38:54 crc kubenswrapper[4830]: E0131 09:38:54.253740 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:39:09 crc kubenswrapper[4830]: I0131 09:39:09.252322 4830 scope.go:117] "RemoveContainer" containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8"
Jan 31 09:39:09 crc kubenswrapper[4830]: E0131 09:39:09.253614 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:39:11 crc kubenswrapper[4830]: I0131 09:39:11.143066 4830 generic.go:334] "Generic (PLEG): container finished" podID="52cc6156-9fe9-433a-a363-8aa0197a9bac" containerID="9f24993272bfe2a90b1ce88bdb65e7ef0ba29bd42bdcae797a78d1819dce8803" exitCode=0
Jan 31 09:39:11 crc kubenswrapper[4830]: I0131 09:39:11.143415 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxplq" event={"ID":"52cc6156-9fe9-433a-a363-8aa0197a9bac","Type":"ContainerDied","Data":"9f24993272bfe2a90b1ce88bdb65e7ef0ba29bd42bdcae797a78d1819dce8803"}
Jan 31 09:39:13 crc kubenswrapper[4830]: I0131 09:39:13.168639 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxplq" event={"ID":"52cc6156-9fe9-433a-a363-8aa0197a9bac","Type":"ContainerDied","Data":"bfe7bc2ad17b2e5655cac9acf5a25eb4754274fa5d1195b9eb6c29b777036032"}
Jan 31 09:39:13 crc kubenswrapper[4830]: I0131 09:39:13.169377 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfe7bc2ad17b2e5655cac9acf5a25eb4754274fa5d1195b9eb6c29b777036032"
Jan 31 09:39:13 crc kubenswrapper[4830]: I0131 09:39:13.260070 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxplq"
Jan 31 09:39:13 crc kubenswrapper[4830]: I0131 09:39:13.437689 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/52cc6156-9fe9-433a-a363-8aa0197a9bac-ssh-key-openstack-edpm-ipam\") pod \"52cc6156-9fe9-433a-a363-8aa0197a9bac\" (UID: \"52cc6156-9fe9-433a-a363-8aa0197a9bac\") "
Jan 31 09:39:13 crc kubenswrapper[4830]: I0131 09:39:13.437798 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52cc6156-9fe9-433a-a363-8aa0197a9bac-inventory\") pod \"52cc6156-9fe9-433a-a363-8aa0197a9bac\" (UID: \"52cc6156-9fe9-433a-a363-8aa0197a9bac\") "
Jan 31 09:39:13 crc kubenswrapper[4830]: I0131 09:39:13.437916 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6rwj\" (UniqueName: \"kubernetes.io/projected/52cc6156-9fe9-433a-a363-8aa0197a9bac-kube-api-access-c6rwj\") pod \"52cc6156-9fe9-433a-a363-8aa0197a9bac\" (UID: \"52cc6156-9fe9-433a-a363-8aa0197a9bac\") "
Jan 31 09:39:13 crc kubenswrapper[4830]: I0131 09:39:13.443647 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52cc6156-9fe9-433a-a363-8aa0197a9bac-kube-api-access-c6rwj" (OuterVolumeSpecName: "kube-api-access-c6rwj") pod "52cc6156-9fe9-433a-a363-8aa0197a9bac" (UID: "52cc6156-9fe9-433a-a363-8aa0197a9bac"). InnerVolumeSpecName "kube-api-access-c6rwj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:39:13 crc kubenswrapper[4830]: I0131 09:39:13.472512 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52cc6156-9fe9-433a-a363-8aa0197a9bac-inventory" (OuterVolumeSpecName: "inventory") pod "52cc6156-9fe9-433a-a363-8aa0197a9bac" (UID: "52cc6156-9fe9-433a-a363-8aa0197a9bac"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:39:13 crc kubenswrapper[4830]: I0131 09:39:13.474528 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52cc6156-9fe9-433a-a363-8aa0197a9bac-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "52cc6156-9fe9-433a-a363-8aa0197a9bac" (UID: "52cc6156-9fe9-433a-a363-8aa0197a9bac"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:39:13 crc kubenswrapper[4830]: I0131 09:39:13.542118 4830 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/52cc6156-9fe9-433a-a363-8aa0197a9bac-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 31 09:39:13 crc kubenswrapper[4830]: I0131 09:39:13.542162 4830 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52cc6156-9fe9-433a-a363-8aa0197a9bac-inventory\") on node \"crc\" DevicePath \"\""
Jan 31 09:39:13 crc kubenswrapper[4830]: I0131 09:39:13.542175 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6rwj\" (UniqueName: \"kubernetes.io/projected/52cc6156-9fe9-433a-a363-8aa0197a9bac-kube-api-access-c6rwj\") on node \"crc\" DevicePath \"\""
Jan 31 09:39:13 crc kubenswrapper[4830]: I0131 09:39:13.703493 4830 scope.go:117] "RemoveContainer" containerID="7641337d358c0b661af22faa289cf47753f5bce205b9d5934eb013a464e16db3"
Jan 31 09:39:14 crc kubenswrapper[4830]: I0131 09:39:14.179053 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxplq"
Jan 31 09:39:14 crc kubenswrapper[4830]: I0131 09:39:14.441793 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9"]
Jan 31 09:39:14 crc kubenswrapper[4830]: E0131 09:39:14.442490 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52cc6156-9fe9-433a-a363-8aa0197a9bac" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 31 09:39:14 crc kubenswrapper[4830]: I0131 09:39:14.442521 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="52cc6156-9fe9-433a-a363-8aa0197a9bac" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 31 09:39:14 crc kubenswrapper[4830]: I0131 09:39:14.442916 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="52cc6156-9fe9-433a-a363-8aa0197a9bac" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 31 09:39:14 crc kubenswrapper[4830]: I0131 09:39:14.444054 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9"
Jan 31 09:39:14 crc kubenswrapper[4830]: I0131 09:39:14.449274 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 31 09:39:14 crc kubenswrapper[4830]: I0131 09:39:14.449490 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 31 09:39:14 crc kubenswrapper[4830]: I0131 09:39:14.449608 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 31 09:39:14 crc kubenswrapper[4830]: I0131 09:39:14.461785 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vd24j"
Jan 31 09:39:14 crc kubenswrapper[4830]: I0131 09:39:14.471292 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9"]
Jan 31 09:39:14 crc kubenswrapper[4830]: I0131 09:39:14.579469 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93ba1174-bbf6-485c-bd6a-5f44b9f96116-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9\" (UID: \"93ba1174-bbf6-485c-bd6a-5f44b9f96116\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9"
Jan 31 09:39:14 crc kubenswrapper[4830]: I0131 09:39:14.579674 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7x2q\" (UniqueName: \"kubernetes.io/projected/93ba1174-bbf6-485c-bd6a-5f44b9f96116-kube-api-access-l7x2q\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9\" (UID: \"93ba1174-bbf6-485c-bd6a-5f44b9f96116\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9"
Jan 31 09:39:14 crc kubenswrapper[4830]: I0131 09:39:14.579758 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93ba1174-bbf6-485c-bd6a-5f44b9f96116-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9\" (UID: \"93ba1174-bbf6-485c-bd6a-5f44b9f96116\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9"
Jan 31 09:39:14 crc kubenswrapper[4830]: I0131 09:39:14.684315 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93ba1174-bbf6-485c-bd6a-5f44b9f96116-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9\" (UID: \"93ba1174-bbf6-485c-bd6a-5f44b9f96116\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9"
Jan 31 09:39:14 crc kubenswrapper[4830]: I0131 09:39:14.684854 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93ba1174-bbf6-485c-bd6a-5f44b9f96116-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9\" (UID: \"93ba1174-bbf6-485c-bd6a-5f44b9f96116\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9"
Jan 31 09:39:14 crc kubenswrapper[4830]: I0131 09:39:14.685010 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7x2q\" (UniqueName: \"kubernetes.io/projected/93ba1174-bbf6-485c-bd6a-5f44b9f96116-kube-api-access-l7x2q\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9\" (UID: \"93ba1174-bbf6-485c-bd6a-5f44b9f96116\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9"
Jan 31 09:39:14 crc kubenswrapper[4830]: I0131 09:39:14.690284 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93ba1174-bbf6-485c-bd6a-5f44b9f96116-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9\" (UID: \"93ba1174-bbf6-485c-bd6a-5f44b9f96116\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9"
Jan 31 09:39:14 crc kubenswrapper[4830]: I0131 09:39:14.702434 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93ba1174-bbf6-485c-bd6a-5f44b9f96116-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9\" (UID: \"93ba1174-bbf6-485c-bd6a-5f44b9f96116\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9"
Jan 31 09:39:14 crc kubenswrapper[4830]: I0131 09:39:14.707097 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7x2q\" (UniqueName: \"kubernetes.io/projected/93ba1174-bbf6-485c-bd6a-5f44b9f96116-kube-api-access-l7x2q\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9\" (UID: \"93ba1174-bbf6-485c-bd6a-5f44b9f96116\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9"
Jan 31 09:39:14 crc kubenswrapper[4830]: I0131 09:39:14.776535 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9"
Jan 31 09:39:15 crc kubenswrapper[4830]: I0131 09:39:15.361910 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9"]
Jan 31 09:39:15 crc kubenswrapper[4830]: I0131 09:39:15.378281 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 31 09:39:16 crc kubenswrapper[4830]: I0131 09:39:16.201148 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9" event={"ID":"93ba1174-bbf6-485c-bd6a-5f44b9f96116","Type":"ContainerStarted","Data":"36b0783aacf45c5ea93717e8e23d4ad35291e5ebd6932c5fea288e625cef258e"}
Jan 31 09:39:17 crc kubenswrapper[4830]: I0131 09:39:17.215771 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9" event={"ID":"93ba1174-bbf6-485c-bd6a-5f44b9f96116","Type":"ContainerStarted","Data":"5bdf2bd5a55f59d3811175261c10d4b40a861361bb4d114957736a5c808ecce8"}
Jan 31 09:39:17 crc kubenswrapper[4830]: I0131 09:39:17.240780 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9" podStartSLOduration=1.949346234 podStartE2EDuration="3.240748405s" podCreationTimestamp="2026-01-31 09:39:14 +0000 UTC" firstStartedPulling="2026-01-31 09:39:15.377754937 +0000 UTC m=+2299.871117379" lastFinishedPulling="2026-01-31 09:39:16.669157098 +0000 UTC m=+2301.162519550" observedRunningTime="2026-01-31 09:39:17.237393092 +0000 UTC m=+2301.730755544" watchObservedRunningTime="2026-01-31 09:39:17.240748405 +0000 UTC m=+2301.734110847"
Jan 31 09:39:21 crc kubenswrapper[4830]: I0131 09:39:21.251595 4830 scope.go:117] "RemoveContainer" containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8"
Jan 31 09:39:21 crc kubenswrapper[4830]: E0131 09:39:21.252970 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:39:33 crc kubenswrapper[4830]: I0131 09:39:33.252915 4830 scope.go:117] "RemoveContainer" containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8"
Jan 31 09:39:33 crc kubenswrapper[4830]: E0131 09:39:33.254280 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:39:44 crc kubenswrapper[4830]: I0131 09:39:44.252217 4830 scope.go:117] "RemoveContainer" containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8"
Jan 31 09:39:44 crc kubenswrapper[4830]: E0131 09:39:44.253095 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:39:56 crc kubenswrapper[4830]: I0131 09:39:56.263915 4830 scope.go:117] "RemoveContainer" containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8"
Jan 31 09:39:56 crc kubenswrapper[4830]: E0131 09:39:56.265003 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:39:59 crc kubenswrapper[4830]: I0131 09:39:59.061156 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-cblrm"]
Jan 31 09:39:59 crc kubenswrapper[4830]: I0131 09:39:59.073796 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-cblrm"]
Jan 31 09:40:00 crc kubenswrapper[4830]: I0131 09:40:00.265623 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35a75e79-079e-4905-9cc1-af2a81596943" path="/var/lib/kubelet/pods/35a75e79-079e-4905-9cc1-af2a81596943/volumes"
Jan 31 09:40:01 crc kubenswrapper[4830]: I0131 09:40:01.765179 4830 generic.go:334] "Generic (PLEG): container finished" podID="93ba1174-bbf6-485c-bd6a-5f44b9f96116" containerID="5bdf2bd5a55f59d3811175261c10d4b40a861361bb4d114957736a5c808ecce8" exitCode=0
Jan 31 09:40:01 crc kubenswrapper[4830]: I0131 09:40:01.765290 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9" event={"ID":"93ba1174-bbf6-485c-bd6a-5f44b9f96116","Type":"ContainerDied","Data":"5bdf2bd5a55f59d3811175261c10d4b40a861361bb4d114957736a5c808ecce8"}
Jan 31 09:40:03 crc kubenswrapper[4830]: I0131 09:40:03.337363 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9"
Jan 31 09:40:03 crc kubenswrapper[4830]: I0131 09:40:03.373680 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7x2q\" (UniqueName: \"kubernetes.io/projected/93ba1174-bbf6-485c-bd6a-5f44b9f96116-kube-api-access-l7x2q\") pod \"93ba1174-bbf6-485c-bd6a-5f44b9f96116\" (UID: \"93ba1174-bbf6-485c-bd6a-5f44b9f96116\") "
Jan 31 09:40:03 crc kubenswrapper[4830]: I0131 09:40:03.377297 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93ba1174-bbf6-485c-bd6a-5f44b9f96116-inventory\") pod \"93ba1174-bbf6-485c-bd6a-5f44b9f96116\" (UID: \"93ba1174-bbf6-485c-bd6a-5f44b9f96116\") "
Jan 31 09:40:03 crc kubenswrapper[4830]: I0131 09:40:03.377496 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93ba1174-bbf6-485c-bd6a-5f44b9f96116-ssh-key-openstack-edpm-ipam\") pod \"93ba1174-bbf6-485c-bd6a-5f44b9f96116\" (UID: \"93ba1174-bbf6-485c-bd6a-5f44b9f96116\") "
Jan 31 09:40:03 crc kubenswrapper[4830]: I0131 09:40:03.398882 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93ba1174-bbf6-485c-bd6a-5f44b9f96116-kube-api-access-l7x2q" (OuterVolumeSpecName: "kube-api-access-l7x2q") pod "93ba1174-bbf6-485c-bd6a-5f44b9f96116" (UID: "93ba1174-bbf6-485c-bd6a-5f44b9f96116"). InnerVolumeSpecName "kube-api-access-l7x2q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:40:03 crc kubenswrapper[4830]: I0131 09:40:03.477433 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93ba1174-bbf6-485c-bd6a-5f44b9f96116-inventory" (OuterVolumeSpecName: "inventory") pod "93ba1174-bbf6-485c-bd6a-5f44b9f96116" (UID: "93ba1174-bbf6-485c-bd6a-5f44b9f96116"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:40:03 crc kubenswrapper[4830]: I0131 09:40:03.483031 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93ba1174-bbf6-485c-bd6a-5f44b9f96116-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "93ba1174-bbf6-485c-bd6a-5f44b9f96116" (UID: "93ba1174-bbf6-485c-bd6a-5f44b9f96116"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:40:03 crc kubenswrapper[4830]: I0131 09:40:03.485221 4830 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93ba1174-bbf6-485c-bd6a-5f44b9f96116-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 31 09:40:03 crc kubenswrapper[4830]: I0131 09:40:03.485270 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7x2q\" (UniqueName: \"kubernetes.io/projected/93ba1174-bbf6-485c-bd6a-5f44b9f96116-kube-api-access-l7x2q\") on node \"crc\" DevicePath \"\""
Jan 31 09:40:03 crc kubenswrapper[4830]: I0131 09:40:03.485285 4830 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93ba1174-bbf6-485c-bd6a-5f44b9f96116-inventory\") on node \"crc\" DevicePath \"\""
Jan 31 09:40:03 crc kubenswrapper[4830]: I0131 09:40:03.794194 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9" event={"ID":"93ba1174-bbf6-485c-bd6a-5f44b9f96116","Type":"ContainerDied","Data":"36b0783aacf45c5ea93717e8e23d4ad35291e5ebd6932c5fea288e625cef258e"}
Jan 31 09:40:03 crc kubenswrapper[4830]: I0131 09:40:03.794528 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36b0783aacf45c5ea93717e8e23d4ad35291e5ebd6932c5fea288e625cef258e"
Jan 31 09:40:03 crc kubenswrapper[4830]: I0131 09:40:03.794388 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9"
Jan 31 09:40:03 crc kubenswrapper[4830]: I0131 09:40:03.888759 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-976m8"]
Jan 31 09:40:03 crc kubenswrapper[4830]: E0131 09:40:03.889433 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93ba1174-bbf6-485c-bd6a-5f44b9f96116" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 31 09:40:03 crc kubenswrapper[4830]: I0131 09:40:03.889456 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="93ba1174-bbf6-485c-bd6a-5f44b9f96116" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 31 09:40:03 crc kubenswrapper[4830]: I0131 09:40:03.889791 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="93ba1174-bbf6-485c-bd6a-5f44b9f96116" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 31 09:40:03 crc kubenswrapper[4830]: I0131 09:40:03.890959 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-976m8"
Jan 31 09:40:03 crc kubenswrapper[4830]: I0131 09:40:03.893159 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vd24j"
Jan 31 09:40:03 crc kubenswrapper[4830]: I0131 09:40:03.893483 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 31 09:40:03 crc kubenswrapper[4830]: I0131 09:40:03.893875 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 31 09:40:03 crc kubenswrapper[4830]: I0131 09:40:03.895970 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 31 09:40:03 crc kubenswrapper[4830]: I0131 09:40:03.918689 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-976m8"]
Jan 31 09:40:04 crc kubenswrapper[4830]: I0131 09:40:04.000624 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80af0309-f30b-4a92-9457-0f9c982807c0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-976m8\" (UID: \"80af0309-f30b-4a92-9457-0f9c982807c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-976m8"
Jan 31 09:40:04 crc kubenswrapper[4830]: I0131 09:40:04.001252 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/80af0309-f30b-4a92-9457-0f9c982807c0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-976m8\" (UID: \"80af0309-f30b-4a92-9457-0f9c982807c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-976m8"
Jan 31 09:40:04 crc kubenswrapper[4830]: I0131 09:40:04.001358 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vflsm\" (UniqueName: \"kubernetes.io/projected/80af0309-f30b-4a92-9457-0f9c982807c0-kube-api-access-vflsm\") pod \"ssh-known-hosts-edpm-deployment-976m8\" (UID: \"80af0309-f30b-4a92-9457-0f9c982807c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-976m8"
Jan 31 09:40:04 crc kubenswrapper[4830]: I0131 09:40:04.103642 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/80af0309-f30b-4a92-9457-0f9c982807c0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-976m8\" (UID: \"80af0309-f30b-4a92-9457-0f9c982807c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-976m8"
Jan 31 09:40:04 crc kubenswrapper[4830]: I0131 09:40:04.103697 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vflsm\" (UniqueName: \"kubernetes.io/projected/80af0309-f30b-4a92-9457-0f9c982807c0-kube-api-access-vflsm\") pod \"ssh-known-hosts-edpm-deployment-976m8\" (UID: \"80af0309-f30b-4a92-9457-0f9c982807c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-976m8"
Jan 31 09:40:04 crc kubenswrapper[4830]: I0131 09:40:04.103858 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80af0309-f30b-4a92-9457-0f9c982807c0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-976m8\" (UID: \"80af0309-f30b-4a92-9457-0f9c982807c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-976m8"
Jan 31 09:40:04 crc kubenswrapper[4830]: I0131 09:40:04.108645 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80af0309-f30b-4a92-9457-0f9c982807c0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-976m8\" (UID: \"80af0309-f30b-4a92-9457-0f9c982807c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-976m8"
Jan 31 09:40:04 crc kubenswrapper[4830]: I0131 09:40:04.108804 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/80af0309-f30b-4a92-9457-0f9c982807c0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-976m8\" (UID: \"80af0309-f30b-4a92-9457-0f9c982807c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-976m8"
Jan 31 09:40:04 crc kubenswrapper[4830]: I0131 09:40:04.124025 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vflsm\" (UniqueName: \"kubernetes.io/projected/80af0309-f30b-4a92-9457-0f9c982807c0-kube-api-access-vflsm\") pod \"ssh-known-hosts-edpm-deployment-976m8\" (UID: \"80af0309-f30b-4a92-9457-0f9c982807c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-976m8"
Jan 31 09:40:04 crc kubenswrapper[4830]: I0131 09:40:04.215354 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-976m8"
Jan 31 09:40:04 crc kubenswrapper[4830]: I0131 09:40:04.832424 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-976m8"]
Jan 31 09:40:05 crc kubenswrapper[4830]: I0131 09:40:05.824510 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-976m8" event={"ID":"80af0309-f30b-4a92-9457-0f9c982807c0","Type":"ContainerStarted","Data":"2943219f67c596ce997634a84415a1af1660b0e1ec11224910d969b3738560e0"}
Jan 31 09:40:05 crc kubenswrapper[4830]: I0131 09:40:05.825202 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-976m8" event={"ID":"80af0309-f30b-4a92-9457-0f9c982807c0","Type":"ContainerStarted","Data":"8c7f8c4258d4d7c5b9cba60072b7b4e1f5f9cbe96c3b67cd323cea22b899dabd"}
Jan 31 09:40:09 crc kubenswrapper[4830]: I0131 09:40:09.252398 4830 scope.go:117] "RemoveContainer" containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8"
Jan 31 09:40:09 crc kubenswrapper[4830]: E0131 09:40:09.254771 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:40:11 crc kubenswrapper[4830]: I0131 09:40:11.904099 4830 generic.go:334] "Generic (PLEG): container finished" podID="80af0309-f30b-4a92-9457-0f9c982807c0" containerID="2943219f67c596ce997634a84415a1af1660b0e1ec11224910d969b3738560e0" exitCode=0
Jan 31 09:40:11 crc kubenswrapper[4830]: I0131 09:40:11.904279 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-976m8" event={"ID":"80af0309-f30b-4a92-9457-0f9c982807c0","Type":"ContainerDied","Data":"2943219f67c596ce997634a84415a1af1660b0e1ec11224910d969b3738560e0"}
Jan 31 09:40:13 crc kubenswrapper[4830]: I0131 09:40:13.456323 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-976m8"
Jan 31 09:40:13 crc kubenswrapper[4830]: I0131 09:40:13.585636 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/80af0309-f30b-4a92-9457-0f9c982807c0-inventory-0\") pod \"80af0309-f30b-4a92-9457-0f9c982807c0\" (UID: \"80af0309-f30b-4a92-9457-0f9c982807c0\") "
Jan 31 09:40:13 crc kubenswrapper[4830]: I0131 09:40:13.585766 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80af0309-f30b-4a92-9457-0f9c982807c0-ssh-key-openstack-edpm-ipam\") pod \"80af0309-f30b-4a92-9457-0f9c982807c0\" (UID: \"80af0309-f30b-4a92-9457-0f9c982807c0\") "
Jan 31 09:40:13 crc kubenswrapper[4830]: I0131 09:40:13.585799 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vflsm\" (UniqueName: \"kubernetes.io/projected/80af0309-f30b-4a92-9457-0f9c982807c0-kube-api-access-vflsm\") pod \"80af0309-f30b-4a92-9457-0f9c982807c0\" (UID: \"80af0309-f30b-4a92-9457-0f9c982807c0\") "
Jan 31 09:40:13 crc kubenswrapper[4830]: I0131 09:40:13.591874 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80af0309-f30b-4a92-9457-0f9c982807c0-kube-api-access-vflsm" (OuterVolumeSpecName: "kube-api-access-vflsm") pod "80af0309-f30b-4a92-9457-0f9c982807c0" (UID: "80af0309-f30b-4a92-9457-0f9c982807c0"). InnerVolumeSpecName "kube-api-access-vflsm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:40:13 crc kubenswrapper[4830]: I0131 09:40:13.618193 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80af0309-f30b-4a92-9457-0f9c982807c0-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "80af0309-f30b-4a92-9457-0f9c982807c0" (UID: "80af0309-f30b-4a92-9457-0f9c982807c0"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:40:13 crc kubenswrapper[4830]: I0131 09:40:13.630707 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80af0309-f30b-4a92-9457-0f9c982807c0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "80af0309-f30b-4a92-9457-0f9c982807c0" (UID: "80af0309-f30b-4a92-9457-0f9c982807c0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:40:13 crc kubenswrapper[4830]: I0131 09:40:13.690022 4830 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/80af0309-f30b-4a92-9457-0f9c982807c0-inventory-0\") on node \"crc\" DevicePath \"\""
Jan 31 09:40:13 crc kubenswrapper[4830]: I0131 09:40:13.690296 4830 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80af0309-f30b-4a92-9457-0f9c982807c0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 31 09:40:13 crc kubenswrapper[4830]: I0131 09:40:13.690417 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vflsm\" (UniqueName: \"kubernetes.io/projected/80af0309-f30b-4a92-9457-0f9c982807c0-kube-api-access-vflsm\") on node \"crc\" DevicePath \"\""
Jan 31 09:40:13 crc kubenswrapper[4830]: I0131 09:40:13.852242 4830 scope.go:117] "RemoveContainer" containerID="6cfb76a0d3624c566f680f2bb724f6347824ed753c153f5b3beaf28afa3e5a4a"
Jan 31 09:40:13 crc kubenswrapper[4830]: I0131 09:40:13.934351 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-976m8" event={"ID":"80af0309-f30b-4a92-9457-0f9c982807c0","Type":"ContainerDied","Data":"8c7f8c4258d4d7c5b9cba60072b7b4e1f5f9cbe96c3b67cd323cea22b899dabd"}
Jan 31 09:40:13 crc kubenswrapper[4830]: I0131 09:40:13.934642 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c7f8c4258d4d7c5b9cba60072b7b4e1f5f9cbe96c3b67cd323cea22b899dabd"
Jan 31 09:40:13 crc kubenswrapper[4830]: I0131 09:40:13.934450 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-976m8"
Jan 31 09:40:14 crc kubenswrapper[4830]: I0131 09:40:14.104657 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-s9kdh"]
Jan 31 09:40:14 crc kubenswrapper[4830]: E0131 09:40:14.105699 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80af0309-f30b-4a92-9457-0f9c982807c0" containerName="ssh-known-hosts-edpm-deployment"
Jan 31 09:40:14 crc kubenswrapper[4830]: I0131 09:40:14.105746 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="80af0309-f30b-4a92-9457-0f9c982807c0" containerName="ssh-known-hosts-edpm-deployment"
Jan 31 09:40:14 crc kubenswrapper[4830]: I0131 09:40:14.106044 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="80af0309-f30b-4a92-9457-0f9c982807c0" containerName="ssh-known-hosts-edpm-deployment"
Jan 31 09:40:14 crc kubenswrapper[4830]: I0131 09:40:14.107217 4830 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s9kdh" Jan 31 09:40:14 crc kubenswrapper[4830]: I0131 09:40:14.109242 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 09:40:14 crc kubenswrapper[4830]: I0131 09:40:14.110650 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 09:40:14 crc kubenswrapper[4830]: I0131 09:40:14.110961 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vd24j" Jan 31 09:40:14 crc kubenswrapper[4830]: I0131 09:40:14.111242 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 09:40:14 crc kubenswrapper[4830]: I0131 09:40:14.117078 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-s9kdh"] Jan 31 09:40:14 crc kubenswrapper[4830]: I0131 09:40:14.203539 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85767787-3aed-4aaf-a30b-a02b9aebadf7-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s9kdh\" (UID: \"85767787-3aed-4aaf-a30b-a02b9aebadf7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s9kdh" Jan 31 09:40:14 crc kubenswrapper[4830]: I0131 09:40:14.203652 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc2xt\" (UniqueName: \"kubernetes.io/projected/85767787-3aed-4aaf-a30b-a02b9aebadf7-kube-api-access-qc2xt\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s9kdh\" (UID: \"85767787-3aed-4aaf-a30b-a02b9aebadf7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s9kdh" Jan 31 09:40:14 crc kubenswrapper[4830]: I0131 09:40:14.203828 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85767787-3aed-4aaf-a30b-a02b9aebadf7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s9kdh\" (UID: \"85767787-3aed-4aaf-a30b-a02b9aebadf7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s9kdh" Jan 31 09:40:14 crc kubenswrapper[4830]: I0131 09:40:14.306658 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85767787-3aed-4aaf-a30b-a02b9aebadf7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s9kdh\" (UID: \"85767787-3aed-4aaf-a30b-a02b9aebadf7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s9kdh" Jan 31 09:40:14 crc kubenswrapper[4830]: I0131 09:40:14.306952 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85767787-3aed-4aaf-a30b-a02b9aebadf7-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s9kdh\" (UID: \"85767787-3aed-4aaf-a30b-a02b9aebadf7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s9kdh" Jan 31 09:40:14 crc kubenswrapper[4830]: I0131 09:40:14.307017 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc2xt\" (UniqueName: \"kubernetes.io/projected/85767787-3aed-4aaf-a30b-a02b9aebadf7-kube-api-access-qc2xt\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-s9kdh\" (UID: \"85767787-3aed-4aaf-a30b-a02b9aebadf7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s9kdh" Jan 31 09:40:14 crc kubenswrapper[4830]: I0131 09:40:14.311601 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85767787-3aed-4aaf-a30b-a02b9aebadf7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s9kdh\" (UID: \"85767787-3aed-4aaf-a30b-a02b9aebadf7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s9kdh" Jan 31 09:40:14 crc kubenswrapper[4830]: I0131 09:40:14.311747 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85767787-3aed-4aaf-a30b-a02b9aebadf7-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s9kdh\" (UID: \"85767787-3aed-4aaf-a30b-a02b9aebadf7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s9kdh" Jan 31 09:40:14 crc kubenswrapper[4830]: I0131 09:40:14.327188 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc2xt\" (UniqueName: \"kubernetes.io/projected/85767787-3aed-4aaf-a30b-a02b9aebadf7-kube-api-access-qc2xt\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s9kdh\" (UID: \"85767787-3aed-4aaf-a30b-a02b9aebadf7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s9kdh" Jan 31 09:40:14 crc kubenswrapper[4830]: I0131 09:40:14.444528 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s9kdh" Jan 31 09:40:15 crc kubenswrapper[4830]: I0131 09:40:15.031998 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-s9kdh"] Jan 31 09:40:15 crc kubenswrapper[4830]: I0131 09:40:15.959516 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s9kdh" event={"ID":"85767787-3aed-4aaf-a30b-a02b9aebadf7","Type":"ContainerStarted","Data":"c99a93656e462b4f8a0f8ba4c044386f9d9527284183e5f1dbe02fdb168fbf50"} Jan 31 09:40:15 crc kubenswrapper[4830]: I0131 09:40:15.960209 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s9kdh" event={"ID":"85767787-3aed-4aaf-a30b-a02b9aebadf7","Type":"ContainerStarted","Data":"1008e937250676293fa8e7d7a8df20a3e144e4c97cb563996096be4431059e66"} Jan 31 09:40:15 crc kubenswrapper[4830]: I0131 09:40:15.994021 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s9kdh" podStartSLOduration=1.448371761 podStartE2EDuration="1.99399806s" podCreationTimestamp="2026-01-31 09:40:14 +0000 UTC" firstStartedPulling="2026-01-31 09:40:15.03399843 +0000 UTC m=+2359.527360872" lastFinishedPulling="2026-01-31 09:40:15.579624729 +0000 UTC m=+2360.072987171" observedRunningTime="2026-01-31 09:40:15.981493253 +0000 UTC m=+2360.474855695" watchObservedRunningTime="2026-01-31 09:40:15.99399806 +0000 UTC m=+2360.487360502" Jan 31 09:40:21 crc kubenswrapper[4830]: I0131 09:40:21.252627 4830 scope.go:117] "RemoveContainer" containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8" Jan 31 09:40:21 crc kubenswrapper[4830]: E0131 09:40:21.253299 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:40:24 crc kubenswrapper[4830]: I0131 09:40:24.049368 4830 generic.go:334] "Generic (PLEG): container finished" podID="85767787-3aed-4aaf-a30b-a02b9aebadf7" containerID="c99a93656e462b4f8a0f8ba4c044386f9d9527284183e5f1dbe02fdb168fbf50" exitCode=0 Jan 31 09:40:24 crc kubenswrapper[4830]: I0131 09:40:24.049478 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s9kdh" event={"ID":"85767787-3aed-4aaf-a30b-a02b9aebadf7","Type":"ContainerDied","Data":"c99a93656e462b4f8a0f8ba4c044386f9d9527284183e5f1dbe02fdb168fbf50"} Jan 31 09:40:25 crc kubenswrapper[4830]: I0131 09:40:25.610241 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s9kdh" Jan 31 09:40:25 crc kubenswrapper[4830]: I0131 09:40:25.724119 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85767787-3aed-4aaf-a30b-a02b9aebadf7-ssh-key-openstack-edpm-ipam\") pod \"85767787-3aed-4aaf-a30b-a02b9aebadf7\" (UID: \"85767787-3aed-4aaf-a30b-a02b9aebadf7\") " Jan 31 09:40:25 crc kubenswrapper[4830]: I0131 09:40:25.724431 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc2xt\" (UniqueName: \"kubernetes.io/projected/85767787-3aed-4aaf-a30b-a02b9aebadf7-kube-api-access-qc2xt\") pod \"85767787-3aed-4aaf-a30b-a02b9aebadf7\" (UID: \"85767787-3aed-4aaf-a30b-a02b9aebadf7\") " Jan 31 09:40:25 crc kubenswrapper[4830]: I0131 09:40:25.724625 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85767787-3aed-4aaf-a30b-a02b9aebadf7-inventory\") pod \"85767787-3aed-4aaf-a30b-a02b9aebadf7\" (UID: \"85767787-3aed-4aaf-a30b-a02b9aebadf7\") " Jan 31 09:40:25 crc kubenswrapper[4830]: I0131 09:40:25.737104 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85767787-3aed-4aaf-a30b-a02b9aebadf7-kube-api-access-qc2xt" (OuterVolumeSpecName: "kube-api-access-qc2xt") pod "85767787-3aed-4aaf-a30b-a02b9aebadf7" (UID: "85767787-3aed-4aaf-a30b-a02b9aebadf7"). InnerVolumeSpecName "kube-api-access-qc2xt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:40:25 crc kubenswrapper[4830]: I0131 09:40:25.756741 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85767787-3aed-4aaf-a30b-a02b9aebadf7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "85767787-3aed-4aaf-a30b-a02b9aebadf7" (UID: "85767787-3aed-4aaf-a30b-a02b9aebadf7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:40:25 crc kubenswrapper[4830]: I0131 09:40:25.757754 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85767787-3aed-4aaf-a30b-a02b9aebadf7-inventory" (OuterVolumeSpecName: "inventory") pod "85767787-3aed-4aaf-a30b-a02b9aebadf7" (UID: "85767787-3aed-4aaf-a30b-a02b9aebadf7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:40:25 crc kubenswrapper[4830]: I0131 09:40:25.827940 4830 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85767787-3aed-4aaf-a30b-a02b9aebadf7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 09:40:25 crc kubenswrapper[4830]: I0131 09:40:25.827990 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc2xt\" (UniqueName: \"kubernetes.io/projected/85767787-3aed-4aaf-a30b-a02b9aebadf7-kube-api-access-qc2xt\") on node \"crc\" DevicePath \"\"" Jan 31 09:40:25 crc kubenswrapper[4830]: I0131 09:40:25.828004 4830 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85767787-3aed-4aaf-a30b-a02b9aebadf7-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 09:40:26 crc kubenswrapper[4830]: I0131 09:40:26.072989 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s9kdh" event={"ID":"85767787-3aed-4aaf-a30b-a02b9aebadf7","Type":"ContainerDied","Data":"1008e937250676293fa8e7d7a8df20a3e144e4c97cb563996096be4431059e66"} Jan 31 09:40:26 crc kubenswrapper[4830]: I0131 09:40:26.073085 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1008e937250676293fa8e7d7a8df20a3e144e4c97cb563996096be4431059e66" Jan 31 09:40:26 crc kubenswrapper[4830]: I0131 09:40:26.073398 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s9kdh" Jan 31 09:40:26 crc kubenswrapper[4830]: I0131 09:40:26.228914 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc"] Jan 31 09:40:26 crc kubenswrapper[4830]: E0131 09:40:26.230612 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85767787-3aed-4aaf-a30b-a02b9aebadf7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 09:40:26 crc kubenswrapper[4830]: I0131 09:40:26.230695 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="85767787-3aed-4aaf-a30b-a02b9aebadf7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 09:40:26 crc kubenswrapper[4830]: I0131 09:40:26.231254 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="85767787-3aed-4aaf-a30b-a02b9aebadf7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 09:40:26 crc kubenswrapper[4830]: I0131 09:40:26.237519 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc" Jan 31 09:40:26 crc kubenswrapper[4830]: I0131 09:40:26.241256 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 09:40:26 crc kubenswrapper[4830]: I0131 09:40:26.243627 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 09:40:26 crc kubenswrapper[4830]: I0131 09:40:26.243921 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vd24j" Jan 31 09:40:26 crc kubenswrapper[4830]: I0131 09:40:26.244043 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 09:40:26 crc kubenswrapper[4830]: I0131 09:40:26.249121 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc"] Jan 31 09:40:26 crc kubenswrapper[4830]: I0131 09:40:26.345706 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ded31260-653f-4e1c-8840-c06cfa56a070-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc\" (UID: \"ded31260-653f-4e1c-8840-c06cfa56a070\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc" Jan 31 09:40:26 crc kubenswrapper[4830]: I0131 09:40:26.345975 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnfbv\" (UniqueName: \"kubernetes.io/projected/ded31260-653f-4e1c-8840-c06cfa56a070-kube-api-access-lnfbv\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc\" (UID: \"ded31260-653f-4e1c-8840-c06cfa56a070\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc" Jan 31 09:40:26 crc kubenswrapper[4830]: I0131 09:40:26.346245 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ded31260-653f-4e1c-8840-c06cfa56a070-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc\" (UID: \"ded31260-653f-4e1c-8840-c06cfa56a070\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc" Jan 31 09:40:26 crc kubenswrapper[4830]: I0131 09:40:26.448441 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ded31260-653f-4e1c-8840-c06cfa56a070-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc\" (UID: \"ded31260-653f-4e1c-8840-c06cfa56a070\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc" Jan 31 09:40:26 crc kubenswrapper[4830]: I0131 09:40:26.448681 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ded31260-653f-4e1c-8840-c06cfa56a070-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc\" (UID: \"ded31260-653f-4e1c-8840-c06cfa56a070\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc" Jan 31 09:40:26 crc kubenswrapper[4830]: I0131 09:40:26.448869 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnfbv\" (UniqueName: \"kubernetes.io/projected/ded31260-653f-4e1c-8840-c06cfa56a070-kube-api-access-lnfbv\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc\" (UID: \"ded31260-653f-4e1c-8840-c06cfa56a070\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc" Jan 31 09:40:26 crc kubenswrapper[4830]: I0131 09:40:26.455348 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ded31260-653f-4e1c-8840-c06cfa56a070-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc\" (UID: \"ded31260-653f-4e1c-8840-c06cfa56a070\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc" Jan 31 09:40:26 crc kubenswrapper[4830]: I0131 09:40:26.456214 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ded31260-653f-4e1c-8840-c06cfa56a070-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc\" (UID: \"ded31260-653f-4e1c-8840-c06cfa56a070\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc" Jan 31 09:40:26 crc kubenswrapper[4830]: I0131 09:40:26.468551 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnfbv\" (UniqueName: \"kubernetes.io/projected/ded31260-653f-4e1c-8840-c06cfa56a070-kube-api-access-lnfbv\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc\" (UID: \"ded31260-653f-4e1c-8840-c06cfa56a070\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc" Jan 31 09:40:26 crc kubenswrapper[4830]: I0131 09:40:26.569273 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc" Jan 31 09:40:27 crc kubenswrapper[4830]: I0131 09:40:27.182009 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc"] Jan 31 09:40:28 crc kubenswrapper[4830]: I0131 09:40:28.106212 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc" event={"ID":"ded31260-653f-4e1c-8840-c06cfa56a070","Type":"ContainerStarted","Data":"a0f72c4f2f03005fb1dda4225838141d59fb749447b819d019d6d8528b595590"} Jan 31 09:40:29 crc kubenswrapper[4830]: I0131 09:40:29.121315 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc" event={"ID":"ded31260-653f-4e1c-8840-c06cfa56a070","Type":"ContainerStarted","Data":"6ddc65bbd80888ee841b473d0c564534b8b6d9d288e1383533344a43e381d5c1"} Jan 31 09:40:29 crc kubenswrapper[4830]: I0131 09:40:29.147239 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc" podStartSLOduration=2.321743411 podStartE2EDuration="3.147218513s" podCreationTimestamp="2026-01-31 09:40:26 +0000 UTC" firstStartedPulling="2026-01-31 09:40:27.175976734 +0000 UTC m=+2371.669339176" lastFinishedPulling="2026-01-31 09:40:28.001451826 +0000 UTC m=+2372.494814278" observedRunningTime="2026-01-31 09:40:29.139275013 +0000 UTC m=+2373.632637445" watchObservedRunningTime="2026-01-31 09:40:29.147218513 +0000 UTC m=+2373.640580955" Jan 31 09:40:34 crc kubenswrapper[4830]: I0131 09:40:34.251901 4830 scope.go:117] "RemoveContainer" containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8" Jan 31 09:40:34 crc kubenswrapper[4830]: E0131 09:40:34.252648 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:40:38 crc kubenswrapper[4830]: I0131 09:40:38.231853 4830 generic.go:334] "Generic (PLEG): container finished" podID="ded31260-653f-4e1c-8840-c06cfa56a070" containerID="6ddc65bbd80888ee841b473d0c564534b8b6d9d288e1383533344a43e381d5c1" exitCode=0 Jan 31 09:40:38 crc kubenswrapper[4830]: I0131 09:40:38.231965 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc" event={"ID":"ded31260-653f-4e1c-8840-c06cfa56a070","Type":"ContainerDied","Data":"6ddc65bbd80888ee841b473d0c564534b8b6d9d288e1383533344a43e381d5c1"} Jan 31 09:40:39 crc kubenswrapper[4830]: I0131 09:40:39.844580 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.033888 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnfbv\" (UniqueName: \"kubernetes.io/projected/ded31260-653f-4e1c-8840-c06cfa56a070-kube-api-access-lnfbv\") pod \"ded31260-653f-4e1c-8840-c06cfa56a070\" (UID: \"ded31260-653f-4e1c-8840-c06cfa56a070\") " Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.034014 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ded31260-653f-4e1c-8840-c06cfa56a070-inventory\") pod \"ded31260-653f-4e1c-8840-c06cfa56a070\" (UID: \"ded31260-653f-4e1c-8840-c06cfa56a070\") " Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.034190 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ded31260-653f-4e1c-8840-c06cfa56a070-ssh-key-openstack-edpm-ipam\") pod \"ded31260-653f-4e1c-8840-c06cfa56a070\" (UID: \"ded31260-653f-4e1c-8840-c06cfa56a070\") " Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.067368 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ded31260-653f-4e1c-8840-c06cfa56a070-kube-api-access-lnfbv" (OuterVolumeSpecName: "kube-api-access-lnfbv") pod "ded31260-653f-4e1c-8840-c06cfa56a070" (UID: "ded31260-653f-4e1c-8840-c06cfa56a070"). InnerVolumeSpecName "kube-api-access-lnfbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.074305 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ded31260-653f-4e1c-8840-c06cfa56a070-inventory" (OuterVolumeSpecName: "inventory") pod "ded31260-653f-4e1c-8840-c06cfa56a070" (UID: "ded31260-653f-4e1c-8840-c06cfa56a070"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.075929 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ded31260-653f-4e1c-8840-c06cfa56a070-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ded31260-653f-4e1c-8840-c06cfa56a070" (UID: "ded31260-653f-4e1c-8840-c06cfa56a070"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.137525 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnfbv\" (UniqueName: \"kubernetes.io/projected/ded31260-653f-4e1c-8840-c06cfa56a070-kube-api-access-lnfbv\") on node \"crc\" DevicePath \"\"" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.137585 4830 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ded31260-653f-4e1c-8840-c06cfa56a070-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.137597 4830 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ded31260-653f-4e1c-8840-c06cfa56a070-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.255787 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.270447 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc" event={"ID":"ded31260-653f-4e1c-8840-c06cfa56a070","Type":"ContainerDied","Data":"a0f72c4f2f03005fb1dda4225838141d59fb749447b819d019d6d8528b595590"} Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.270502 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0f72c4f2f03005fb1dda4225838141d59fb749447b819d019d6d8528b595590" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.366610 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2"] Jan 31 09:40:40 crc kubenswrapper[4830]: E0131 09:40:40.367371 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ded31260-653f-4e1c-8840-c06cfa56a070" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.367400 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ded31260-653f-4e1c-8840-c06cfa56a070" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.367750 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ded31260-653f-4e1c-8840-c06cfa56a070" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.369127 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.373599 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.373810 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.373835 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vd24j" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.373915 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.373995 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.374308 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.374434 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.374592 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.374693 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.385665 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2"] Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.446056 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.446133 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.446218 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.446408 4830 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.446445 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.446476 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.446544 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.446600 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.446715 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk4fs\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-kube-api-access-jk4fs\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.446806 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.446866 4830 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.446917 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.447016 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.447075 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.447162 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.447210 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.549503 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.550077 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.550295 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.550406 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.550504 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.550603 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.550790 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk4fs\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-kube-api-access-jk4fs\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.550915 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.551054 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-telemetry-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.551155 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.551269 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.551351 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.551456 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.551560 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.551698 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.551846 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.554623 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.554887 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.555039 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.555681 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.556760 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.557073 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.557069 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.558249 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.559082 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.559290 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.561706 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.561867 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.563426 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.563525 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.563626 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc 
kubenswrapper[4830]: I0131 09:40:40.574459 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk4fs\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-kube-api-access-jk4fs\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:40 crc kubenswrapper[4830]: I0131 09:40:40.689927 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:40:41 crc kubenswrapper[4830]: I0131 09:40:41.277200 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2"] Jan 31 09:40:42 crc kubenswrapper[4830]: I0131 09:40:42.276959 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" event={"ID":"dbdc4551-3d56-4feb-b897-89d6d0367388","Type":"ContainerStarted","Data":"dcaba47693e0982d7fe9c068ff665b3d5bdbbf6c8be3b6523116053a0c828d1a"} Jan 31 09:40:43 crc kubenswrapper[4830]: I0131 09:40:43.045854 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-lmhjl"] Jan 31 09:40:43 crc kubenswrapper[4830]: I0131 09:40:43.057390 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-lmhjl"] Jan 31 09:40:44 crc kubenswrapper[4830]: I0131 09:40:44.267021 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba" path="/var/lib/kubelet/pods/fcb6f2f8-cfc4-4ee0-999d-f599ce74f7ba/volumes" Jan 31 09:40:44 crc kubenswrapper[4830]: I0131 09:40:44.306482 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" event={"ID":"dbdc4551-3d56-4feb-b897-89d6d0367388","Type":"ContainerStarted","Data":"2f1b2da9a2b794f68ac785c94c52ea9c8fd9eb0cf5acf7eb2d6d3bd9d3d04ae0"} Jan 31 09:40:44 crc kubenswrapper[4830]: I0131 09:40:44.340275 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" podStartSLOduration=2.150791319 podStartE2EDuration="4.340248983s" podCreationTimestamp="2026-01-31 09:40:40 +0000 UTC" firstStartedPulling="2026-01-31 09:40:41.272150344 +0000 UTC m=+2385.765512786" lastFinishedPulling="2026-01-31 09:40:43.461608008 +0000 UTC m=+2387.954970450" observedRunningTime="2026-01-31 09:40:44.326531063 +0000 UTC m=+2388.819893505" watchObservedRunningTime="2026-01-31 09:40:44.340248983 +0000 UTC m=+2388.833611425" Jan 31 09:40:46 crc kubenswrapper[4830]: I0131 09:40:46.264114 4830 scope.go:117] "RemoveContainer" containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8" Jan 31 09:40:46 crc kubenswrapper[4830]: E0131 09:40:46.265798 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:40:57 crc kubenswrapper[4830]: I0131 09:40:57.252660 4830 scope.go:117] "RemoveContainer" 
containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8" Jan 31 09:40:57 crc kubenswrapper[4830]: E0131 09:40:57.253502 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:41:11 crc kubenswrapper[4830]: I0131 09:41:11.252462 4830 scope.go:117] "RemoveContainer" containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8" Jan 31 09:41:11 crc kubenswrapper[4830]: E0131 09:41:11.253268 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:41:13 crc kubenswrapper[4830]: I0131 09:41:13.937552 4830 scope.go:117] "RemoveContainer" containerID="aaacdfb87dea1afdbfb12b6ef8c917df383edc7d71a1e313a10a0b822910ac0b" Jan 31 09:41:25 crc kubenswrapper[4830]: I0131 09:41:25.870242 4830 generic.go:334] "Generic (PLEG): container finished" podID="dbdc4551-3d56-4feb-b897-89d6d0367388" containerID="2f1b2da9a2b794f68ac785c94c52ea9c8fd9eb0cf5acf7eb2d6d3bd9d3d04ae0" exitCode=0 Jan 31 09:41:25 crc kubenswrapper[4830]: I0131 09:41:25.870349 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" event={"ID":"dbdc4551-3d56-4feb-b897-89d6d0367388","Type":"ContainerDied","Data":"2f1b2da9a2b794f68ac785c94c52ea9c8fd9eb0cf5acf7eb2d6d3bd9d3d04ae0"} Jan 31 09:41:26 crc kubenswrapper[4830]: I0131 09:41:26.261177 4830 scope.go:117] "RemoveContainer" containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8" Jan 31 09:41:26 crc kubenswrapper[4830]: E0131 09:41:26.261556 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.356940 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.460739 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-repo-setup-combined-ca-bundle\") pod \"dbdc4551-3d56-4feb-b897-89d6d0367388\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.460823 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"dbdc4551-3d56-4feb-b897-89d6d0367388\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.460864 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-telemetry-combined-ca-bundle\") pod \"dbdc4551-3d56-4feb-b897-89d6d0367388\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.460934 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-bootstrap-combined-ca-bundle\") pod \"dbdc4551-3d56-4feb-b897-89d6d0367388\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.460964 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-ovn-combined-ca-bundle\") pod \"dbdc4551-3d56-4feb-b897-89d6d0367388\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.461716 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-ovn-default-certs-0\") pod \"dbdc4551-3d56-4feb-b897-89d6d0367388\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.461828 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-ssh-key-openstack-edpm-ipam\") pod \"dbdc4551-3d56-4feb-b897-89d6d0367388\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.461887 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"dbdc4551-3d56-4feb-b897-89d6d0367388\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.461964 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-libvirt-combined-ca-bundle\") pod 
\"dbdc4551-3d56-4feb-b897-89d6d0367388\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.462049 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-neutron-metadata-combined-ca-bundle\") pod \"dbdc4551-3d56-4feb-b897-89d6d0367388\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.462142 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-inventory\") pod \"dbdc4551-3d56-4feb-b897-89d6d0367388\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.462218 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-telemetry-power-monitoring-combined-ca-bundle\") pod \"dbdc4551-3d56-4feb-b897-89d6d0367388\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.462263 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-nova-combined-ca-bundle\") pod \"dbdc4551-3d56-4feb-b897-89d6d0367388\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.462411 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"dbdc4551-3d56-4feb-b897-89d6d0367388\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.462451 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk4fs\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-kube-api-access-jk4fs\") pod \"dbdc4551-3d56-4feb-b897-89d6d0367388\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.462485 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"dbdc4551-3d56-4feb-b897-89d6d0367388\" (UID: \"dbdc4551-3d56-4feb-b897-89d6d0367388\") " Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.469363 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "dbdc4551-3d56-4feb-b897-89d6d0367388" (UID: "dbdc4551-3d56-4feb-b897-89d6d0367388"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.469444 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "dbdc4551-3d56-4feb-b897-89d6d0367388" (UID: "dbdc4551-3d56-4feb-b897-89d6d0367388"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.469704 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "dbdc4551-3d56-4feb-b897-89d6d0367388" (UID: "dbdc4551-3d56-4feb-b897-89d6d0367388"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.470433 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "dbdc4551-3d56-4feb-b897-89d6d0367388" (UID: "dbdc4551-3d56-4feb-b897-89d6d0367388"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.470711 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "dbdc4551-3d56-4feb-b897-89d6d0367388" (UID: "dbdc4551-3d56-4feb-b897-89d6d0367388"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.472640 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "dbdc4551-3d56-4feb-b897-89d6d0367388" (UID: "dbdc4551-3d56-4feb-b897-89d6d0367388"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.473390 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "dbdc4551-3d56-4feb-b897-89d6d0367388" (UID: "dbdc4551-3d56-4feb-b897-89d6d0367388"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.473695 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "dbdc4551-3d56-4feb-b897-89d6d0367388" (UID: "dbdc4551-3d56-4feb-b897-89d6d0367388"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.473905 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "dbdc4551-3d56-4feb-b897-89d6d0367388" (UID: "dbdc4551-3d56-4feb-b897-89d6d0367388"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.475221 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "dbdc4551-3d56-4feb-b897-89d6d0367388" (UID: "dbdc4551-3d56-4feb-b897-89d6d0367388"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.475364 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "dbdc4551-3d56-4feb-b897-89d6d0367388" (UID: "dbdc4551-3d56-4feb-b897-89d6d0367388"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.475358 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "dbdc4551-3d56-4feb-b897-89d6d0367388" (UID: "dbdc4551-3d56-4feb-b897-89d6d0367388"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.476272 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "dbdc4551-3d56-4feb-b897-89d6d0367388" (UID: "dbdc4551-3d56-4feb-b897-89d6d0367388"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.476831 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-kube-api-access-jk4fs" (OuterVolumeSpecName: "kube-api-access-jk4fs") pod "dbdc4551-3d56-4feb-b897-89d6d0367388" (UID: "dbdc4551-3d56-4feb-b897-89d6d0367388"). InnerVolumeSpecName "kube-api-access-jk4fs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.505787 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dbdc4551-3d56-4feb-b897-89d6d0367388" (UID: "dbdc4551-3d56-4feb-b897-89d6d0367388"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.512615 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-inventory" (OuterVolumeSpecName: "inventory") pod "dbdc4551-3d56-4feb-b897-89d6d0367388" (UID: "dbdc4551-3d56-4feb-b897-89d6d0367388"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.567527 4830 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.567619 4830 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.567634 4830 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.567646 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk4fs\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-kube-api-access-jk4fs\") on node \"crc\" DevicePath \"\"" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.567658 4830 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.567670 4830 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.567685 4830 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.567699 4830 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.567709 4830 reconciler_common.go:293] "Volume 
detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.567733 4830 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.567742 4830 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.567753 4830 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.567763 4830 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbdc4551-3d56-4feb-b897-89d6d0367388-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.567772 4830 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.567783 4830 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.567794 4830 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbdc4551-3d56-4feb-b897-89d6d0367388-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.893886 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" event={"ID":"dbdc4551-3d56-4feb-b897-89d6d0367388","Type":"ContainerDied","Data":"dcaba47693e0982d7fe9c068ff665b3d5bdbbf6c8be3b6523116053a0c828d1a"} Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.893937 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcaba47693e0982d7fe9c068ff665b3d5bdbbf6c8be3b6523116053a0c828d1a" Jan 31 09:41:27 crc kubenswrapper[4830]: I0131 09:41:27.893952 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2" Jan 31 09:41:28 crc kubenswrapper[4830]: I0131 09:41:28.031658 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h"] Jan 31 09:41:28 crc kubenswrapper[4830]: E0131 09:41:28.032429 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbdc4551-3d56-4feb-b897-89d6d0367388" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 31 09:41:28 crc kubenswrapper[4830]: I0131 09:41:28.032451 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbdc4551-3d56-4feb-b897-89d6d0367388" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 31 09:41:28 crc kubenswrapper[4830]: I0131 09:41:28.032757 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbdc4551-3d56-4feb-b897-89d6d0367388" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 31 09:41:28 crc kubenswrapper[4830]: I0131 09:41:28.033911 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h" Jan 31 09:41:28 crc kubenswrapper[4830]: I0131 09:41:28.037221 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 09:41:28 crc kubenswrapper[4830]: I0131 09:41:28.037480 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vd24j" Jan 31 09:41:28 crc kubenswrapper[4830]: I0131 09:41:28.037688 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 09:41:28 crc kubenswrapper[4830]: I0131 09:41:28.038216 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 09:41:28 crc kubenswrapper[4830]: I0131 09:41:28.040157 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 31 09:41:28 crc kubenswrapper[4830]: I0131 09:41:28.059501 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h"] Jan 31 09:41:28 crc kubenswrapper[4830]: I0131 09:41:28.081354 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tvl2h\" (UID: \"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h" Jan 31 09:41:28 crc kubenswrapper[4830]: I0131 09:41:28.081465 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tvl2h\" (UID: \"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h" Jan 31 09:41:28 crc kubenswrapper[4830]: I0131 09:41:28.081679 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tvl2h\" (UID: \"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h" Jan 31 09:41:28 crc kubenswrapper[4830]: I0131 09:41:28.081750 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tvl2h\" (UID: \"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h" Jan 31 09:41:28 crc kubenswrapper[4830]: I0131 09:41:28.082061 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l99fv\" (UniqueName: \"kubernetes.io/projected/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-kube-api-access-l99fv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tvl2h\" (UID: \"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h" Jan 31 09:41:28 crc kubenswrapper[4830]: I0131 09:41:28.185343 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tvl2h\" (UID: \"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h" Jan 31 09:41:28 crc kubenswrapper[4830]: I0131 09:41:28.185808 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tvl2h\" (UID: \"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h" Jan 31 09:41:28 crc kubenswrapper[4830]: I0131 09:41:28.185888 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tvl2h\" (UID: \"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h" Jan 31 09:41:28 crc kubenswrapper[4830]: I0131 09:41:28.186533 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l99fv\" (UniqueName: \"kubernetes.io/projected/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-kube-api-access-l99fv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tvl2h\" (UID: \"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h" Jan 31 09:41:28 crc kubenswrapper[4830]: I0131 09:41:28.186884 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tvl2h\" (UID: \"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h" Jan 31 09:41:28 crc kubenswrapper[4830]: I0131 09:41:28.186914 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tvl2h\" (UID: \"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h" Jan 31 09:41:28 crc kubenswrapper[4830]: I0131 09:41:28.189482 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tvl2h\" (UID: \"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h" Jan 31 09:41:28 crc kubenswrapper[4830]: I0131 09:41:28.190455 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tvl2h\" (UID: \"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h" Jan 31 09:41:28 crc kubenswrapper[4830]: I0131 09:41:28.190927 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tvl2h\" (UID: \"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h" Jan 31 09:41:28 crc kubenswrapper[4830]: I0131 09:41:28.207523 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l99fv\" (UniqueName: \"kubernetes.io/projected/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-kube-api-access-l99fv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tvl2h\" (UID: \"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h" Jan 31 09:41:28 crc kubenswrapper[4830]: I0131 09:41:28.380056 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h" Jan 31 09:41:29 crc kubenswrapper[4830]: W0131 09:41:29.002134 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b46573d_c2d5_4fe7_9bef_4a5718c0ffe1.slice/crio-7b3211f7998a1ac9450dbdacafe3eb67d90737e0be0c32b40b81b550b7956cd9 WatchSource:0}: Error finding container 7b3211f7998a1ac9450dbdacafe3eb67d90737e0be0c32b40b81b550b7956cd9: Status 404 returned error can't find the container with id 7b3211f7998a1ac9450dbdacafe3eb67d90737e0be0c32b40b81b550b7956cd9 Jan 31 09:41:29 crc kubenswrapper[4830]: I0131 09:41:29.039102 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h"] Jan 31 09:41:29 crc kubenswrapper[4830]: I0131 09:41:29.926224 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h" event={"ID":"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1","Type":"ContainerStarted","Data":"7b3211f7998a1ac9450dbdacafe3eb67d90737e0be0c32b40b81b550b7956cd9"} Jan 31 09:41:30 crc kubenswrapper[4830]: I0131 09:41:30.941594 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h" event={"ID":"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1","Type":"ContainerStarted","Data":"f390f6873064227f687635b9c7898f77fa431dc82816b40624c1cd941e4874b4"} Jan 31 09:41:30 crc kubenswrapper[4830]: I0131 09:41:30.973249 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h" podStartSLOduration=2.5376872820000003 podStartE2EDuration="2.973217762s" podCreationTimestamp="2026-01-31 09:41:28 +0000 UTC" firstStartedPulling="2026-01-31 09:41:29.044440588 +0000 UTC m=+2433.537803040" lastFinishedPulling="2026-01-31 09:41:29.479971078 +0000 UTC m=+2433.973333520" observedRunningTime="2026-01-31 09:41:30.961676608 +0000 UTC m=+2435.455039050" watchObservedRunningTime="2026-01-31 09:41:30.973217762 +0000 UTC m=+2435.466580204" Jan 31 09:41:39 crc kubenswrapper[4830]: I0131 09:41:39.251891 4830 scope.go:117] "RemoveContainer" containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8" Jan 31 09:41:39 crc kubenswrapper[4830]: E0131 09:41:39.252792 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:41:50 crc kubenswrapper[4830]: I0131 09:41:50.251967 4830 scope.go:117] "RemoveContainer" containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8" Jan 31 09:41:50 crc kubenswrapper[4830]: E0131 09:41:50.252981 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:42:05 crc kubenswrapper[4830]: I0131 09:42:05.251961 4830 scope.go:117] 
"RemoveContainer" containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8" Jan 31 09:42:05 crc kubenswrapper[4830]: E0131 09:42:05.253040 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:42:16 crc kubenswrapper[4830]: I0131 09:42:16.261635 4830 scope.go:117] "RemoveContainer" containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8" Jan 31 09:42:17 crc kubenswrapper[4830]: I0131 09:42:17.442052 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerStarted","Data":"a20ea2322cd4062ecdd9c286d63df058ebc8744e0a83dc5a6a03d87d2b70305c"} Jan 31 09:42:28 crc kubenswrapper[4830]: I0131 09:42:28.581303 4830 generic.go:334] "Generic (PLEG): container finished" podID="3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1" containerID="f390f6873064227f687635b9c7898f77fa431dc82816b40624c1cd941e4874b4" exitCode=0 Jan 31 09:42:28 crc kubenswrapper[4830]: I0131 09:42:28.581423 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h" event={"ID":"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1","Type":"ContainerDied","Data":"f390f6873064227f687635b9c7898f77fa431dc82816b40624c1cd941e4874b4"} Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.096890 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.186743 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-inventory\") pod \"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1\" (UID: \"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1\") " Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.186837 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-ovn-combined-ca-bundle\") pod \"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1\" (UID: \"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1\") " Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.186908 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-ovncontroller-config-0\") pod \"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1\" (UID: \"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1\") " Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.187231 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l99fv\" (UniqueName: \"kubernetes.io/projected/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-kube-api-access-l99fv\") pod \"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1\" (UID: \"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1\") " Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.187278 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-ssh-key-openstack-edpm-ipam\") pod \"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1\" (UID: \"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1\") " Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.195349 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-kube-api-access-l99fv" (OuterVolumeSpecName: "kube-api-access-l99fv") pod "3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1" (UID: "3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1"). InnerVolumeSpecName "kube-api-access-l99fv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.195495 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1" (UID: "3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.221295 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1" (UID: "3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.228490 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-inventory" (OuterVolumeSpecName: "inventory") pod "3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1" (UID: "3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.233376 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1" (UID: "3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.290889 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l99fv\" (UniqueName: \"kubernetes.io/projected/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-kube-api-access-l99fv\") on node \"crc\" DevicePath \"\"" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.291197 4830 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.291208 4830 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.291221 4830 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.291233 4830 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.607015 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h" event={"ID":"3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1","Type":"ContainerDied","Data":"7b3211f7998a1ac9450dbdacafe3eb67d90737e0be0c32b40b81b550b7956cd9"} Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.607078 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b3211f7998a1ac9450dbdacafe3eb67d90737e0be0c32b40b81b550b7956cd9" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.607104 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tvl2h" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.710063 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986"] Jan 31 09:42:30 crc kubenswrapper[4830]: E0131 09:42:30.710577 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.710595 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.710912 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.711871 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.715580 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.718426 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vd24j" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.718901 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.719075 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.719104 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.721369 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.730227 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986\" (UID: \"41d8850d-86d0-4b11-ac11-7738b2359233\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.730907 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986\" (UID: \"41d8850d-86d0-4b11-ac11-7738b2359233\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.731305 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-ssh-key-openstack-edpm-ipam\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986\" (UID: \"41d8850d-86d0-4b11-ac11-7738b2359233\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.732024 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986\" (UID: \"41d8850d-86d0-4b11-ac11-7738b2359233\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.732074 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986\" (UID: \"41d8850d-86d0-4b11-ac11-7738b2359233\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.732159 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbcj4\" (UniqueName: \"kubernetes.io/projected/41d8850d-86d0-4b11-ac11-7738b2359233-kube-api-access-vbcj4\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986\" (UID: \"41d8850d-86d0-4b11-ac11-7738b2359233\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.732519 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986"] Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.833466 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986\" (UID: \"41d8850d-86d0-4b11-ac11-7738b2359233\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.833545 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbcj4\" (UniqueName: \"kubernetes.io/projected/41d8850d-86d0-4b11-ac11-7738b2359233-kube-api-access-vbcj4\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986\" (UID: \"41d8850d-86d0-4b11-ac11-7738b2359233\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.833603 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986\" (UID: \"41d8850d-86d0-4b11-ac11-7738b2359233\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.833645 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986\" (UID: \"41d8850d-86d0-4b11-ac11-7738b2359233\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.833778 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986\" (UID: \"41d8850d-86d0-4b11-ac11-7738b2359233\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.833891 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986\" (UID: \"41d8850d-86d0-4b11-ac11-7738b2359233\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.838760 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986\" (UID: \"41d8850d-86d0-4b11-ac11-7738b2359233\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.839417 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986\" (UID: \"41d8850d-86d0-4b11-ac11-7738b2359233\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.839435 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986\" (UID: \"41d8850d-86d0-4b11-ac11-7738b2359233\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.841251 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986\" (UID: \"41d8850d-86d0-4b11-ac11-7738b2359233\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.845316 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986\" (UID: \"41d8850d-86d0-4b11-ac11-7738b2359233\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" Jan 31 09:42:30 crc kubenswrapper[4830]: I0131 09:42:30.857646 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbcj4\" (UniqueName: \"kubernetes.io/projected/41d8850d-86d0-4b11-ac11-7738b2359233-kube-api-access-vbcj4\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986\" (UID: \"41d8850d-86d0-4b11-ac11-7738b2359233\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" Jan 31 09:42:31 crc kubenswrapper[4830]: I0131 09:42:31.032778 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" Jan 31 09:42:31 crc kubenswrapper[4830]: I0131 09:42:31.625310 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986"] Jan 31 09:42:31 crc kubenswrapper[4830]: W0131 09:42:31.630498 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41d8850d_86d0_4b11_ac11_7738b2359233.slice/crio-2ae460d024df97f629ea98acf28c43315cbae2260a619d27234a73a111a92da7 WatchSource:0}: Error finding container 2ae460d024df97f629ea98acf28c43315cbae2260a619d27234a73a111a92da7: Status 404 returned error can't find the container with id 2ae460d024df97f629ea98acf28c43315cbae2260a619d27234a73a111a92da7 Jan 31 09:42:32 crc kubenswrapper[4830]: I0131 09:42:32.634500 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" event={"ID":"41d8850d-86d0-4b11-ac11-7738b2359233","Type":"ContainerStarted","Data":"86002e13a98769542d597c6a7aa868acb7ba1183b5fab9a8c8bd05c615bd03eb"} Jan 31 09:42:32 crc kubenswrapper[4830]: I0131 09:42:32.634956 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" event={"ID":"41d8850d-86d0-4b11-ac11-7738b2359233","Type":"ContainerStarted","Data":"2ae460d024df97f629ea98acf28c43315cbae2260a619d27234a73a111a92da7"} Jan 31 09:42:32 crc kubenswrapper[4830]: I0131 09:42:32.667857 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" podStartSLOduration=2.177206174 podStartE2EDuration="2.667683354s" podCreationTimestamp="2026-01-31 09:42:30 +0000 UTC" firstStartedPulling="2026-01-31 09:42:31.633873452 +0000 UTC m=+2496.127235894" lastFinishedPulling="2026-01-31 09:42:32.124350632 +0000 UTC m=+2496.617713074" observedRunningTime="2026-01-31 09:42:32.659154975 +0000 UTC m=+2497.152517417" watchObservedRunningTime="2026-01-31 09:42:32.667683354 +0000 UTC m=+2497.161045786" Jan 31 09:43:18 crc kubenswrapper[4830]: I0131 09:43:18.150593 4830 generic.go:334] "Generic (PLEG): container finished" podID="41d8850d-86d0-4b11-ac11-7738b2359233" containerID="86002e13a98769542d597c6a7aa868acb7ba1183b5fab9a8c8bd05c615bd03eb" exitCode=0 Jan 31 09:43:18 crc kubenswrapper[4830]: I0131 09:43:18.150704 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" event={"ID":"41d8850d-86d0-4b11-ac11-7738b2359233","Type":"ContainerDied","Data":"86002e13a98769542d597c6a7aa868acb7ba1183b5fab9a8c8bd05c615bd03eb"} Jan 31 09:43:19 crc kubenswrapper[4830]: I0131 09:43:19.745690 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" Jan 31 09:43:19 crc kubenswrapper[4830]: I0131 09:43:19.838413 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-ssh-key-openstack-edpm-ipam\") pod \"41d8850d-86d0-4b11-ac11-7738b2359233\" (UID: \"41d8850d-86d0-4b11-ac11-7738b2359233\") " Jan 31 09:43:19 crc kubenswrapper[4830]: I0131 09:43:19.838518 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-neutron-ovn-metadata-agent-neutron-config-0\") pod \"41d8850d-86d0-4b11-ac11-7738b2359233\" (UID: \"41d8850d-86d0-4b11-ac11-7738b2359233\") " Jan 31 09:43:19 crc kubenswrapper[4830]: I0131 09:43:19.838660 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-inventory\") pod \"41d8850d-86d0-4b11-ac11-7738b2359233\" (UID: \"41d8850d-86d0-4b11-ac11-7738b2359233\") " Jan 31 09:43:19 crc kubenswrapper[4830]: I0131 09:43:19.838830 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-neutron-metadata-combined-ca-bundle\") pod \"41d8850d-86d0-4b11-ac11-7738b2359233\" (UID: \"41d8850d-86d0-4b11-ac11-7738b2359233\") " Jan 31 09:43:19 crc kubenswrapper[4830]: I0131 09:43:19.838926 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-nova-metadata-neutron-config-0\") pod \"41d8850d-86d0-4b11-ac11-7738b2359233\" (UID: \"41d8850d-86d0-4b11-ac11-7738b2359233\") " Jan 31 09:43:19 crc kubenswrapper[4830]: I0131 09:43:19.839007 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbcj4\" (UniqueName: \"kubernetes.io/projected/41d8850d-86d0-4b11-ac11-7738b2359233-kube-api-access-vbcj4\") pod \"41d8850d-86d0-4b11-ac11-7738b2359233\" (UID: \"41d8850d-86d0-4b11-ac11-7738b2359233\") " Jan 31 09:43:19 crc kubenswrapper[4830]: I0131 09:43:19.846187 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41d8850d-86d0-4b11-ac11-7738b2359233-kube-api-access-vbcj4" (OuterVolumeSpecName: "kube-api-access-vbcj4") pod "41d8850d-86d0-4b11-ac11-7738b2359233" (UID: "41d8850d-86d0-4b11-ac11-7738b2359233"). InnerVolumeSpecName "kube-api-access-vbcj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:43:19 crc kubenswrapper[4830]: I0131 09:43:19.846495 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "41d8850d-86d0-4b11-ac11-7738b2359233" (UID: "41d8850d-86d0-4b11-ac11-7738b2359233"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:43:19 crc kubenswrapper[4830]: I0131 09:43:19.893607 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-inventory" (OuterVolumeSpecName: "inventory") pod "41d8850d-86d0-4b11-ac11-7738b2359233" (UID: "41d8850d-86d0-4b11-ac11-7738b2359233"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:43:19 crc kubenswrapper[4830]: I0131 09:43:19.897857 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "41d8850d-86d0-4b11-ac11-7738b2359233" (UID: "41d8850d-86d0-4b11-ac11-7738b2359233"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:43:19 crc kubenswrapper[4830]: I0131 09:43:19.943214 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbcj4\" (UniqueName: \"kubernetes.io/projected/41d8850d-86d0-4b11-ac11-7738b2359233-kube-api-access-vbcj4\") on node \"crc\" DevicePath \"\"" Jan 31 09:43:19 crc kubenswrapper[4830]: I0131 09:43:19.943256 4830 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 09:43:19 crc kubenswrapper[4830]: I0131 09:43:19.943275 4830 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 09:43:19 crc kubenswrapper[4830]: I0131 09:43:19.943287 4830 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:43:19 crc kubenswrapper[4830]: I0131 09:43:19.946213 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "41d8850d-86d0-4b11-ac11-7738b2359233" (UID: "41d8850d-86d0-4b11-ac11-7738b2359233"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.030905 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "41d8850d-86d0-4b11-ac11-7738b2359233" (UID: "41d8850d-86d0-4b11-ac11-7738b2359233"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.046127 4830 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.046164 4830 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/41d8850d-86d0-4b11-ac11-7738b2359233-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.177098 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" event={"ID":"41d8850d-86d0-4b11-ac11-7738b2359233","Type":"ContainerDied","Data":"2ae460d024df97f629ea98acf28c43315cbae2260a619d27234a73a111a92da7"} Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.177163 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ae460d024df97f629ea98acf28c43315cbae2260a619d27234a73a111a92da7" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.177182 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.292759 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt"] Jan 31 09:43:20 crc kubenswrapper[4830]: E0131 09:43:20.293317 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d8850d-86d0-4b11-ac11-7738b2359233" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.293338 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d8850d-86d0-4b11-ac11-7738b2359233" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.293560 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="41d8850d-86d0-4b11-ac11-7738b2359233" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.294531 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.297474 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.297495 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vd24j" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.298824 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.299280 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.299370 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.314503 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt"] Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.356682 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/464743a2-b75e-49de-9628-6c12d7c7f8b7-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-94xgt\" (UID: \"464743a2-b75e-49de-9628-6c12d7c7f8b7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.356770 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/464743a2-b75e-49de-9628-6c12d7c7f8b7-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-94xgt\" (UID: \"464743a2-b75e-49de-9628-6c12d7c7f8b7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.356884 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/464743a2-b75e-49de-9628-6c12d7c7f8b7-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-94xgt\" (UID: \"464743a2-b75e-49de-9628-6c12d7c7f8b7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.357119 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/464743a2-b75e-49de-9628-6c12d7c7f8b7-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-94xgt\" (UID: \"464743a2-b75e-49de-9628-6c12d7c7f8b7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.357195 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt42p\" (UniqueName: \"kubernetes.io/projected/464743a2-b75e-49de-9628-6c12d7c7f8b7-kube-api-access-xt42p\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-94xgt\" (UID: \"464743a2-b75e-49de-9628-6c12d7c7f8b7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.459956 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/464743a2-b75e-49de-9628-6c12d7c7f8b7-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-94xgt\" (UID: \"464743a2-b75e-49de-9628-6c12d7c7f8b7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.460130 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/464743a2-b75e-49de-9628-6c12d7c7f8b7-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-94xgt\" (UID: \"464743a2-b75e-49de-9628-6c12d7c7f8b7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.460193 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt42p\" (UniqueName: \"kubernetes.io/projected/464743a2-b75e-49de-9628-6c12d7c7f8b7-kube-api-access-xt42p\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-94xgt\" (UID: \"464743a2-b75e-49de-9628-6c12d7c7f8b7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.460332 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/464743a2-b75e-49de-9628-6c12d7c7f8b7-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-94xgt\" (UID: \"464743a2-b75e-49de-9628-6c12d7c7f8b7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.460355 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/464743a2-b75e-49de-9628-6c12d7c7f8b7-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-94xgt\" (UID: \"464743a2-b75e-49de-9628-6c12d7c7f8b7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.466023 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/464743a2-b75e-49de-9628-6c12d7c7f8b7-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-94xgt\" (UID: \"464743a2-b75e-49de-9628-6c12d7c7f8b7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.467524 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/464743a2-b75e-49de-9628-6c12d7c7f8b7-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-94xgt\" (UID: \"464743a2-b75e-49de-9628-6c12d7c7f8b7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.468001 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/464743a2-b75e-49de-9628-6c12d7c7f8b7-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-94xgt\" (UID: \"464743a2-b75e-49de-9628-6c12d7c7f8b7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.468345 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/464743a2-b75e-49de-9628-6c12d7c7f8b7-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-94xgt\" (UID: \"464743a2-b75e-49de-9628-6c12d7c7f8b7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.487280 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt42p\" (UniqueName: \"kubernetes.io/projected/464743a2-b75e-49de-9628-6c12d7c7f8b7-kube-api-access-xt42p\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-94xgt\" (UID: \"464743a2-b75e-49de-9628-6c12d7c7f8b7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt" Jan 31 09:43:20 crc kubenswrapper[4830]: I0131 09:43:20.616022 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt" Jan 31 09:43:21 crc kubenswrapper[4830]: I0131 09:43:21.244147 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt"] Jan 31 09:43:22 crc kubenswrapper[4830]: I0131 09:43:22.209507 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt" event={"ID":"464743a2-b75e-49de-9628-6c12d7c7f8b7","Type":"ContainerStarted","Data":"379dd8d56c3169561eb7b44f60782b10a04225476dce5f7b0f727449560cfe0e"} Jan 31 09:43:22 crc kubenswrapper[4830]: I0131 09:43:22.210564 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt" event={"ID":"464743a2-b75e-49de-9628-6c12d7c7f8b7","Type":"ContainerStarted","Data":"3d1a4bdbce826f0b234f5a2b934b0a0133b36b24d3569f9090e41309736fc9a3"} Jan 31 09:43:22 crc kubenswrapper[4830]: I0131 09:43:22.235238 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt" podStartSLOduration=1.629755329 podStartE2EDuration="2.235212094s" podCreationTimestamp="2026-01-31 09:43:20 +0000 UTC" firstStartedPulling="2026-01-31 09:43:21.243881532 +0000 UTC m=+2545.737243974" lastFinishedPulling="2026-01-31 09:43:21.849338297 +0000 UTC m=+2546.342700739" observedRunningTime="2026-01-31 09:43:22.230608855 +0000 UTC m=+2546.723971297" watchObservedRunningTime="2026-01-31 09:43:22.235212094 +0000 UTC m=+2546.728574536" Jan 31 09:44:17 crc kubenswrapper[4830]: I0131 09:44:17.967442 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9vwrf"] Jan 31 09:44:17 crc kubenswrapper[4830]: I0131 09:44:17.970611 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9vwrf" Jan 31 09:44:17 crc kubenswrapper[4830]: I0131 09:44:17.979208 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9vwrf"] Jan 31 09:44:18 crc kubenswrapper[4830]: I0131 09:44:18.119208 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29h8w\" (UniqueName: \"kubernetes.io/projected/95197978-4b38-45a8-b6f8-f02110ee335f-kube-api-access-29h8w\") pod \"redhat-operators-9vwrf\" (UID: \"95197978-4b38-45a8-b6f8-f02110ee335f\") " pod="openshift-marketplace/redhat-operators-9vwrf" Jan 31 09:44:18 crc kubenswrapper[4830]: I0131 09:44:18.119337 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95197978-4b38-45a8-b6f8-f02110ee335f-utilities\") pod \"redhat-operators-9vwrf\" (UID: \"95197978-4b38-45a8-b6f8-f02110ee335f\") " pod="openshift-marketplace/redhat-operators-9vwrf" Jan 31 09:44:18 crc kubenswrapper[4830]: I0131 09:44:18.119455 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95197978-4b38-45a8-b6f8-f02110ee335f-catalog-content\") pod \"redhat-operators-9vwrf\" (UID: \"95197978-4b38-45a8-b6f8-f02110ee335f\") " pod="openshift-marketplace/redhat-operators-9vwrf" Jan 31 09:44:18 crc kubenswrapper[4830]: I0131 09:44:18.223052 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29h8w\" (UniqueName: \"kubernetes.io/projected/95197978-4b38-45a8-b6f8-f02110ee335f-kube-api-access-29h8w\") pod \"redhat-operators-9vwrf\" (UID: \"95197978-4b38-45a8-b6f8-f02110ee335f\") " pod="openshift-marketplace/redhat-operators-9vwrf" Jan 31 09:44:18 crc kubenswrapper[4830]: I0131 09:44:18.223493 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95197978-4b38-45a8-b6f8-f02110ee335f-utilities\") pod \"redhat-operators-9vwrf\" (UID: \"95197978-4b38-45a8-b6f8-f02110ee335f\") " pod="openshift-marketplace/redhat-operators-9vwrf" Jan 31 09:44:18 crc kubenswrapper[4830]: I0131 09:44:18.223667 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95197978-4b38-45a8-b6f8-f02110ee335f-catalog-content\") pod \"redhat-operators-9vwrf\" (UID: \"95197978-4b38-45a8-b6f8-f02110ee335f\") " pod="openshift-marketplace/redhat-operators-9vwrf" Jan 31 09:44:18 crc kubenswrapper[4830]: I0131 09:44:18.224070 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95197978-4b38-45a8-b6f8-f02110ee335f-utilities\") pod \"redhat-operators-9vwrf\" (UID: \"95197978-4b38-45a8-b6f8-f02110ee335f\") " pod="openshift-marketplace/redhat-operators-9vwrf" Jan 31 09:44:18 crc kubenswrapper[4830]: I0131 09:44:18.224208 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95197978-4b38-45a8-b6f8-f02110ee335f-catalog-content\") pod \"redhat-operators-9vwrf\" (UID: \"95197978-4b38-45a8-b6f8-f02110ee335f\") " pod="openshift-marketplace/redhat-operators-9vwrf" Jan 31 09:44:18 crc kubenswrapper[4830]: I0131 09:44:18.251634 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-29h8w\" (UniqueName: \"kubernetes.io/projected/95197978-4b38-45a8-b6f8-f02110ee335f-kube-api-access-29h8w\") pod \"redhat-operators-9vwrf\" (UID: \"95197978-4b38-45a8-b6f8-f02110ee335f\") " pod="openshift-marketplace/redhat-operators-9vwrf" Jan 31 09:44:18 crc kubenswrapper[4830]: I0131 09:44:18.293382 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9vwrf" Jan 31 09:44:18 crc kubenswrapper[4830]: W0131 09:44:18.880452 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95197978_4b38_45a8_b6f8_f02110ee335f.slice/crio-8308aa5bed7154c32c102970a0fa4f2414e8eef2e8cfd89f847199cf617a6398 WatchSource:0}: Error finding container 8308aa5bed7154c32c102970a0fa4f2414e8eef2e8cfd89f847199cf617a6398: Status 404 returned error can't find the container with id 8308aa5bed7154c32c102970a0fa4f2414e8eef2e8cfd89f847199cf617a6398 Jan 31 09:44:18 crc kubenswrapper[4830]: I0131 09:44:18.880950 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9vwrf"] Jan 31 09:44:19 crc kubenswrapper[4830]: I0131 09:44:19.870311 4830 generic.go:334] "Generic (PLEG): container finished" podID="95197978-4b38-45a8-b6f8-f02110ee335f" containerID="ac7555b61439921375096882d9918f5b8a348de13801e8a218db956b9ca4d8f0" exitCode=0 Jan 31 09:44:19 crc kubenswrapper[4830]: I0131 09:44:19.870409 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vwrf" event={"ID":"95197978-4b38-45a8-b6f8-f02110ee335f","Type":"ContainerDied","Data":"ac7555b61439921375096882d9918f5b8a348de13801e8a218db956b9ca4d8f0"} Jan 31 09:44:19 crc kubenswrapper[4830]: I0131 09:44:19.870667 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vwrf" event={"ID":"95197978-4b38-45a8-b6f8-f02110ee335f","Type":"ContainerStarted","Data":"8308aa5bed7154c32c102970a0fa4f2414e8eef2e8cfd89f847199cf617a6398"} Jan 31 09:44:19 crc kubenswrapper[4830]: I0131 09:44:19.873602 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 09:44:20 crc kubenswrapper[4830]: I0131 09:44:20.893467 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vwrf" event={"ID":"95197978-4b38-45a8-b6f8-f02110ee335f","Type":"ContainerStarted","Data":"48c469b565f3eac17ad6e7380eb4a61621a838a7be8bd14576e7a8ef4e9a8a64"} Jan 31 09:44:29 crc kubenswrapper[4830]: I0131 09:44:29.990388 4830 generic.go:334] "Generic (PLEG): container finished" podID="95197978-4b38-45a8-b6f8-f02110ee335f" containerID="48c469b565f3eac17ad6e7380eb4a61621a838a7be8bd14576e7a8ef4e9a8a64" exitCode=0 Jan 31 09:44:29 crc kubenswrapper[4830]: I0131 09:44:29.990466 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vwrf" event={"ID":"95197978-4b38-45a8-b6f8-f02110ee335f","Type":"ContainerDied","Data":"48c469b565f3eac17ad6e7380eb4a61621a838a7be8bd14576e7a8ef4e9a8a64"} Jan 31 09:44:31 crc kubenswrapper[4830]: I0131 09:44:31.005247 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vwrf" event={"ID":"95197978-4b38-45a8-b6f8-f02110ee335f","Type":"ContainerStarted","Data":"754694f593084df61c7cfebac5d450725bce6a450d260794287cebf87db707ca"} Jan 31 09:44:31 crc kubenswrapper[4830]: I0131 09:44:31.042898 4830 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/redhat-operators-9vwrf" podStartSLOduration=3.254907523 podStartE2EDuration="14.042872522s" podCreationTimestamp="2026-01-31 09:44:17 +0000 UTC" firstStartedPulling="2026-01-31 09:44:19.873305305 +0000 UTC m=+2604.366667737" lastFinishedPulling="2026-01-31 09:44:30.661270294 +0000 UTC m=+2615.154632736" observedRunningTime="2026-01-31 09:44:31.031187284 +0000 UTC m=+2615.524549726" watchObservedRunningTime="2026-01-31 09:44:31.042872522 +0000 UTC m=+2615.536234964" Jan 31 09:44:38 crc kubenswrapper[4830]: I0131 09:44:38.294210 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9vwrf" Jan 31 09:44:38 crc kubenswrapper[4830]: I0131 09:44:38.294799 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9vwrf" Jan 31 09:44:39 crc kubenswrapper[4830]: I0131 09:44:39.349507 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9vwrf" podUID="95197978-4b38-45a8-b6f8-f02110ee335f" containerName="registry-server" probeResult="failure" output=< Jan 31 09:44:39 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 09:44:39 crc kubenswrapper[4830]: > Jan 31 09:44:44 crc kubenswrapper[4830]: I0131 09:44:44.352891 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:44:44 crc kubenswrapper[4830]: I0131 09:44:44.353630 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:44:48 crc kubenswrapper[4830]: I0131 09:44:48.350394 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9vwrf" Jan 31 09:44:48 crc kubenswrapper[4830]: I0131 09:44:48.409987 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9vwrf" Jan 31 09:44:49 crc kubenswrapper[4830]: I0131 09:44:49.167808 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9vwrf"] Jan 31 09:44:50 crc kubenswrapper[4830]: I0131 09:44:50.218075 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9vwrf" podUID="95197978-4b38-45a8-b6f8-f02110ee335f" containerName="registry-server" containerID="cri-o://754694f593084df61c7cfebac5d450725bce6a450d260794287cebf87db707ca" gracePeriod=2 Jan 31 09:44:50 crc kubenswrapper[4830]: I0131 09:44:50.796710 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9vwrf" Jan 31 09:44:50 crc kubenswrapper[4830]: I0131 09:44:50.865086 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95197978-4b38-45a8-b6f8-f02110ee335f-utilities\") pod \"95197978-4b38-45a8-b6f8-f02110ee335f\" (UID: \"95197978-4b38-45a8-b6f8-f02110ee335f\") " Jan 31 09:44:50 crc kubenswrapper[4830]: I0131 09:44:50.865264 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29h8w\" (UniqueName: \"kubernetes.io/projected/95197978-4b38-45a8-b6f8-f02110ee335f-kube-api-access-29h8w\") pod \"95197978-4b38-45a8-b6f8-f02110ee335f\" (UID: \"95197978-4b38-45a8-b6f8-f02110ee335f\") " Jan 31 09:44:50 crc kubenswrapper[4830]: I0131 09:44:50.865315 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95197978-4b38-45a8-b6f8-f02110ee335f-catalog-content\") pod \"95197978-4b38-45a8-b6f8-f02110ee335f\" (UID: \"95197978-4b38-45a8-b6f8-f02110ee335f\") " Jan 31 09:44:50 crc kubenswrapper[4830]: I0131 09:44:50.866705 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95197978-4b38-45a8-b6f8-f02110ee335f-utilities" (OuterVolumeSpecName: "utilities") pod "95197978-4b38-45a8-b6f8-f02110ee335f" (UID: "95197978-4b38-45a8-b6f8-f02110ee335f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:44:50 crc kubenswrapper[4830]: I0131 09:44:50.875981 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95197978-4b38-45a8-b6f8-f02110ee335f-kube-api-access-29h8w" (OuterVolumeSpecName: "kube-api-access-29h8w") pod "95197978-4b38-45a8-b6f8-f02110ee335f" (UID: "95197978-4b38-45a8-b6f8-f02110ee335f"). InnerVolumeSpecName "kube-api-access-29h8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:44:50 crc kubenswrapper[4830]: I0131 09:44:50.969610 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95197978-4b38-45a8-b6f8-f02110ee335f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:44:50 crc kubenswrapper[4830]: I0131 09:44:50.969985 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29h8w\" (UniqueName: \"kubernetes.io/projected/95197978-4b38-45a8-b6f8-f02110ee335f-kube-api-access-29h8w\") on node \"crc\" DevicePath \"\"" Jan 31 09:44:51 crc kubenswrapper[4830]: I0131 09:44:51.018969 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95197978-4b38-45a8-b6f8-f02110ee335f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95197978-4b38-45a8-b6f8-f02110ee335f" (UID: "95197978-4b38-45a8-b6f8-f02110ee335f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:44:51 crc kubenswrapper[4830]: I0131 09:44:51.072439 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95197978-4b38-45a8-b6f8-f02110ee335f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:44:51 crc kubenswrapper[4830]: I0131 09:44:51.230850 4830 generic.go:334] "Generic (PLEG): container finished" podID="95197978-4b38-45a8-b6f8-f02110ee335f" containerID="754694f593084df61c7cfebac5d450725bce6a450d260794287cebf87db707ca" exitCode=0 Jan 31 09:44:51 crc kubenswrapper[4830]: I0131 09:44:51.230918 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vwrf" event={"ID":"95197978-4b38-45a8-b6f8-f02110ee335f","Type":"ContainerDied","Data":"754694f593084df61c7cfebac5d450725bce6a450d260794287cebf87db707ca"} Jan 31 09:44:51 crc kubenswrapper[4830]: I0131 09:44:51.230962 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vwrf" event={"ID":"95197978-4b38-45a8-b6f8-f02110ee335f","Type":"ContainerDied","Data":"8308aa5bed7154c32c102970a0fa4f2414e8eef2e8cfd89f847199cf617a6398"} Jan 31 09:44:51 crc kubenswrapper[4830]: I0131 09:44:51.230969 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9vwrf" Jan 31 09:44:51 crc kubenswrapper[4830]: I0131 09:44:51.230988 4830 scope.go:117] "RemoveContainer" containerID="754694f593084df61c7cfebac5d450725bce6a450d260794287cebf87db707ca" Jan 31 09:44:51 crc kubenswrapper[4830]: I0131 09:44:51.266135 4830 scope.go:117] "RemoveContainer" containerID="48c469b565f3eac17ad6e7380eb4a61621a838a7be8bd14576e7a8ef4e9a8a64" Jan 31 09:44:51 crc kubenswrapper[4830]: I0131 09:44:51.279817 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9vwrf"] Jan 31 09:44:51 crc kubenswrapper[4830]: I0131 09:44:51.290256 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9vwrf"] Jan 31 09:44:51 crc kubenswrapper[4830]: I0131 09:44:51.295448 4830 scope.go:117] "RemoveContainer" containerID="ac7555b61439921375096882d9918f5b8a348de13801e8a218db956b9ca4d8f0" Jan 31 09:44:51 crc kubenswrapper[4830]: I0131 09:44:51.356376 4830 scope.go:117] "RemoveContainer" containerID="754694f593084df61c7cfebac5d450725bce6a450d260794287cebf87db707ca" Jan 31 09:44:51 crc kubenswrapper[4830]: E0131 09:44:51.356973 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"754694f593084df61c7cfebac5d450725bce6a450d260794287cebf87db707ca\": container with ID starting with 754694f593084df61c7cfebac5d450725bce6a450d260794287cebf87db707ca not found: ID does not exist" containerID="754694f593084df61c7cfebac5d450725bce6a450d260794287cebf87db707ca" Jan 31 09:44:51 crc kubenswrapper[4830]: I0131 09:44:51.357038 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"754694f593084df61c7cfebac5d450725bce6a450d260794287cebf87db707ca"} err="failed to get container status \"754694f593084df61c7cfebac5d450725bce6a450d260794287cebf87db707ca\": rpc error: code = NotFound desc = could not find container \"754694f593084df61c7cfebac5d450725bce6a450d260794287cebf87db707ca\": container with ID starting with 754694f593084df61c7cfebac5d450725bce6a450d260794287cebf87db707ca not found: ID does not exist" Jan 31 09:44:51 crc 
kubenswrapper[4830]: I0131 09:44:51.357072 4830 scope.go:117] "RemoveContainer" containerID="48c469b565f3eac17ad6e7380eb4a61621a838a7be8bd14576e7a8ef4e9a8a64" Jan 31 09:44:51 crc kubenswrapper[4830]: E0131 09:44:51.357570 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48c469b565f3eac17ad6e7380eb4a61621a838a7be8bd14576e7a8ef4e9a8a64\": container with ID starting with 48c469b565f3eac17ad6e7380eb4a61621a838a7be8bd14576e7a8ef4e9a8a64 not found: ID does not exist" containerID="48c469b565f3eac17ad6e7380eb4a61621a838a7be8bd14576e7a8ef4e9a8a64" Jan 31 09:44:51 crc kubenswrapper[4830]: I0131 09:44:51.357611 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48c469b565f3eac17ad6e7380eb4a61621a838a7be8bd14576e7a8ef4e9a8a64"} err="failed to get container status \"48c469b565f3eac17ad6e7380eb4a61621a838a7be8bd14576e7a8ef4e9a8a64\": rpc error: code = NotFound desc = could not find container \"48c469b565f3eac17ad6e7380eb4a61621a838a7be8bd14576e7a8ef4e9a8a64\": container with ID starting with 48c469b565f3eac17ad6e7380eb4a61621a838a7be8bd14576e7a8ef4e9a8a64 not found: ID does not exist" Jan 31 09:44:51 crc kubenswrapper[4830]: I0131 09:44:51.357652 4830 scope.go:117] "RemoveContainer" containerID="ac7555b61439921375096882d9918f5b8a348de13801e8a218db956b9ca4d8f0" Jan 31 09:44:51 crc kubenswrapper[4830]: E0131 09:44:51.358585 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac7555b61439921375096882d9918f5b8a348de13801e8a218db956b9ca4d8f0\": container with ID starting with ac7555b61439921375096882d9918f5b8a348de13801e8a218db956b9ca4d8f0 not found: ID does not exist" containerID="ac7555b61439921375096882d9918f5b8a348de13801e8a218db956b9ca4d8f0" Jan 31 09:44:51 crc kubenswrapper[4830]: I0131 09:44:51.358618 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac7555b61439921375096882d9918f5b8a348de13801e8a218db956b9ca4d8f0"} err="failed to get container status \"ac7555b61439921375096882d9918f5b8a348de13801e8a218db956b9ca4d8f0\": rpc error: code = NotFound desc = could not find container \"ac7555b61439921375096882d9918f5b8a348de13801e8a218db956b9ca4d8f0\": container with ID starting with ac7555b61439921375096882d9918f5b8a348de13801e8a218db956b9ca4d8f0 not found: ID does not exist" Jan 31 09:44:52 crc kubenswrapper[4830]: I0131 09:44:52.265716 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95197978-4b38-45a8-b6f8-f02110ee335f" path="/var/lib/kubelet/pods/95197978-4b38-45a8-b6f8-f02110ee335f/volumes" Jan 31 09:45:00 crc kubenswrapper[4830]: I0131 09:45:00.173495 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497545-s6jfx"] Jan 31 09:45:00 crc kubenswrapper[4830]: E0131 09:45:00.176243 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95197978-4b38-45a8-b6f8-f02110ee335f" containerName="registry-server" Jan 31 09:45:00 crc kubenswrapper[4830]: I0131 09:45:00.176406 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="95197978-4b38-45a8-b6f8-f02110ee335f" containerName="registry-server" Jan 31 09:45:00 crc kubenswrapper[4830]: E0131 09:45:00.176536 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95197978-4b38-45a8-b6f8-f02110ee335f" containerName="extract-utilities" Jan 31 09:45:00 crc kubenswrapper[4830]: I0131 
09:45:00.176608 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="95197978-4b38-45a8-b6f8-f02110ee335f" containerName="extract-utilities" Jan 31 09:45:00 crc kubenswrapper[4830]: E0131 09:45:00.176770 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95197978-4b38-45a8-b6f8-f02110ee335f" containerName="extract-content" Jan 31 09:45:00 crc kubenswrapper[4830]: I0131 09:45:00.176864 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="95197978-4b38-45a8-b6f8-f02110ee335f" containerName="extract-content" Jan 31 09:45:00 crc kubenswrapper[4830]: I0131 09:45:00.177371 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="95197978-4b38-45a8-b6f8-f02110ee335f" containerName="registry-server" Jan 31 09:45:00 crc kubenswrapper[4830]: I0131 09:45:00.179441 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497545-s6jfx" Jan 31 09:45:00 crc kubenswrapper[4830]: I0131 09:45:00.183919 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 09:45:00 crc kubenswrapper[4830]: I0131 09:45:00.184143 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 09:45:00 crc kubenswrapper[4830]: I0131 09:45:00.191538 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497545-s6jfx"] Jan 31 09:45:00 crc kubenswrapper[4830]: I0131 09:45:00.321720 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzqx4\" (UniqueName: \"kubernetes.io/projected/fb5cecb5-4005-43e1-bf40-b620150d746c-kube-api-access-nzqx4\") pod \"collect-profiles-29497545-s6jfx\" (UID: \"fb5cecb5-4005-43e1-bf40-b620150d746c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497545-s6jfx" Jan 31 09:45:00 crc kubenswrapper[4830]: I0131 09:45:00.322175 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb5cecb5-4005-43e1-bf40-b620150d746c-config-volume\") pod \"collect-profiles-29497545-s6jfx\" (UID: \"fb5cecb5-4005-43e1-bf40-b620150d746c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497545-s6jfx" Jan 31 09:45:00 crc kubenswrapper[4830]: I0131 09:45:00.322237 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb5cecb5-4005-43e1-bf40-b620150d746c-secret-volume\") pod \"collect-profiles-29497545-s6jfx\" (UID: \"fb5cecb5-4005-43e1-bf40-b620150d746c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497545-s6jfx" Jan 31 09:45:00 crc kubenswrapper[4830]: I0131 09:45:00.424897 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzqx4\" (UniqueName: \"kubernetes.io/projected/fb5cecb5-4005-43e1-bf40-b620150d746c-kube-api-access-nzqx4\") pod \"collect-profiles-29497545-s6jfx\" (UID: \"fb5cecb5-4005-43e1-bf40-b620150d746c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497545-s6jfx" Jan 31 09:45:00 crc kubenswrapper[4830]: I0131 09:45:00.424981 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/fb5cecb5-4005-43e1-bf40-b620150d746c-config-volume\") pod \"collect-profiles-29497545-s6jfx\" (UID: \"fb5cecb5-4005-43e1-bf40-b620150d746c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497545-s6jfx" Jan 31 09:45:00 crc kubenswrapper[4830]: I0131 09:45:00.425039 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb5cecb5-4005-43e1-bf40-b620150d746c-secret-volume\") pod \"collect-profiles-29497545-s6jfx\" (UID: \"fb5cecb5-4005-43e1-bf40-b620150d746c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497545-s6jfx" Jan 31 09:45:00 crc kubenswrapper[4830]: I0131 09:45:00.426320 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb5cecb5-4005-43e1-bf40-b620150d746c-config-volume\") pod \"collect-profiles-29497545-s6jfx\" (UID: \"fb5cecb5-4005-43e1-bf40-b620150d746c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497545-s6jfx" Jan 31 09:45:00 crc kubenswrapper[4830]: I0131 09:45:00.436704 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb5cecb5-4005-43e1-bf40-b620150d746c-secret-volume\") pod \"collect-profiles-29497545-s6jfx\" (UID: \"fb5cecb5-4005-43e1-bf40-b620150d746c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497545-s6jfx" Jan 31 09:45:00 crc kubenswrapper[4830]: I0131 09:45:00.447709 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzqx4\" (UniqueName: \"kubernetes.io/projected/fb5cecb5-4005-43e1-bf40-b620150d746c-kube-api-access-nzqx4\") pod \"collect-profiles-29497545-s6jfx\" (UID: \"fb5cecb5-4005-43e1-bf40-b620150d746c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497545-s6jfx" Jan 31 09:45:00 crc kubenswrapper[4830]: I0131 09:45:00.523308 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497545-s6jfx" Jan 31 09:45:01 crc kubenswrapper[4830]: I0131 09:45:01.116970 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497545-s6jfx"] Jan 31 09:45:01 crc kubenswrapper[4830]: I0131 09:45:01.358970 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497545-s6jfx" event={"ID":"fb5cecb5-4005-43e1-bf40-b620150d746c","Type":"ContainerStarted","Data":"1682852e71abcec8ed0688ab639b47fe5e23b0809a7fc14f0944dd7216651adc"} Jan 31 09:45:02 crc kubenswrapper[4830]: I0131 09:45:02.375366 4830 generic.go:334] "Generic (PLEG): container finished" podID="fb5cecb5-4005-43e1-bf40-b620150d746c" containerID="74f8b9685969693e5971df89538f336d0957a5b3af1f58cf4cefb74a71fa3b33" exitCode=0 Jan 31 09:45:02 crc kubenswrapper[4830]: I0131 09:45:02.376763 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497545-s6jfx" event={"ID":"fb5cecb5-4005-43e1-bf40-b620150d746c","Type":"ContainerDied","Data":"74f8b9685969693e5971df89538f336d0957a5b3af1f58cf4cefb74a71fa3b33"} Jan 31 09:45:03 crc kubenswrapper[4830]: I0131 09:45:03.924626 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497545-s6jfx" Jan 31 09:45:04 crc kubenswrapper[4830]: I0131 09:45:04.039297 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzqx4\" (UniqueName: \"kubernetes.io/projected/fb5cecb5-4005-43e1-bf40-b620150d746c-kube-api-access-nzqx4\") pod \"fb5cecb5-4005-43e1-bf40-b620150d746c\" (UID: \"fb5cecb5-4005-43e1-bf40-b620150d746c\") " Jan 31 09:45:04 crc kubenswrapper[4830]: I0131 09:45:04.039451 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb5cecb5-4005-43e1-bf40-b620150d746c-config-volume\") pod \"fb5cecb5-4005-43e1-bf40-b620150d746c\" (UID: \"fb5cecb5-4005-43e1-bf40-b620150d746c\") " Jan 31 09:45:04 crc kubenswrapper[4830]: I0131 09:45:04.039536 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb5cecb5-4005-43e1-bf40-b620150d746c-secret-volume\") pod \"fb5cecb5-4005-43e1-bf40-b620150d746c\" (UID: \"fb5cecb5-4005-43e1-bf40-b620150d746c\") " Jan 31 09:45:04 crc kubenswrapper[4830]: I0131 09:45:04.040302 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb5cecb5-4005-43e1-bf40-b620150d746c-config-volume" (OuterVolumeSpecName: "config-volume") pod "fb5cecb5-4005-43e1-bf40-b620150d746c" (UID: "fb5cecb5-4005-43e1-bf40-b620150d746c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:45:04 crc kubenswrapper[4830]: I0131 09:45:04.040952 4830 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb5cecb5-4005-43e1-bf40-b620150d746c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 09:45:04 crc kubenswrapper[4830]: I0131 09:45:04.048823 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb5cecb5-4005-43e1-bf40-b620150d746c-kube-api-access-nzqx4" (OuterVolumeSpecName: "kube-api-access-nzqx4") pod "fb5cecb5-4005-43e1-bf40-b620150d746c" (UID: "fb5cecb5-4005-43e1-bf40-b620150d746c"). InnerVolumeSpecName "kube-api-access-nzqx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:45:04 crc kubenswrapper[4830]: I0131 09:45:04.051770 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb5cecb5-4005-43e1-bf40-b620150d746c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fb5cecb5-4005-43e1-bf40-b620150d746c" (UID: "fb5cecb5-4005-43e1-bf40-b620150d746c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:45:04 crc kubenswrapper[4830]: I0131 09:45:04.144393 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzqx4\" (UniqueName: \"kubernetes.io/projected/fb5cecb5-4005-43e1-bf40-b620150d746c-kube-api-access-nzqx4\") on node \"crc\" DevicePath \"\"" Jan 31 09:45:04 crc kubenswrapper[4830]: I0131 09:45:04.144460 4830 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb5cecb5-4005-43e1-bf40-b620150d746c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 09:45:04 crc kubenswrapper[4830]: I0131 09:45:04.409674 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497545-s6jfx" event={"ID":"fb5cecb5-4005-43e1-bf40-b620150d746c","Type":"ContainerDied","Data":"1682852e71abcec8ed0688ab639b47fe5e23b0809a7fc14f0944dd7216651adc"} Jan 31 09:45:04 crc kubenswrapper[4830]: I0131 09:45:04.409760 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1682852e71abcec8ed0688ab639b47fe5e23b0809a7fc14f0944dd7216651adc" Jan 31 09:45:04 crc kubenswrapper[4830]: I0131 09:45:04.409782 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497545-s6jfx" Jan 31 09:45:05 crc kubenswrapper[4830]: I0131 09:45:05.029399 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497500-66dl8"] Jan 31 09:45:05 crc kubenswrapper[4830]: I0131 09:45:05.041343 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497500-66dl8"] Jan 31 09:45:06 crc kubenswrapper[4830]: I0131 09:45:06.269772 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc74377f-6986-4156-9c2b-7a003f07d6ff" path="/var/lib/kubelet/pods/dc74377f-6986-4156-9c2b-7a003f07d6ff/volumes" Jan 31 09:45:14 crc kubenswrapper[4830]: I0131 09:45:14.118901 4830 scope.go:117] "RemoveContainer" containerID="2325efe2d12a50cd38de3263bca44ba166a7c07c01d65a0da7aaab18fd9d7718" Jan 31 09:45:14 crc kubenswrapper[4830]: I0131 09:45:14.352750 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:45:14 crc kubenswrapper[4830]: I0131 09:45:14.353325 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:45:44 crc kubenswrapper[4830]: I0131 09:45:44.353257 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:45:44 crc kubenswrapper[4830]: I0131 09:45:44.354302 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:45:44 crc kubenswrapper[4830]: I0131 09:45:44.354392 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 09:45:44 crc kubenswrapper[4830]: I0131 09:45:44.355862 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a20ea2322cd4062ecdd9c286d63df058ebc8744e0a83dc5a6a03d87d2b70305c"} pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 09:45:44 crc kubenswrapper[4830]: I0131 09:45:44.355978 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" containerID="cri-o://a20ea2322cd4062ecdd9c286d63df058ebc8744e0a83dc5a6a03d87d2b70305c" gracePeriod=600 Jan 31 09:45:45 crc kubenswrapper[4830]: I0131 09:45:45.082578 4830 generic.go:334] "Generic (PLEG): container finished" podID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerID="a20ea2322cd4062ecdd9c286d63df058ebc8744e0a83dc5a6a03d87d2b70305c" exitCode=0 Jan 31 09:45:45 crc kubenswrapper[4830]: I0131 09:45:45.082661 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerDied","Data":"a20ea2322cd4062ecdd9c286d63df058ebc8744e0a83dc5a6a03d87d2b70305c"} Jan 31 09:45:45 crc kubenswrapper[4830]: I0131 09:45:45.083060 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerStarted","Data":"4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9"} Jan 31 09:45:45 crc kubenswrapper[4830]: I0131 09:45:45.083089 4830 scope.go:117] "RemoveContainer" containerID="5a8f61ac813e58f2725a65e088faabbabc4f4a08bd1c263d53e2f3530d252de8" Jan 31 09:46:38 crc kubenswrapper[4830]: I0131 09:46:38.469346 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cn59l"] Jan 31 09:46:38 crc kubenswrapper[4830]: E0131 09:46:38.470870 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb5cecb5-4005-43e1-bf40-b620150d746c" containerName="collect-profiles" Jan 31 09:46:38 crc kubenswrapper[4830]: I0131 09:46:38.470891 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb5cecb5-4005-43e1-bf40-b620150d746c" containerName="collect-profiles" Jan 31 09:46:38 crc kubenswrapper[4830]: I0131 09:46:38.471153 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb5cecb5-4005-43e1-bf40-b620150d746c" containerName="collect-profiles" Jan 31 09:46:38 crc kubenswrapper[4830]: I0131 09:46:38.473291 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cn59l" Jan 31 09:46:38 crc kubenswrapper[4830]: I0131 09:46:38.483899 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cn59l"] Jan 31 09:46:38 crc kubenswrapper[4830]: I0131 09:46:38.564116 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87d309ac-705c-4015-8d72-fc05a47ef5f0-utilities\") pod \"redhat-marketplace-cn59l\" (UID: \"87d309ac-705c-4015-8d72-fc05a47ef5f0\") " pod="openshift-marketplace/redhat-marketplace-cn59l" Jan 31 09:46:38 crc kubenswrapper[4830]: I0131 09:46:38.564538 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fddmq\" (UniqueName: \"kubernetes.io/projected/87d309ac-705c-4015-8d72-fc05a47ef5f0-kube-api-access-fddmq\") pod \"redhat-marketplace-cn59l\" (UID: \"87d309ac-705c-4015-8d72-fc05a47ef5f0\") " pod="openshift-marketplace/redhat-marketplace-cn59l" Jan 31 09:46:38 crc kubenswrapper[4830]: I0131 09:46:38.564694 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87d309ac-705c-4015-8d72-fc05a47ef5f0-catalog-content\") pod \"redhat-marketplace-cn59l\" (UID: \"87d309ac-705c-4015-8d72-fc05a47ef5f0\") " pod="openshift-marketplace/redhat-marketplace-cn59l" Jan 31 09:46:38 crc kubenswrapper[4830]: I0131 09:46:38.667098 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fddmq\" (UniqueName: \"kubernetes.io/projected/87d309ac-705c-4015-8d72-fc05a47ef5f0-kube-api-access-fddmq\") pod \"redhat-marketplace-cn59l\" (UID: \"87d309ac-705c-4015-8d72-fc05a47ef5f0\") " pod="openshift-marketplace/redhat-marketplace-cn59l" Jan 31 09:46:38 crc kubenswrapper[4830]: I0131 09:46:38.667230 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87d309ac-705c-4015-8d72-fc05a47ef5f0-catalog-content\") pod \"redhat-marketplace-cn59l\" (UID: \"87d309ac-705c-4015-8d72-fc05a47ef5f0\") " pod="openshift-marketplace/redhat-marketplace-cn59l" Jan 31 09:46:38 crc kubenswrapper[4830]: I0131 09:46:38.667342 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87d309ac-705c-4015-8d72-fc05a47ef5f0-utilities\") pod \"redhat-marketplace-cn59l\" (UID: \"87d309ac-705c-4015-8d72-fc05a47ef5f0\") " pod="openshift-marketplace/redhat-marketplace-cn59l" Jan 31 09:46:38 crc kubenswrapper[4830]: I0131 09:46:38.668029 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87d309ac-705c-4015-8d72-fc05a47ef5f0-catalog-content\") pod \"redhat-marketplace-cn59l\" (UID: \"87d309ac-705c-4015-8d72-fc05a47ef5f0\") " pod="openshift-marketplace/redhat-marketplace-cn59l" Jan 31 09:46:38 crc kubenswrapper[4830]: I0131 09:46:38.668031 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87d309ac-705c-4015-8d72-fc05a47ef5f0-utilities\") pod \"redhat-marketplace-cn59l\" (UID: \"87d309ac-705c-4015-8d72-fc05a47ef5f0\") " pod="openshift-marketplace/redhat-marketplace-cn59l" Jan 31 09:46:38 crc kubenswrapper[4830]: I0131 09:46:38.689945 4830 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fddmq\" (UniqueName: \"kubernetes.io/projected/87d309ac-705c-4015-8d72-fc05a47ef5f0-kube-api-access-fddmq\") pod \"redhat-marketplace-cn59l\" (UID: \"87d309ac-705c-4015-8d72-fc05a47ef5f0\") " pod="openshift-marketplace/redhat-marketplace-cn59l" Jan 31 09:46:38 crc kubenswrapper[4830]: I0131 09:46:38.799518 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cn59l" Jan 31 09:46:39 crc kubenswrapper[4830]: I0131 09:46:39.375598 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cn59l"] Jan 31 09:46:39 crc kubenswrapper[4830]: I0131 09:46:39.742417 4830 generic.go:334] "Generic (PLEG): container finished" podID="87d309ac-705c-4015-8d72-fc05a47ef5f0" containerID="c573fc405ed388e10f9f7e6c1983e5619c78e5d72b7e16dddf83b3f8da74ff3d" exitCode=0 Jan 31 09:46:39 crc kubenswrapper[4830]: I0131 09:46:39.742478 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cn59l" event={"ID":"87d309ac-705c-4015-8d72-fc05a47ef5f0","Type":"ContainerDied","Data":"c573fc405ed388e10f9f7e6c1983e5619c78e5d72b7e16dddf83b3f8da74ff3d"} Jan 31 09:46:39 crc kubenswrapper[4830]: I0131 09:46:39.742528 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cn59l" event={"ID":"87d309ac-705c-4015-8d72-fc05a47ef5f0","Type":"ContainerStarted","Data":"5f5a4dbe3599f1ec1d954c06b6426dc37cc20ec5dd5d5d0050ab3e89026dc1e0"} Jan 31 09:46:40 crc kubenswrapper[4830]: I0131 09:46:40.756012 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cn59l" event={"ID":"87d309ac-705c-4015-8d72-fc05a47ef5f0","Type":"ContainerStarted","Data":"3141ca280bcfc51462a598540692227d1c5ce51d6fd3f46ffd5f01f3e0edbeaa"} Jan 31 09:46:41 crc kubenswrapper[4830]: I0131 09:46:41.770841 4830 generic.go:334] "Generic (PLEG): container finished" podID="87d309ac-705c-4015-8d72-fc05a47ef5f0" containerID="3141ca280bcfc51462a598540692227d1c5ce51d6fd3f46ffd5f01f3e0edbeaa" exitCode=0 Jan 31 09:46:41 crc kubenswrapper[4830]: I0131 09:46:41.770911 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cn59l" event={"ID":"87d309ac-705c-4015-8d72-fc05a47ef5f0","Type":"ContainerDied","Data":"3141ca280bcfc51462a598540692227d1c5ce51d6fd3f46ffd5f01f3e0edbeaa"} Jan 31 09:46:42 crc kubenswrapper[4830]: I0131 09:46:42.787037 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cn59l" event={"ID":"87d309ac-705c-4015-8d72-fc05a47ef5f0","Type":"ContainerStarted","Data":"23179245559ba2a0a6fa2e1152614caca84da826de9a3eff85aca4d1361845f0"} Jan 31 09:46:42 crc kubenswrapper[4830]: I0131 09:46:42.823633 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cn59l" podStartSLOduration=2.278845471 podStartE2EDuration="4.823608384s" podCreationTimestamp="2026-01-31 09:46:38 +0000 UTC" firstStartedPulling="2026-01-31 09:46:39.745475188 +0000 UTC m=+2744.238837630" lastFinishedPulling="2026-01-31 09:46:42.290238101 +0000 UTC m=+2746.783600543" observedRunningTime="2026-01-31 09:46:42.813018437 +0000 UTC m=+2747.306380879" watchObservedRunningTime="2026-01-31 09:46:42.823608384 +0000 UTC m=+2747.316970826" Jan 31 09:46:48 crc kubenswrapper[4830]: I0131 09:46:48.800961 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-cn59l" Jan 31 09:46:48 crc kubenswrapper[4830]: I0131 09:46:48.801543 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cn59l" Jan 31 09:46:48 crc kubenswrapper[4830]: I0131 09:46:48.862282 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cn59l" Jan 31 09:46:48 crc kubenswrapper[4830]: I0131 09:46:48.922896 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cn59l" Jan 31 09:46:49 crc kubenswrapper[4830]: I0131 09:46:49.118920 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cn59l"] Jan 31 09:46:50 crc kubenswrapper[4830]: I0131 09:46:50.870498 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cn59l" podUID="87d309ac-705c-4015-8d72-fc05a47ef5f0" containerName="registry-server" containerID="cri-o://23179245559ba2a0a6fa2e1152614caca84da826de9a3eff85aca4d1361845f0" gracePeriod=2 Jan 31 09:46:51 crc kubenswrapper[4830]: I0131 09:46:51.461763 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cn59l" Jan 31 09:46:51 crc kubenswrapper[4830]: I0131 09:46:51.535182 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87d309ac-705c-4015-8d72-fc05a47ef5f0-utilities\") pod \"87d309ac-705c-4015-8d72-fc05a47ef5f0\" (UID: \"87d309ac-705c-4015-8d72-fc05a47ef5f0\") " Jan 31 09:46:51 crc kubenswrapper[4830]: I0131 09:46:51.535517 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fddmq\" (UniqueName: \"kubernetes.io/projected/87d309ac-705c-4015-8d72-fc05a47ef5f0-kube-api-access-fddmq\") pod \"87d309ac-705c-4015-8d72-fc05a47ef5f0\" (UID: \"87d309ac-705c-4015-8d72-fc05a47ef5f0\") " Jan 31 09:46:51 crc kubenswrapper[4830]: I0131 09:46:51.535554 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87d309ac-705c-4015-8d72-fc05a47ef5f0-catalog-content\") pod \"87d309ac-705c-4015-8d72-fc05a47ef5f0\" (UID: \"87d309ac-705c-4015-8d72-fc05a47ef5f0\") " Jan 31 09:46:51 crc kubenswrapper[4830]: I0131 09:46:51.538523 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87d309ac-705c-4015-8d72-fc05a47ef5f0-utilities" (OuterVolumeSpecName: "utilities") pod "87d309ac-705c-4015-8d72-fc05a47ef5f0" (UID: "87d309ac-705c-4015-8d72-fc05a47ef5f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:46:51 crc kubenswrapper[4830]: I0131 09:46:51.542251 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87d309ac-705c-4015-8d72-fc05a47ef5f0-kube-api-access-fddmq" (OuterVolumeSpecName: "kube-api-access-fddmq") pod "87d309ac-705c-4015-8d72-fc05a47ef5f0" (UID: "87d309ac-705c-4015-8d72-fc05a47ef5f0"). InnerVolumeSpecName "kube-api-access-fddmq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:46:51 crc kubenswrapper[4830]: I0131 09:46:51.560646 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87d309ac-705c-4015-8d72-fc05a47ef5f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87d309ac-705c-4015-8d72-fc05a47ef5f0" (UID: "87d309ac-705c-4015-8d72-fc05a47ef5f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:46:51 crc kubenswrapper[4830]: I0131 09:46:51.638892 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fddmq\" (UniqueName: \"kubernetes.io/projected/87d309ac-705c-4015-8d72-fc05a47ef5f0-kube-api-access-fddmq\") on node \"crc\" DevicePath \"\"" Jan 31 09:46:51 crc kubenswrapper[4830]: I0131 09:46:51.638942 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87d309ac-705c-4015-8d72-fc05a47ef5f0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:46:51 crc kubenswrapper[4830]: I0131 09:46:51.638958 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87d309ac-705c-4015-8d72-fc05a47ef5f0-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:46:51 crc kubenswrapper[4830]: I0131 09:46:51.913935 4830 generic.go:334] "Generic (PLEG): container finished" podID="87d309ac-705c-4015-8d72-fc05a47ef5f0" containerID="23179245559ba2a0a6fa2e1152614caca84da826de9a3eff85aca4d1361845f0" exitCode=0 Jan 31 09:46:51 crc kubenswrapper[4830]: I0131 09:46:51.914001 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cn59l" event={"ID":"87d309ac-705c-4015-8d72-fc05a47ef5f0","Type":"ContainerDied","Data":"23179245559ba2a0a6fa2e1152614caca84da826de9a3eff85aca4d1361845f0"} Jan 31 09:46:51 crc kubenswrapper[4830]: I0131 09:46:51.914048 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cn59l" event={"ID":"87d309ac-705c-4015-8d72-fc05a47ef5f0","Type":"ContainerDied","Data":"5f5a4dbe3599f1ec1d954c06b6426dc37cc20ec5dd5d5d0050ab3e89026dc1e0"} Jan 31 09:46:51 crc kubenswrapper[4830]: I0131 09:46:51.914092 4830 scope.go:117] "RemoveContainer" containerID="23179245559ba2a0a6fa2e1152614caca84da826de9a3eff85aca4d1361845f0" Jan 31 09:46:51 crc kubenswrapper[4830]: I0131 09:46:51.914388 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cn59l" Jan 31 09:46:51 crc kubenswrapper[4830]: I0131 09:46:51.942003 4830 scope.go:117] "RemoveContainer" containerID="3141ca280bcfc51462a598540692227d1c5ce51d6fd3f46ffd5f01f3e0edbeaa" Jan 31 09:46:51 crc kubenswrapper[4830]: I0131 09:46:51.981857 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cn59l"] Jan 31 09:46:51 crc kubenswrapper[4830]: I0131 09:46:51.991346 4830 scope.go:117] "RemoveContainer" containerID="c573fc405ed388e10f9f7e6c1983e5619c78e5d72b7e16dddf83b3f8da74ff3d" Jan 31 09:46:51 crc kubenswrapper[4830]: I0131 09:46:51.995642 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cn59l"] Jan 31 09:46:52 crc kubenswrapper[4830]: I0131 09:46:52.056338 4830 scope.go:117] "RemoveContainer" containerID="23179245559ba2a0a6fa2e1152614caca84da826de9a3eff85aca4d1361845f0" Jan 31 09:46:52 crc kubenswrapper[4830]: E0131 09:46:52.056929 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23179245559ba2a0a6fa2e1152614caca84da826de9a3eff85aca4d1361845f0\": container with ID starting with 23179245559ba2a0a6fa2e1152614caca84da826de9a3eff85aca4d1361845f0 not found: ID does not exist" containerID="23179245559ba2a0a6fa2e1152614caca84da826de9a3eff85aca4d1361845f0" Jan 31 09:46:52 crc kubenswrapper[4830]: I0131 09:46:52.056976 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23179245559ba2a0a6fa2e1152614caca84da826de9a3eff85aca4d1361845f0"} err="failed to get container status \"23179245559ba2a0a6fa2e1152614caca84da826de9a3eff85aca4d1361845f0\": rpc error: code = NotFound desc = could not find container \"23179245559ba2a0a6fa2e1152614caca84da826de9a3eff85aca4d1361845f0\": container with ID starting with 23179245559ba2a0a6fa2e1152614caca84da826de9a3eff85aca4d1361845f0 not found: ID does not exist" Jan 31 09:46:52 crc kubenswrapper[4830]: I0131 09:46:52.057006 4830 scope.go:117] "RemoveContainer" containerID="3141ca280bcfc51462a598540692227d1c5ce51d6fd3f46ffd5f01f3e0edbeaa" Jan 31 09:46:52 crc kubenswrapper[4830]: E0131 09:46:52.057457 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3141ca280bcfc51462a598540692227d1c5ce51d6fd3f46ffd5f01f3e0edbeaa\": container with ID starting with 3141ca280bcfc51462a598540692227d1c5ce51d6fd3f46ffd5f01f3e0edbeaa not found: ID does not exist" containerID="3141ca280bcfc51462a598540692227d1c5ce51d6fd3f46ffd5f01f3e0edbeaa" Jan 31 09:46:52 crc kubenswrapper[4830]: I0131 09:46:52.057491 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3141ca280bcfc51462a598540692227d1c5ce51d6fd3f46ffd5f01f3e0edbeaa"} err="failed to get container status \"3141ca280bcfc51462a598540692227d1c5ce51d6fd3f46ffd5f01f3e0edbeaa\": rpc error: code = NotFound desc = could not find container \"3141ca280bcfc51462a598540692227d1c5ce51d6fd3f46ffd5f01f3e0edbeaa\": container with ID starting with 3141ca280bcfc51462a598540692227d1c5ce51d6fd3f46ffd5f01f3e0edbeaa not found: ID does not exist" Jan 31 09:46:52 crc kubenswrapper[4830]: I0131 09:46:52.057511 4830 scope.go:117] "RemoveContainer" containerID="c573fc405ed388e10f9f7e6c1983e5619c78e5d72b7e16dddf83b3f8da74ff3d" Jan 31 09:46:52 crc kubenswrapper[4830]: E0131 09:46:52.057848 4830 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c573fc405ed388e10f9f7e6c1983e5619c78e5d72b7e16dddf83b3f8da74ff3d\": container with ID starting with c573fc405ed388e10f9f7e6c1983e5619c78e5d72b7e16dddf83b3f8da74ff3d not found: ID does not exist" containerID="c573fc405ed388e10f9f7e6c1983e5619c78e5d72b7e16dddf83b3f8da74ff3d" Jan 31 09:46:52 crc kubenswrapper[4830]: I0131 09:46:52.057875 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c573fc405ed388e10f9f7e6c1983e5619c78e5d72b7e16dddf83b3f8da74ff3d"} err="failed to get container status \"c573fc405ed388e10f9f7e6c1983e5619c78e5d72b7e16dddf83b3f8da74ff3d\": rpc error: code = NotFound desc = could not find container \"c573fc405ed388e10f9f7e6c1983e5619c78e5d72b7e16dddf83b3f8da74ff3d\": container with ID starting with c573fc405ed388e10f9f7e6c1983e5619c78e5d72b7e16dddf83b3f8da74ff3d not found: ID does not exist" Jan 31 09:46:52 crc kubenswrapper[4830]: I0131 09:46:52.269066 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87d309ac-705c-4015-8d72-fc05a47ef5f0" path="/var/lib/kubelet/pods/87d309ac-705c-4015-8d72-fc05a47ef5f0/volumes" Jan 31 09:47:11 crc kubenswrapper[4830]: E0131 09:47:11.446328 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod464743a2_b75e_49de_9628_6c12d7c7f8b7.slice/crio-379dd8d56c3169561eb7b44f60782b10a04225476dce5f7b0f727449560cfe0e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod464743a2_b75e_49de_9628_6c12d7c7f8b7.slice/crio-conmon-379dd8d56c3169561eb7b44f60782b10a04225476dce5f7b0f727449560cfe0e.scope\": RecentStats: unable to find data in memory cache]" Jan 31 09:47:12 crc kubenswrapper[4830]: I0131 09:47:12.200190 4830 generic.go:334] "Generic (PLEG): container finished" podID="464743a2-b75e-49de-9628-6c12d7c7f8b7" containerID="379dd8d56c3169561eb7b44f60782b10a04225476dce5f7b0f727449560cfe0e" exitCode=0 Jan 31 09:47:12 crc kubenswrapper[4830]: I0131 09:47:12.200307 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt" event={"ID":"464743a2-b75e-49de-9628-6c12d7c7f8b7","Type":"ContainerDied","Data":"379dd8d56c3169561eb7b44f60782b10a04225476dce5f7b0f727449560cfe0e"} Jan 31 09:47:13 crc kubenswrapper[4830]: I0131 09:47:13.800868 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt" Jan 31 09:47:13 crc kubenswrapper[4830]: I0131 09:47:13.919604 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/464743a2-b75e-49de-9628-6c12d7c7f8b7-inventory\") pod \"464743a2-b75e-49de-9628-6c12d7c7f8b7\" (UID: \"464743a2-b75e-49de-9628-6c12d7c7f8b7\") " Jan 31 09:47:13 crc kubenswrapper[4830]: I0131 09:47:13.919972 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/464743a2-b75e-49de-9628-6c12d7c7f8b7-libvirt-secret-0\") pod \"464743a2-b75e-49de-9628-6c12d7c7f8b7\" (UID: \"464743a2-b75e-49de-9628-6c12d7c7f8b7\") " Jan 31 09:47:13 crc kubenswrapper[4830]: I0131 09:47:13.920509 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/464743a2-b75e-49de-9628-6c12d7c7f8b7-libvirt-combined-ca-bundle\") pod \"464743a2-b75e-49de-9628-6c12d7c7f8b7\" (UID: \"464743a2-b75e-49de-9628-6c12d7c7f8b7\") " Jan 31 09:47:13 crc kubenswrapper[4830]: I0131 09:47:13.920629 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt42p\" (UniqueName: \"kubernetes.io/projected/464743a2-b75e-49de-9628-6c12d7c7f8b7-kube-api-access-xt42p\") pod \"464743a2-b75e-49de-9628-6c12d7c7f8b7\" (UID: \"464743a2-b75e-49de-9628-6c12d7c7f8b7\") " Jan 31 09:47:13 crc kubenswrapper[4830]: I0131 09:47:13.920763 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/464743a2-b75e-49de-9628-6c12d7c7f8b7-ssh-key-openstack-edpm-ipam\") pod \"464743a2-b75e-49de-9628-6c12d7c7f8b7\" (UID: \"464743a2-b75e-49de-9628-6c12d7c7f8b7\") " Jan 31 09:47:13 crc kubenswrapper[4830]: I0131 09:47:13.925449 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/464743a2-b75e-49de-9628-6c12d7c7f8b7-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "464743a2-b75e-49de-9628-6c12d7c7f8b7" (UID: "464743a2-b75e-49de-9628-6c12d7c7f8b7"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:47:13 crc kubenswrapper[4830]: I0131 09:47:13.925927 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/464743a2-b75e-49de-9628-6c12d7c7f8b7-kube-api-access-xt42p" (OuterVolumeSpecName: "kube-api-access-xt42p") pod "464743a2-b75e-49de-9628-6c12d7c7f8b7" (UID: "464743a2-b75e-49de-9628-6c12d7c7f8b7"). InnerVolumeSpecName "kube-api-access-xt42p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:47:13 crc kubenswrapper[4830]: I0131 09:47:13.955774 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/464743a2-b75e-49de-9628-6c12d7c7f8b7-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "464743a2-b75e-49de-9628-6c12d7c7f8b7" (UID: "464743a2-b75e-49de-9628-6c12d7c7f8b7"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:47:13 crc kubenswrapper[4830]: I0131 09:47:13.957363 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/464743a2-b75e-49de-9628-6c12d7c7f8b7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "464743a2-b75e-49de-9628-6c12d7c7f8b7" (UID: "464743a2-b75e-49de-9628-6c12d7c7f8b7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:47:13 crc kubenswrapper[4830]: I0131 09:47:13.960059 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/464743a2-b75e-49de-9628-6c12d7c7f8b7-inventory" (OuterVolumeSpecName: "inventory") pod "464743a2-b75e-49de-9628-6c12d7c7f8b7" (UID: "464743a2-b75e-49de-9628-6c12d7c7f8b7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.024994 4830 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/464743a2-b75e-49de-9628-6c12d7c7f8b7-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.025033 4830 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/464743a2-b75e-49de-9628-6c12d7c7f8b7-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.025066 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt42p\" (UniqueName: \"kubernetes.io/projected/464743a2-b75e-49de-9628-6c12d7c7f8b7-kube-api-access-xt42p\") on node \"crc\" DevicePath \"\"" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.025077 4830 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/464743a2-b75e-49de-9628-6c12d7c7f8b7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.025090 4830 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/464743a2-b75e-49de-9628-6c12d7c7f8b7-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.221681 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt" event={"ID":"464743a2-b75e-49de-9628-6c12d7c7f8b7","Type":"ContainerDied","Data":"3d1a4bdbce826f0b234f5a2b934b0a0133b36b24d3569f9090e41309736fc9a3"} Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.221742 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d1a4bdbce826f0b234f5a2b934b0a0133b36b24d3569f9090e41309736fc9a3" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.221801 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-94xgt" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.329893 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k"] Jan 31 09:47:14 crc kubenswrapper[4830]: E0131 09:47:14.330499 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d309ac-705c-4015-8d72-fc05a47ef5f0" containerName="extract-utilities" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.330525 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d309ac-705c-4015-8d72-fc05a47ef5f0" containerName="extract-utilities" Jan 31 09:47:14 crc kubenswrapper[4830]: E0131 09:47:14.330546 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d309ac-705c-4015-8d72-fc05a47ef5f0" containerName="extract-content" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.330554 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d309ac-705c-4015-8d72-fc05a47ef5f0" containerName="extract-content" Jan 31 09:47:14 crc kubenswrapper[4830]: E0131 09:47:14.330567 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d309ac-705c-4015-8d72-fc05a47ef5f0" containerName="registry-server" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.330574 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d309ac-705c-4015-8d72-fc05a47ef5f0" containerName="registry-server" Jan 31 09:47:14 crc kubenswrapper[4830]: E0131 09:47:14.330595 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="464743a2-b75e-49de-9628-6c12d7c7f8b7" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.330601 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="464743a2-b75e-49de-9628-6c12d7c7f8b7" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.330920 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d309ac-705c-4015-8d72-fc05a47ef5f0" containerName="registry-server" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.330936 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="464743a2-b75e-49de-9628-6c12d7c7f8b7" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.331872 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.343421 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.343679 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.343913 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.344189 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.344324 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.344392 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.344586 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vd24j" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.347595 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k"] Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.436928 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.437024 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.437071 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.437098 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd267\" (UniqueName: \"kubernetes.io/projected/8081b2b1-7847-4223-a583-0f0251f2ef52-kube-api-access-xd267\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.437175 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.437264 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.437400 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.437476 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.437550 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.540965 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.541046 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.541086 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd267\" (UniqueName: \"kubernetes.io/projected/8081b2b1-7847-4223-a583-0f0251f2ef52-kube-api-access-xd267\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.541117 4830 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.541177 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.541274 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.541324 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.541378 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.541408 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.549625 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.562026 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.562479 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.562552 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.566647 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.568382 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.571489 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.583366 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd267\" (UniqueName: \"kubernetes.io/projected/8081b2b1-7847-4223-a583-0f0251f2ef52-kube-api-access-xd267\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.591780 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpd8k\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:14 crc kubenswrapper[4830]: I0131 09:47:14.659930 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" Jan 31 09:47:15 crc kubenswrapper[4830]: I0131 09:47:15.365138 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k"] Jan 31 09:47:16 crc kubenswrapper[4830]: I0131 09:47:16.247346 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" event={"ID":"8081b2b1-7847-4223-a583-0f0251f2ef52","Type":"ContainerStarted","Data":"d859abb5c2beff8cfe355879dd692d32998e93f4a7b65c977e0ac08363dff7df"} Jan 31 09:47:16 crc kubenswrapper[4830]: I0131 09:47:16.247979 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" event={"ID":"8081b2b1-7847-4223-a583-0f0251f2ef52","Type":"ContainerStarted","Data":"9700e7b750b7d98f39a922d3508bf8b026332dc562c3f4036f6c63e8c0a4311e"} Jan 31 09:47:16 crc kubenswrapper[4830]: I0131 09:47:16.279669 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" podStartSLOduration=1.875109068 podStartE2EDuration="2.279646519s" podCreationTimestamp="2026-01-31 09:47:14 +0000 UTC" firstStartedPulling="2026-01-31 09:47:15.369267097 +0000 UTC m=+2779.862629539" lastFinishedPulling="2026-01-31 09:47:15.773804548 +0000 UTC m=+2780.267166990" observedRunningTime="2026-01-31 09:47:16.266867641 +0000 UTC m=+2780.760230103" watchObservedRunningTime="2026-01-31 09:47:16.279646519 +0000 UTC m=+2780.773008951" Jan 31 09:47:42 crc kubenswrapper[4830]: I0131 09:47:42.883361 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-swr8r"] Jan 31 09:47:42 crc kubenswrapper[4830]: I0131 09:47:42.888462 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-swr8r" Jan 31 09:47:42 crc kubenswrapper[4830]: I0131 09:47:42.901260 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-swr8r"] Jan 31 09:47:42 crc kubenswrapper[4830]: I0131 09:47:42.949425 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f-catalog-content\") pod \"certified-operators-swr8r\" (UID: \"1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f\") " pod="openshift-marketplace/certified-operators-swr8r" Jan 31 09:47:42 crc kubenswrapper[4830]: I0131 09:47:42.949535 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f-utilities\") pod \"certified-operators-swr8r\" (UID: \"1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f\") " pod="openshift-marketplace/certified-operators-swr8r" Jan 31 09:47:42 crc kubenswrapper[4830]: I0131 09:47:42.949618 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq5rv\" (UniqueName: \"kubernetes.io/projected/1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f-kube-api-access-kq5rv\") pod \"certified-operators-swr8r\" (UID: \"1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f\") " pod="openshift-marketplace/certified-operators-swr8r" Jan 31 09:47:43 crc kubenswrapper[4830]: I0131 09:47:43.052611 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f-catalog-content\") pod \"certified-operators-swr8r\" (UID: \"1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f\") " pod="openshift-marketplace/certified-operators-swr8r" Jan 31 09:47:43 crc kubenswrapper[4830]: I0131 09:47:43.052678 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f-catalog-content\") pod \"certified-operators-swr8r\" (UID: \"1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f\") " pod="openshift-marketplace/certified-operators-swr8r" Jan 31 09:47:43 crc kubenswrapper[4830]: I0131 09:47:43.052872 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f-utilities\") pod \"certified-operators-swr8r\" (UID: \"1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f\") " pod="openshift-marketplace/certified-operators-swr8r" Jan 31 09:47:43 crc kubenswrapper[4830]: I0131 09:47:43.053017 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq5rv\" (UniqueName: \"kubernetes.io/projected/1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f-kube-api-access-kq5rv\") pod \"certified-operators-swr8r\" (UID: \"1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f\") " pod="openshift-marketplace/certified-operators-swr8r" Jan 31 09:47:43 crc kubenswrapper[4830]: I0131 09:47:43.054014 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f-utilities\") pod \"certified-operators-swr8r\" (UID: \"1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f\") " pod="openshift-marketplace/certified-operators-swr8r" Jan 31 09:47:43 crc kubenswrapper[4830]: I0131 09:47:43.077927 4830 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kq5rv\" (UniqueName: \"kubernetes.io/projected/1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f-kube-api-access-kq5rv\") pod \"certified-operators-swr8r\" (UID: \"1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f\") " pod="openshift-marketplace/certified-operators-swr8r" Jan 31 09:47:43 crc kubenswrapper[4830]: I0131 09:47:43.228906 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-swr8r" Jan 31 09:47:43 crc kubenswrapper[4830]: I0131 09:47:43.779237 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-swr8r"] Jan 31 09:47:44 crc kubenswrapper[4830]: I0131 09:47:44.353078 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:47:44 crc kubenswrapper[4830]: I0131 09:47:44.353455 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:47:44 crc kubenswrapper[4830]: I0131 09:47:44.549312 4830 generic.go:334] "Generic (PLEG): container finished" podID="1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f" containerID="d75b7276b78fdfb4c772ea4fe539d206d9e93b85929b3d3fad2bf62262335d27" exitCode=0 Jan 31 09:47:44 crc kubenswrapper[4830]: I0131 09:47:44.549410 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swr8r" event={"ID":"1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f","Type":"ContainerDied","Data":"d75b7276b78fdfb4c772ea4fe539d206d9e93b85929b3d3fad2bf62262335d27"} Jan 31 09:47:44 crc kubenswrapper[4830]: I0131 09:47:44.550187 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swr8r" event={"ID":"1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f","Type":"ContainerStarted","Data":"cfbaf3924d2c27a69f911e09932855d1166b246dde9d3f3f98be2ee1885a4af9"} Jan 31 09:47:45 crc kubenswrapper[4830]: I0131 09:47:45.561073 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swr8r" event={"ID":"1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f","Type":"ContainerStarted","Data":"39df002474b6fd01850c2c55039602e20cbfc656a58c50ca472fdb3b24b9749f"} Jan 31 09:47:47 crc kubenswrapper[4830]: I0131 09:47:47.601238 4830 generic.go:334] "Generic (PLEG): container finished" podID="1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f" containerID="39df002474b6fd01850c2c55039602e20cbfc656a58c50ca472fdb3b24b9749f" exitCode=0 Jan 31 09:47:47 crc kubenswrapper[4830]: I0131 09:47:47.601336 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swr8r" event={"ID":"1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f","Type":"ContainerDied","Data":"39df002474b6fd01850c2c55039602e20cbfc656a58c50ca472fdb3b24b9749f"} Jan 31 09:47:48 crc kubenswrapper[4830]: I0131 09:47:48.616473 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swr8r" event={"ID":"1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f","Type":"ContainerStarted","Data":"6c1bedaa60e3f5ac5548b323d4b87ca6042267cf4d65e290a9cd4fa07563de9c"} Jan 31 
09:47:48 crc kubenswrapper[4830]: I0131 09:47:48.645701 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-swr8r" podStartSLOduration=3.168234348 podStartE2EDuration="6.645675768s" podCreationTimestamp="2026-01-31 09:47:42 +0000 UTC" firstStartedPulling="2026-01-31 09:47:44.551178929 +0000 UTC m=+2809.044541371" lastFinishedPulling="2026-01-31 09:47:48.028620349 +0000 UTC m=+2812.521982791" observedRunningTime="2026-01-31 09:47:48.640147573 +0000 UTC m=+2813.133510025" watchObservedRunningTime="2026-01-31 09:47:48.645675768 +0000 UTC m=+2813.139038210" Jan 31 09:47:48 crc kubenswrapper[4830]: I0131 09:47:48.673741 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-54z4x"] Jan 31 09:47:48 crc kubenswrapper[4830]: I0131 09:47:48.676905 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-54z4x" Jan 31 09:47:48 crc kubenswrapper[4830]: I0131 09:47:48.688856 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-54z4x"] Jan 31 09:47:48 crc kubenswrapper[4830]: I0131 09:47:48.735345 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j8d7\" (UniqueName: \"kubernetes.io/projected/6edcbd62-edb0-4d2b-9dff-3cc7889c16f2-kube-api-access-2j8d7\") pod \"community-operators-54z4x\" (UID: \"6edcbd62-edb0-4d2b-9dff-3cc7889c16f2\") " pod="openshift-marketplace/community-operators-54z4x" Jan 31 09:47:48 crc kubenswrapper[4830]: I0131 09:47:48.735425 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6edcbd62-edb0-4d2b-9dff-3cc7889c16f2-catalog-content\") pod \"community-operators-54z4x\" (UID: \"6edcbd62-edb0-4d2b-9dff-3cc7889c16f2\") " pod="openshift-marketplace/community-operators-54z4x" Jan 31 09:47:48 crc kubenswrapper[4830]: I0131 09:47:48.735473 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6edcbd62-edb0-4d2b-9dff-3cc7889c16f2-utilities\") pod \"community-operators-54z4x\" (UID: \"6edcbd62-edb0-4d2b-9dff-3cc7889c16f2\") " pod="openshift-marketplace/community-operators-54z4x" Jan 31 09:47:48 crc kubenswrapper[4830]: I0131 09:47:48.840753 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j8d7\" (UniqueName: \"kubernetes.io/projected/6edcbd62-edb0-4d2b-9dff-3cc7889c16f2-kube-api-access-2j8d7\") pod \"community-operators-54z4x\" (UID: \"6edcbd62-edb0-4d2b-9dff-3cc7889c16f2\") " pod="openshift-marketplace/community-operators-54z4x" Jan 31 09:47:48 crc kubenswrapper[4830]: I0131 09:47:48.841118 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6edcbd62-edb0-4d2b-9dff-3cc7889c16f2-catalog-content\") pod \"community-operators-54z4x\" (UID: \"6edcbd62-edb0-4d2b-9dff-3cc7889c16f2\") " pod="openshift-marketplace/community-operators-54z4x" Jan 31 09:47:48 crc kubenswrapper[4830]: I0131 09:47:48.841142 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6edcbd62-edb0-4d2b-9dff-3cc7889c16f2-utilities\") pod \"community-operators-54z4x\" (UID: \"6edcbd62-edb0-4d2b-9dff-3cc7889c16f2\") " 
pod="openshift-marketplace/community-operators-54z4x" Jan 31 09:47:48 crc kubenswrapper[4830]: I0131 09:47:48.841962 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6edcbd62-edb0-4d2b-9dff-3cc7889c16f2-catalog-content\") pod \"community-operators-54z4x\" (UID: \"6edcbd62-edb0-4d2b-9dff-3cc7889c16f2\") " pod="openshift-marketplace/community-operators-54z4x" Jan 31 09:47:48 crc kubenswrapper[4830]: I0131 09:47:48.843464 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6edcbd62-edb0-4d2b-9dff-3cc7889c16f2-utilities\") pod \"community-operators-54z4x\" (UID: \"6edcbd62-edb0-4d2b-9dff-3cc7889c16f2\") " pod="openshift-marketplace/community-operators-54z4x" Jan 31 09:47:48 crc kubenswrapper[4830]: I0131 09:47:48.864907 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j8d7\" (UniqueName: \"kubernetes.io/projected/6edcbd62-edb0-4d2b-9dff-3cc7889c16f2-kube-api-access-2j8d7\") pod \"community-operators-54z4x\" (UID: \"6edcbd62-edb0-4d2b-9dff-3cc7889c16f2\") " pod="openshift-marketplace/community-operators-54z4x" Jan 31 09:47:49 crc kubenswrapper[4830]: I0131 09:47:49.052459 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-54z4x" Jan 31 09:47:49 crc kubenswrapper[4830]: W0131 09:47:49.910164 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6edcbd62_edb0_4d2b_9dff_3cc7889c16f2.slice/crio-3e08f2e963eda4b901beedb2d6cb07bca1e0ed9fbad75a3858c3a96b5e0f8762 WatchSource:0}: Error finding container 3e08f2e963eda4b901beedb2d6cb07bca1e0ed9fbad75a3858c3a96b5e0f8762: Status 404 returned error can't find the container with id 3e08f2e963eda4b901beedb2d6cb07bca1e0ed9fbad75a3858c3a96b5e0f8762 Jan 31 09:47:49 crc kubenswrapper[4830]: I0131 09:47:49.918154 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-54z4x"] Jan 31 09:47:50 crc kubenswrapper[4830]: I0131 09:47:50.663018 4830 generic.go:334] "Generic (PLEG): container finished" podID="6edcbd62-edb0-4d2b-9dff-3cc7889c16f2" containerID="dc755ae5cf4ef90fe9036649a4f4dd192353265309a6264603bc3cee36afdefa" exitCode=0 Jan 31 09:47:50 crc kubenswrapper[4830]: I0131 09:47:50.663121 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54z4x" event={"ID":"6edcbd62-edb0-4d2b-9dff-3cc7889c16f2","Type":"ContainerDied","Data":"dc755ae5cf4ef90fe9036649a4f4dd192353265309a6264603bc3cee36afdefa"} Jan 31 09:47:50 crc kubenswrapper[4830]: I0131 09:47:50.663371 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54z4x" event={"ID":"6edcbd62-edb0-4d2b-9dff-3cc7889c16f2","Type":"ContainerStarted","Data":"3e08f2e963eda4b901beedb2d6cb07bca1e0ed9fbad75a3858c3a96b5e0f8762"} Jan 31 09:47:51 crc kubenswrapper[4830]: I0131 09:47:51.682509 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54z4x" event={"ID":"6edcbd62-edb0-4d2b-9dff-3cc7889c16f2","Type":"ContainerStarted","Data":"c4ff333e4f08d7dffed48870ca9ad467c5e40a87a9f1de338159e208a37b0102"} Jan 31 09:47:53 crc kubenswrapper[4830]: I0131 09:47:53.229913 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-swr8r" Jan 31 
09:47:53 crc kubenswrapper[4830]: I0131 09:47:53.230510 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-swr8r" Jan 31 09:47:53 crc kubenswrapper[4830]: I0131 09:47:53.289529 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-swr8r" Jan 31 09:47:53 crc kubenswrapper[4830]: I0131 09:47:53.767858 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-swr8r" Jan 31 09:47:54 crc kubenswrapper[4830]: I0131 09:47:54.450295 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-swr8r"] Jan 31 09:47:54 crc kubenswrapper[4830]: I0131 09:47:54.719438 4830 generic.go:334] "Generic (PLEG): container finished" podID="6edcbd62-edb0-4d2b-9dff-3cc7889c16f2" containerID="c4ff333e4f08d7dffed48870ca9ad467c5e40a87a9f1de338159e208a37b0102" exitCode=0 Jan 31 09:47:54 crc kubenswrapper[4830]: I0131 09:47:54.719500 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54z4x" event={"ID":"6edcbd62-edb0-4d2b-9dff-3cc7889c16f2","Type":"ContainerDied","Data":"c4ff333e4f08d7dffed48870ca9ad467c5e40a87a9f1de338159e208a37b0102"} Jan 31 09:47:55 crc kubenswrapper[4830]: I0131 09:47:55.733118 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-swr8r" podUID="1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f" containerName="registry-server" containerID="cri-o://6c1bedaa60e3f5ac5548b323d4b87ca6042267cf4d65e290a9cd4fa07563de9c" gracePeriod=2 Jan 31 09:47:56 crc kubenswrapper[4830]: E0131 09:47:56.004490 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a44fb8d_f6bf_4794_a7d0_4a19edb7b02f.slice/crio-conmon-6c1bedaa60e3f5ac5548b323d4b87ca6042267cf4d65e290a9cd4fa07563de9c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a44fb8d_f6bf_4794_a7d0_4a19edb7b02f.slice/crio-6c1bedaa60e3f5ac5548b323d4b87ca6042267cf4d65e290a9cd4fa07563de9c.scope\": RecentStats: unable to find data in memory cache]" Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.478771 4830 util.go:48] "No ready sandbox for pod can be found. 
Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.496597 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f-catalog-content\") pod \"1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f\" (UID: \"1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f\") "
Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.496961 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq5rv\" (UniqueName: \"kubernetes.io/projected/1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f-kube-api-access-kq5rv\") pod \"1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f\" (UID: \"1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f\") "
Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.497160 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f-utilities\") pod \"1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f\" (UID: \"1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f\") "
Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.498015 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f-utilities" (OuterVolumeSpecName: "utilities") pod "1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f" (UID: "1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.503320 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f-kube-api-access-kq5rv" (OuterVolumeSpecName: "kube-api-access-kq5rv") pod "1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f" (UID: "1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f"). InnerVolumeSpecName "kube-api-access-kq5rv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.551339 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f" (UID: "1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.600639 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.601013 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.601028 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq5rv\" (UniqueName: \"kubernetes.io/projected/1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f-kube-api-access-kq5rv\") on node \"crc\" DevicePath \"\""
Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.751213 4830 generic.go:334] "Generic (PLEG): container finished" podID="1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f" containerID="6c1bedaa60e3f5ac5548b323d4b87ca6042267cf4d65e290a9cd4fa07563de9c" exitCode=0
Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.751259 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swr8r" event={"ID":"1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f","Type":"ContainerDied","Data":"6c1bedaa60e3f5ac5548b323d4b87ca6042267cf4d65e290a9cd4fa07563de9c"}
Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.751306 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swr8r" event={"ID":"1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f","Type":"ContainerDied","Data":"cfbaf3924d2c27a69f911e09932855d1166b246dde9d3f3f98be2ee1885a4af9"}
Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.751328 4830 scope.go:117] "RemoveContainer" containerID="6c1bedaa60e3f5ac5548b323d4b87ca6042267cf4d65e290a9cd4fa07563de9c"
Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.753158 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-swr8r"
Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.753908 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54z4x" event={"ID":"6edcbd62-edb0-4d2b-9dff-3cc7889c16f2","Type":"ContainerStarted","Data":"899f2381f4d9bdd68301c838aa29e2b9b663f4a845519d6bc7b82f34b3eaee59"}
Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.782366 4830 scope.go:117] "RemoveContainer" containerID="39df002474b6fd01850c2c55039602e20cbfc656a58c50ca472fdb3b24b9749f"
Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.792223 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-54z4x" podStartSLOduration=3.660681784 podStartE2EDuration="8.792199525s" podCreationTimestamp="2026-01-31 09:47:48 +0000 UTC" firstStartedPulling="2026-01-31 09:47:50.665045581 +0000 UTC m=+2815.158408023" lastFinishedPulling="2026-01-31 09:47:55.796563332 +0000 UTC m=+2820.289925764" observedRunningTime="2026-01-31 09:47:56.79025965 +0000 UTC m=+2821.283622092" watchObservedRunningTime="2026-01-31 09:47:56.792199525 +0000 UTC m=+2821.285561967"
Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.818519 4830 scope.go:117] "RemoveContainer" containerID="d75b7276b78fdfb4c772ea4fe539d206d9e93b85929b3d3fad2bf62262335d27"
Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.843997 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-swr8r"]
Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.877370 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-swr8r"]
Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.898576 4830 scope.go:117] "RemoveContainer" containerID="6c1bedaa60e3f5ac5548b323d4b87ca6042267cf4d65e290a9cd4fa07563de9c"
Jan 31 09:47:56 crc kubenswrapper[4830]: E0131 09:47:56.899862 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c1bedaa60e3f5ac5548b323d4b87ca6042267cf4d65e290a9cd4fa07563de9c\": container with ID starting with 6c1bedaa60e3f5ac5548b323d4b87ca6042267cf4d65e290a9cd4fa07563de9c not found: ID does not exist" containerID="6c1bedaa60e3f5ac5548b323d4b87ca6042267cf4d65e290a9cd4fa07563de9c"
Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.899904 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c1bedaa60e3f5ac5548b323d4b87ca6042267cf4d65e290a9cd4fa07563de9c"} err="failed to get container status \"6c1bedaa60e3f5ac5548b323d4b87ca6042267cf4d65e290a9cd4fa07563de9c\": rpc error: code = NotFound desc = could not find container \"6c1bedaa60e3f5ac5548b323d4b87ca6042267cf4d65e290a9cd4fa07563de9c\": container with ID starting with 6c1bedaa60e3f5ac5548b323d4b87ca6042267cf4d65e290a9cd4fa07563de9c not found: ID does not exist"
Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.899936 4830 scope.go:117] "RemoveContainer" containerID="39df002474b6fd01850c2c55039602e20cbfc656a58c50ca472fdb3b24b9749f"
Jan 31 09:47:56 crc kubenswrapper[4830]: E0131 09:47:56.902815 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39df002474b6fd01850c2c55039602e20cbfc656a58c50ca472fdb3b24b9749f\": container with ID starting with 39df002474b6fd01850c2c55039602e20cbfc656a58c50ca472fdb3b24b9749f not found: ID does not exist" containerID="39df002474b6fd01850c2c55039602e20cbfc656a58c50ca472fdb3b24b9749f"
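[Annotation] The RemoveContainer / "ContainerStatus from runtime service failed ... NotFound" pairs above are a benign race: by the time the deletor re-queries the container's status, CRI-O has already removed it, so the desired end state is reached. A sketch of the usual idempotent-cleanup pattern, treating a gRPC NotFound as success — illustrative only, not kubelet's actual code; the remove callback is a hypothetical stand-in for a CRI RemoveContainer call.

// Illustrative pattern for making container removal idempotent against
// the NotFound races shown above.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func removeContainerIdempotent(remove func(id string) error, id string) error {
	if err := remove(id); err != nil {
		// A NotFound from the runtime means the container is already gone,
		// which is exactly the state we wanted -- swallow the error.
		if status.Code(err) == codes.NotFound {
			return nil
		}
		return fmt.Errorf("removing container %q: %w", id, err)
	}
	return nil
}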
containerID="39df002474b6fd01850c2c55039602e20cbfc656a58c50ca472fdb3b24b9749f" Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.902871 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39df002474b6fd01850c2c55039602e20cbfc656a58c50ca472fdb3b24b9749f"} err="failed to get container status \"39df002474b6fd01850c2c55039602e20cbfc656a58c50ca472fdb3b24b9749f\": rpc error: code = NotFound desc = could not find container \"39df002474b6fd01850c2c55039602e20cbfc656a58c50ca472fdb3b24b9749f\": container with ID starting with 39df002474b6fd01850c2c55039602e20cbfc656a58c50ca472fdb3b24b9749f not found: ID does not exist" Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.902903 4830 scope.go:117] "RemoveContainer" containerID="d75b7276b78fdfb4c772ea4fe539d206d9e93b85929b3d3fad2bf62262335d27" Jan 31 09:47:56 crc kubenswrapper[4830]: E0131 09:47:56.908889 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d75b7276b78fdfb4c772ea4fe539d206d9e93b85929b3d3fad2bf62262335d27\": container with ID starting with d75b7276b78fdfb4c772ea4fe539d206d9e93b85929b3d3fad2bf62262335d27 not found: ID does not exist" containerID="d75b7276b78fdfb4c772ea4fe539d206d9e93b85929b3d3fad2bf62262335d27" Jan 31 09:47:56 crc kubenswrapper[4830]: I0131 09:47:56.908945 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d75b7276b78fdfb4c772ea4fe539d206d9e93b85929b3d3fad2bf62262335d27"} err="failed to get container status \"d75b7276b78fdfb4c772ea4fe539d206d9e93b85929b3d3fad2bf62262335d27\": rpc error: code = NotFound desc = could not find container \"d75b7276b78fdfb4c772ea4fe539d206d9e93b85929b3d3fad2bf62262335d27\": container with ID starting with d75b7276b78fdfb4c772ea4fe539d206d9e93b85929b3d3fad2bf62262335d27 not found: ID does not exist" Jan 31 09:47:58 crc kubenswrapper[4830]: I0131 09:47:58.264083 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f" path="/var/lib/kubelet/pods/1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f/volumes" Jan 31 09:47:59 crc kubenswrapper[4830]: I0131 09:47:59.053080 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-54z4x" Jan 31 09:47:59 crc kubenswrapper[4830]: I0131 09:47:59.053428 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-54z4x" Jan 31 09:48:00 crc kubenswrapper[4830]: I0131 09:48:00.121765 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-54z4x" podUID="6edcbd62-edb0-4d2b-9dff-3cc7889c16f2" containerName="registry-server" probeResult="failure" output=< Jan 31 09:48:00 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 09:48:00 crc kubenswrapper[4830]: > Jan 31 09:48:09 crc kubenswrapper[4830]: I0131 09:48:09.111664 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-54z4x" Jan 31 09:48:09 crc kubenswrapper[4830]: I0131 09:48:09.174185 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-54z4x" Jan 31 09:48:09 crc kubenswrapper[4830]: I0131 09:48:09.358269 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-54z4x"] Jan 31 09:48:10 crc kubenswrapper[4830]: 
Jan 31 09:48:10 crc kubenswrapper[4830]: I0131 09:48:10.923149 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-54z4x" podUID="6edcbd62-edb0-4d2b-9dff-3cc7889c16f2" containerName="registry-server" containerID="cri-o://899f2381f4d9bdd68301c838aa29e2b9b663f4a845519d6bc7b82f34b3eaee59" gracePeriod=2
Jan 31 09:48:11 crc kubenswrapper[4830]: I0131 09:48:11.505215 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-54z4x"
Jan 31 09:48:11 crc kubenswrapper[4830]: I0131 09:48:11.639839 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6edcbd62-edb0-4d2b-9dff-3cc7889c16f2-catalog-content\") pod \"6edcbd62-edb0-4d2b-9dff-3cc7889c16f2\" (UID: \"6edcbd62-edb0-4d2b-9dff-3cc7889c16f2\") "
Jan 31 09:48:11 crc kubenswrapper[4830]: I0131 09:48:11.639954 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j8d7\" (UniqueName: \"kubernetes.io/projected/6edcbd62-edb0-4d2b-9dff-3cc7889c16f2-kube-api-access-2j8d7\") pod \"6edcbd62-edb0-4d2b-9dff-3cc7889c16f2\" (UID: \"6edcbd62-edb0-4d2b-9dff-3cc7889c16f2\") "
Jan 31 09:48:11 crc kubenswrapper[4830]: I0131 09:48:11.640355 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6edcbd62-edb0-4d2b-9dff-3cc7889c16f2-utilities\") pod \"6edcbd62-edb0-4d2b-9dff-3cc7889c16f2\" (UID: \"6edcbd62-edb0-4d2b-9dff-3cc7889c16f2\") "
Jan 31 09:48:11 crc kubenswrapper[4830]: I0131 09:48:11.641317 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6edcbd62-edb0-4d2b-9dff-3cc7889c16f2-utilities" (OuterVolumeSpecName: "utilities") pod "6edcbd62-edb0-4d2b-9dff-3cc7889c16f2" (UID: "6edcbd62-edb0-4d2b-9dff-3cc7889c16f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:48:11 crc kubenswrapper[4830]: I0131 09:48:11.643213 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6edcbd62-edb0-4d2b-9dff-3cc7889c16f2-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 09:48:11 crc kubenswrapper[4830]: I0131 09:48:11.653100 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edcbd62-edb0-4d2b-9dff-3cc7889c16f2-kube-api-access-2j8d7" (OuterVolumeSpecName: "kube-api-access-2j8d7") pod "6edcbd62-edb0-4d2b-9dff-3cc7889c16f2" (UID: "6edcbd62-edb0-4d2b-9dff-3cc7889c16f2"). InnerVolumeSpecName "kube-api-access-2j8d7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:48:11 crc kubenswrapper[4830]: I0131 09:48:11.703438 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6edcbd62-edb0-4d2b-9dff-3cc7889c16f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6edcbd62-edb0-4d2b-9dff-3cc7889c16f2" (UID: "6edcbd62-edb0-4d2b-9dff-3cc7889c16f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:48:11 crc kubenswrapper[4830]: I0131 09:48:11.745226 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6edcbd62-edb0-4d2b-9dff-3cc7889c16f2-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 09:48:11 crc kubenswrapper[4830]: I0131 09:48:11.745273 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j8d7\" (UniqueName: \"kubernetes.io/projected/6edcbd62-edb0-4d2b-9dff-3cc7889c16f2-kube-api-access-2j8d7\") on node \"crc\" DevicePath \"\""
Jan 31 09:48:11 crc kubenswrapper[4830]: I0131 09:48:11.972600 4830 generic.go:334] "Generic (PLEG): container finished" podID="6edcbd62-edb0-4d2b-9dff-3cc7889c16f2" containerID="899f2381f4d9bdd68301c838aa29e2b9b663f4a845519d6bc7b82f34b3eaee59" exitCode=0
Jan 31 09:48:11 crc kubenswrapper[4830]: I0131 09:48:11.972662 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54z4x" event={"ID":"6edcbd62-edb0-4d2b-9dff-3cc7889c16f2","Type":"ContainerDied","Data":"899f2381f4d9bdd68301c838aa29e2b9b663f4a845519d6bc7b82f34b3eaee59"}
Jan 31 09:48:11 crc kubenswrapper[4830]: I0131 09:48:11.972717 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54z4x" event={"ID":"6edcbd62-edb0-4d2b-9dff-3cc7889c16f2","Type":"ContainerDied","Data":"3e08f2e963eda4b901beedb2d6cb07bca1e0ed9fbad75a3858c3a96b5e0f8762"}
Jan 31 09:48:11 crc kubenswrapper[4830]: I0131 09:48:11.972749 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-54z4x"
Jan 31 09:48:11 crc kubenswrapper[4830]: I0131 09:48:11.972759 4830 scope.go:117] "RemoveContainer" containerID="899f2381f4d9bdd68301c838aa29e2b9b663f4a845519d6bc7b82f34b3eaee59"
Jan 31 09:48:12 crc kubenswrapper[4830]: I0131 09:48:12.006008 4830 scope.go:117] "RemoveContainer" containerID="c4ff333e4f08d7dffed48870ca9ad467c5e40a87a9f1de338159e208a37b0102"
Jan 31 09:48:12 crc kubenswrapper[4830]: I0131 09:48:12.037555 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-54z4x"]
Jan 31 09:48:12 crc kubenswrapper[4830]: I0131 09:48:12.045269 4830 scope.go:117] "RemoveContainer" containerID="dc755ae5cf4ef90fe9036649a4f4dd192353265309a6264603bc3cee36afdefa"
Jan 31 09:48:12 crc kubenswrapper[4830]: I0131 09:48:12.050421 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-54z4x"]
Jan 31 09:48:12 crc kubenswrapper[4830]: I0131 09:48:12.097589 4830 scope.go:117] "RemoveContainer" containerID="899f2381f4d9bdd68301c838aa29e2b9b663f4a845519d6bc7b82f34b3eaee59"
Jan 31 09:48:12 crc kubenswrapper[4830]: E0131 09:48:12.098627 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"899f2381f4d9bdd68301c838aa29e2b9b663f4a845519d6bc7b82f34b3eaee59\": container with ID starting with 899f2381f4d9bdd68301c838aa29e2b9b663f4a845519d6bc7b82f34b3eaee59 not found: ID does not exist" containerID="899f2381f4d9bdd68301c838aa29e2b9b663f4a845519d6bc7b82f34b3eaee59"
Jan 31 09:48:12 crc kubenswrapper[4830]: I0131 09:48:12.098705 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"899f2381f4d9bdd68301c838aa29e2b9b663f4a845519d6bc7b82f34b3eaee59"} err="failed to get container status \"899f2381f4d9bdd68301c838aa29e2b9b663f4a845519d6bc7b82f34b3eaee59\": rpc error: code = NotFound desc = could not find container \"899f2381f4d9bdd68301c838aa29e2b9b663f4a845519d6bc7b82f34b3eaee59\": container with ID starting with 899f2381f4d9bdd68301c838aa29e2b9b663f4a845519d6bc7b82f34b3eaee59 not found: ID does not exist"
Jan 31 09:48:12 crc kubenswrapper[4830]: I0131 09:48:12.098770 4830 scope.go:117] "RemoveContainer" containerID="c4ff333e4f08d7dffed48870ca9ad467c5e40a87a9f1de338159e208a37b0102"
Jan 31 09:48:12 crc kubenswrapper[4830]: E0131 09:48:12.099237 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4ff333e4f08d7dffed48870ca9ad467c5e40a87a9f1de338159e208a37b0102\": container with ID starting with c4ff333e4f08d7dffed48870ca9ad467c5e40a87a9f1de338159e208a37b0102 not found: ID does not exist" containerID="c4ff333e4f08d7dffed48870ca9ad467c5e40a87a9f1de338159e208a37b0102"
Jan 31 09:48:12 crc kubenswrapper[4830]: I0131 09:48:12.099318 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4ff333e4f08d7dffed48870ca9ad467c5e40a87a9f1de338159e208a37b0102"} err="failed to get container status \"c4ff333e4f08d7dffed48870ca9ad467c5e40a87a9f1de338159e208a37b0102\": rpc error: code = NotFound desc = could not find container \"c4ff333e4f08d7dffed48870ca9ad467c5e40a87a9f1de338159e208a37b0102\": container with ID starting with c4ff333e4f08d7dffed48870ca9ad467c5e40a87a9f1de338159e208a37b0102 not found: ID does not exist"
Jan 31 09:48:12 crc kubenswrapper[4830]: I0131 09:48:12.099376 4830 scope.go:117] "RemoveContainer" containerID="dc755ae5cf4ef90fe9036649a4f4dd192353265309a6264603bc3cee36afdefa"
Jan 31 09:48:12 crc kubenswrapper[4830]: E0131 09:48:12.099924 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc755ae5cf4ef90fe9036649a4f4dd192353265309a6264603bc3cee36afdefa\": container with ID starting with dc755ae5cf4ef90fe9036649a4f4dd192353265309a6264603bc3cee36afdefa not found: ID does not exist" containerID="dc755ae5cf4ef90fe9036649a4f4dd192353265309a6264603bc3cee36afdefa"
Jan 31 09:48:12 crc kubenswrapper[4830]: I0131 09:48:12.099960 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc755ae5cf4ef90fe9036649a4f4dd192353265309a6264603bc3cee36afdefa"} err="failed to get container status \"dc755ae5cf4ef90fe9036649a4f4dd192353265309a6264603bc3cee36afdefa\": rpc error: code = NotFound desc = could not find container \"dc755ae5cf4ef90fe9036649a4f4dd192353265309a6264603bc3cee36afdefa\": container with ID starting with dc755ae5cf4ef90fe9036649a4f4dd192353265309a6264603bc3cee36afdefa not found: ID does not exist"
Jan 31 09:48:12 crc kubenswrapper[4830]: I0131 09:48:12.269951 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edcbd62-edb0-4d2b-9dff-3cc7889c16f2" path="/var/lib/kubelet/pods/6edcbd62-edb0-4d2b-9dff-3cc7889c16f2/volumes"
Jan 31 09:48:14 crc kubenswrapper[4830]: I0131 09:48:14.353304 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 09:48:14 crc kubenswrapper[4830]: I0131 09:48:14.353955 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 09:48:44 crc kubenswrapper[4830]: I0131 09:48:44.353159 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 09:48:44 crc kubenswrapper[4830]: I0131 09:48:44.355024 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 09:48:44 crc kubenswrapper[4830]: I0131 09:48:44.355096 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd"
Jan 31 09:48:44 crc kubenswrapper[4830]: I0131 09:48:44.356245 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9"} pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 31 09:48:44 crc kubenswrapper[4830]: I0131 09:48:44.356335 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" containerID="cri-o://4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9" gracePeriod=600
Jan 31 09:48:44 crc kubenswrapper[4830]: E0131 09:48:44.487893 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:48:45 crc kubenswrapper[4830]: I0131 09:48:45.419109 4830 generic.go:334] "Generic (PLEG): container finished" podID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerID="4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9" exitCode=0
Jan 31 09:48:45 crc kubenswrapper[4830]: I0131 09:48:45.419207 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerDied","Data":"4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9"}
Jan 31 09:48:45 crc kubenswrapper[4830]: I0131 09:48:45.420183 4830 scope.go:117] "RemoveContainer" containerID="a20ea2322cd4062ecdd9c286d63df058ebc8744e0a83dc5a6a03d87d2b70305c"
Jan 31 09:48:45 crc kubenswrapper[4830]: I0131 09:48:45.421121 4830 scope.go:117] "RemoveContainer" containerID="4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9"
Jan 31 09:48:45 crc kubenswrapper[4830]: E0131 09:48:45.421455 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:48:59 crc kubenswrapper[4830]: I0131 09:48:59.252283 4830 scope.go:117] "RemoveContainer" containerID="4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9"
Jan 31 09:48:59 crc kubenswrapper[4830]: E0131 09:48:59.254403 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:49:11 crc kubenswrapper[4830]: I0131 09:49:11.252050 4830 scope.go:117] "RemoveContainer" containerID="4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9"
Jan 31 09:49:11 crc kubenswrapper[4830]: E0131 09:49:11.253197 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:49:24 crc kubenswrapper[4830]: I0131 09:49:24.252370 4830 scope.go:117] "RemoveContainer" containerID="4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9"
Jan 31 09:49:24 crc kubenswrapper[4830]: E0131 09:49:24.253663 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:49:28 crc kubenswrapper[4830]: I0131 09:49:28.948844 4830 generic.go:334] "Generic (PLEG): container finished" podID="8081b2b1-7847-4223-a583-0f0251f2ef52" containerID="d859abb5c2beff8cfe355879dd692d32998e93f4a7b65c977e0ac08363dff7df" exitCode=0
Jan 31 09:49:28 crc kubenswrapper[4830]: I0131 09:49:28.949046 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" event={"ID":"8081b2b1-7847-4223-a583-0f0251f2ef52","Type":"ContainerDied","Data":"d859abb5c2beff8cfe355879dd692d32998e93f4a7b65c977e0ac08363dff7df"}
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.509876 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k"
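[Annotation] The repeating "back-off 5m0s restarting failed container" errors above show kubelet's restart back-off at its cap. The schedule is a doubling back-off; the sketch below assumes the usual ~10s initial delay and 5m ceiling, which are consistent with the retry spacing in this log but are not stated in it.

// Sketch of the doubling restart back-off behind the "back-off 5m0s"
// messages above. The 10s start and 5m cap are assumptions, not log data.
package main

import (
	"fmt"
	"time"
)

func main() {
	backoff := 10 * time.Second
	const maxBackoff = 5 * time.Minute
	for i := 1; i <= 7; i++ {
		fmt.Printf("restart %d: wait %v\n", i, backoff)
		// Double after each failed restart, clamped at the ceiling.
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}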
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.609236 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-combined-ca-bundle\") pod \"8081b2b1-7847-4223-a583-0f0251f2ef52\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") "
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.609458 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-extra-config-0\") pod \"8081b2b1-7847-4223-a583-0f0251f2ef52\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") "
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.609526 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-migration-ssh-key-1\") pod \"8081b2b1-7847-4223-a583-0f0251f2ef52\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") "
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.609710 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-cell1-compute-config-1\") pod \"8081b2b1-7847-4223-a583-0f0251f2ef52\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") "
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.609839 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-migration-ssh-key-0\") pod \"8081b2b1-7847-4223-a583-0f0251f2ef52\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") "
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.609867 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-cell1-compute-config-0\") pod \"8081b2b1-7847-4223-a583-0f0251f2ef52\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") "
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.609886 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd267\" (UniqueName: \"kubernetes.io/projected/8081b2b1-7847-4223-a583-0f0251f2ef52-kube-api-access-xd267\") pod \"8081b2b1-7847-4223-a583-0f0251f2ef52\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") "
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.609964 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-ssh-key-openstack-edpm-ipam\") pod \"8081b2b1-7847-4223-a583-0f0251f2ef52\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") "
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.609984 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-inventory\") pod \"8081b2b1-7847-4223-a583-0f0251f2ef52\" (UID: \"8081b2b1-7847-4223-a583-0f0251f2ef52\") "
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.634499 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "8081b2b1-7847-4223-a583-0f0251f2ef52" (UID: "8081b2b1-7847-4223-a583-0f0251f2ef52"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.635747 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8081b2b1-7847-4223-a583-0f0251f2ef52-kube-api-access-xd267" (OuterVolumeSpecName: "kube-api-access-xd267") pod "8081b2b1-7847-4223-a583-0f0251f2ef52" (UID: "8081b2b1-7847-4223-a583-0f0251f2ef52"). InnerVolumeSpecName "kube-api-access-xd267". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.659442 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-inventory" (OuterVolumeSpecName: "inventory") pod "8081b2b1-7847-4223-a583-0f0251f2ef52" (UID: "8081b2b1-7847-4223-a583-0f0251f2ef52"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.661200 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8081b2b1-7847-4223-a583-0f0251f2ef52" (UID: "8081b2b1-7847-4223-a583-0f0251f2ef52"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.669922 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "8081b2b1-7847-4223-a583-0f0251f2ef52" (UID: "8081b2b1-7847-4223-a583-0f0251f2ef52"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.670650 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "8081b2b1-7847-4223-a583-0f0251f2ef52" (UID: "8081b2b1-7847-4223-a583-0f0251f2ef52"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.685203 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "8081b2b1-7847-4223-a583-0f0251f2ef52" (UID: "8081b2b1-7847-4223-a583-0f0251f2ef52"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.689946 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "8081b2b1-7847-4223-a583-0f0251f2ef52" (UID: "8081b2b1-7847-4223-a583-0f0251f2ef52"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.691977 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "8081b2b1-7847-4223-a583-0f0251f2ef52" (UID: "8081b2b1-7847-4223-a583-0f0251f2ef52"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.715024 4830 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.715096 4830 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.715109 4830 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.715123 4830 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.715138 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd267\" (UniqueName: \"kubernetes.io/projected/8081b2b1-7847-4223-a583-0f0251f2ef52-kube-api-access-xd267\") on node \"crc\" DevicePath \"\""
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.715150 4830 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-inventory\") on node \"crc\" DevicePath \"\""
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.715161 4830 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.715171 4830 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.715183 4830 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8081b2b1-7847-4223-a583-0f0251f2ef52-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.971694 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k" event={"ID":"8081b2b1-7847-4223-a583-0f0251f2ef52","Type":"ContainerDied","Data":"9700e7b750b7d98f39a922d3508bf8b026332dc562c3f4036f6c63e8c0a4311e"}
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.971760 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpd8k"
Jan 31 09:49:30 crc kubenswrapper[4830]: I0131 09:49:30.971767 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9700e7b750b7d98f39a922d3508bf8b026332dc562c3f4036f6c63e8c0a4311e"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.097540 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8"]
Jan 31 09:49:31 crc kubenswrapper[4830]: E0131 09:49:31.098195 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8081b2b1-7847-4223-a583-0f0251f2ef52" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.098222 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8081b2b1-7847-4223-a583-0f0251f2ef52" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Jan 31 09:49:31 crc kubenswrapper[4830]: E0131 09:49:31.098241 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f" containerName="extract-content"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.098250 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f" containerName="extract-content"
Jan 31 09:49:31 crc kubenswrapper[4830]: E0131 09:49:31.098265 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6edcbd62-edb0-4d2b-9dff-3cc7889c16f2" containerName="registry-server"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.098273 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6edcbd62-edb0-4d2b-9dff-3cc7889c16f2" containerName="registry-server"
Jan 31 09:49:31 crc kubenswrapper[4830]: E0131 09:49:31.098297 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f" containerName="registry-server"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.098305 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f" containerName="registry-server"
Jan 31 09:49:31 crc kubenswrapper[4830]: E0131 09:49:31.098332 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f" containerName="extract-utilities"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.098340 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f" containerName="extract-utilities"
Jan 31 09:49:31 crc kubenswrapper[4830]: E0131 09:49:31.098365 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6edcbd62-edb0-4d2b-9dff-3cc7889c16f2" containerName="extract-content"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.098371 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6edcbd62-edb0-4d2b-9dff-3cc7889c16f2" containerName="extract-content"
Jan 31 09:49:31 crc kubenswrapper[4830]: E0131 09:49:31.098386 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6edcbd62-edb0-4d2b-9dff-3cc7889c16f2" containerName="extract-utilities"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.098392 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6edcbd62-edb0-4d2b-9dff-3cc7889c16f2" containerName="extract-utilities"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.098602 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="6edcbd62-edb0-4d2b-9dff-3cc7889c16f2" containerName="registry-server"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.098628 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8081b2b1-7847-4223-a583-0f0251f2ef52" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.098637 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a44fb8d-f6bf-4794-a7d0-4a19edb7b02f" containerName="registry-server"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.099539 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.102104 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.102350 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.102861 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.103057 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.109684 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vd24j"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.123175 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8"]
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.229038 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.229466 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.229543 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.229585 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.229858 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.229901 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqmpn\" (UniqueName: \"kubernetes.io/projected/501efae7-9326-4a6f-940a-32dc593da610-kube-api-access-kqmpn\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.230121 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.333237 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.333325 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqmpn\" (UniqueName: \"kubernetes.io/projected/501efae7-9326-4a6f-940a-32dc593da610-kube-api-access-kqmpn\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.333510 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.333711 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.333789 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.333846 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.333878 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.338778 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.339039 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.340466 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.340859 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.341241 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8"
Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.341527 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8"
"MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8" Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.350976 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqmpn\" (UniqueName: \"kubernetes.io/projected/501efae7-9326-4a6f-940a-32dc593da610-kube-api-access-kqmpn\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8" Jan 31 09:49:31 crc kubenswrapper[4830]: I0131 09:49:31.434533 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8" Jan 31 09:49:32 crc kubenswrapper[4830]: I0131 09:49:32.224816 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8"] Jan 31 09:49:32 crc kubenswrapper[4830]: I0131 09:49:32.225130 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 09:49:33 crc kubenswrapper[4830]: I0131 09:49:33.043887 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8" event={"ID":"501efae7-9326-4a6f-940a-32dc593da610","Type":"ContainerStarted","Data":"c7355de61c68b0dc26efef8dc18fdf816f3944cc6fb880a1ba52733af2abd011"} Jan 31 09:49:36 crc kubenswrapper[4830]: I0131 09:49:36.126470 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8" event={"ID":"501efae7-9326-4a6f-940a-32dc593da610","Type":"ContainerStarted","Data":"8ef6215a6f8ad1dedb47514f5a727ed338fd21bb94086d1a9dd2925ea6e01108"} Jan 31 09:49:36 crc kubenswrapper[4830]: I0131 09:49:36.164410 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8" podStartSLOduration=2.46396485 podStartE2EDuration="5.164382476s" podCreationTimestamp="2026-01-31 09:49:31 +0000 UTC" firstStartedPulling="2026-01-31 09:49:32.224837007 +0000 UTC m=+2916.718199449" lastFinishedPulling="2026-01-31 09:49:34.925254633 +0000 UTC m=+2919.418617075" observedRunningTime="2026-01-31 09:49:36.148645918 +0000 UTC m=+2920.642008360" watchObservedRunningTime="2026-01-31 09:49:36.164382476 +0000 UTC m=+2920.657744918" Jan 31 09:49:39 crc kubenswrapper[4830]: I0131 09:49:39.255132 4830 scope.go:117] "RemoveContainer" containerID="4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9" Jan 31 09:49:39 crc kubenswrapper[4830]: E0131 09:49:39.256208 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:49:50 crc kubenswrapper[4830]: I0131 09:49:50.252525 4830 scope.go:117] "RemoveContainer" containerID="4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9" Jan 31 09:49:50 crc 
Jan 31 09:49:50 crc kubenswrapper[4830]: E0131 09:49:50.253441 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:50:04 crc kubenswrapper[4830]: I0131 09:50:04.252780 4830 scope.go:117] "RemoveContainer" containerID="4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9"
Jan 31 09:50:04 crc kubenswrapper[4830]: E0131 09:50:04.253781 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:50:18 crc kubenswrapper[4830]: I0131 09:50:18.252466 4830 scope.go:117] "RemoveContainer" containerID="4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9"
Jan 31 09:50:18 crc kubenswrapper[4830]: E0131 09:50:18.253568 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:50:29 crc kubenswrapper[4830]: I0131 09:50:29.252905 4830 scope.go:117] "RemoveContainer" containerID="4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9"
Jan 31 09:50:29 crc kubenswrapper[4830]: E0131 09:50:29.254248 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:50:42 crc kubenswrapper[4830]: I0131 09:50:42.252361 4830 scope.go:117] "RemoveContainer" containerID="4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9"
Jan 31 09:50:42 crc kubenswrapper[4830]: E0131 09:50:42.253293 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:50:56 crc kubenswrapper[4830]: I0131 09:50:56.265274 4830 scope.go:117] "RemoveContainer" containerID="4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9"
Jan 31 09:50:56 crc kubenswrapper[4830]: E0131 09:50:56.266295 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:51:09 crc kubenswrapper[4830]: I0131 09:51:09.251252 4830 scope.go:117] "RemoveContainer" containerID="4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9"
Jan 31 09:51:09 crc kubenswrapper[4830]: E0131 09:51:09.253301 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:51:22 crc kubenswrapper[4830]: I0131 09:51:22.257881 4830 scope.go:117] "RemoveContainer" containerID="4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9"
Jan 31 09:51:22 crc kubenswrapper[4830]: E0131 09:51:22.259392 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:51:37 crc kubenswrapper[4830]: I0131 09:51:37.251879 4830 scope.go:117] "RemoveContainer" containerID="4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9"
Jan 31 09:51:37 crc kubenswrapper[4830]: E0131 09:51:37.252877 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:51:48 crc kubenswrapper[4830]: I0131 09:51:48.251709 4830 scope.go:117] "RemoveContainer" containerID="4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9"
Jan 31 09:51:48 crc kubenswrapper[4830]: E0131 09:51:48.252894 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 09:52:00 crc kubenswrapper[4830]: I0131 09:52:00.834745 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8" event={"ID":"501efae7-9326-4a6f-940a-32dc593da610","Type":"ContainerDied","Data":"8ef6215a6f8ad1dedb47514f5a727ed338fd21bb94086d1a9dd2925ea6e01108"}
Jan 31 09:52:00 crc kubenswrapper[4830]: I0131 09:52:00.834675 4830 generic.go:334] "Generic (PLEG): container finished" podID="501efae7-9326-4a6f-940a-32dc593da610" containerID="8ef6215a6f8ad1dedb47514f5a727ed338fd21bb94086d1a9dd2925ea6e01108"
exitCode=0 Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.251573 4830 scope.go:117] "RemoveContainer" containerID="4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9" Jan 31 09:52:02 crc kubenswrapper[4830]: E0131 09:52:02.252450 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.364515 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8" Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.551234 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-telemetry-combined-ca-bundle\") pod \"501efae7-9326-4a6f-940a-32dc593da610\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.551304 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-ceilometer-compute-config-data-0\") pod \"501efae7-9326-4a6f-940a-32dc593da610\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.551444 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-inventory\") pod \"501efae7-9326-4a6f-940a-32dc593da610\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.551468 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqmpn\" (UniqueName: \"kubernetes.io/projected/501efae7-9326-4a6f-940a-32dc593da610-kube-api-access-kqmpn\") pod \"501efae7-9326-4a6f-940a-32dc593da610\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.551484 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-ceilometer-compute-config-data-2\") pod \"501efae7-9326-4a6f-940a-32dc593da610\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.551571 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-ceilometer-compute-config-data-1\") pod \"501efae7-9326-4a6f-940a-32dc593da610\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.551638 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-ssh-key-openstack-edpm-ipam\") pod \"501efae7-9326-4a6f-940a-32dc593da610\" (UID: \"501efae7-9326-4a6f-940a-32dc593da610\") " Jan 31 09:52:02 crc 
kubenswrapper[4830]: I0131 09:52:02.558885 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/501efae7-9326-4a6f-940a-32dc593da610-kube-api-access-kqmpn" (OuterVolumeSpecName: "kube-api-access-kqmpn") pod "501efae7-9326-4a6f-940a-32dc593da610" (UID: "501efae7-9326-4a6f-940a-32dc593da610"). InnerVolumeSpecName "kube-api-access-kqmpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.559232 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "501efae7-9326-4a6f-940a-32dc593da610" (UID: "501efae7-9326-4a6f-940a-32dc593da610"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.589647 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "501efae7-9326-4a6f-940a-32dc593da610" (UID: "501efae7-9326-4a6f-940a-32dc593da610"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.596461 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "501efae7-9326-4a6f-940a-32dc593da610" (UID: "501efae7-9326-4a6f-940a-32dc593da610"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.600762 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-inventory" (OuterVolumeSpecName: "inventory") pod "501efae7-9326-4a6f-940a-32dc593da610" (UID: "501efae7-9326-4a6f-940a-32dc593da610"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.602360 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "501efae7-9326-4a6f-940a-32dc593da610" (UID: "501efae7-9326-4a6f-940a-32dc593da610"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.605178 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "501efae7-9326-4a6f-940a-32dc593da610" (UID: "501efae7-9326-4a6f-940a-32dc593da610"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
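Every secret and projected volume of the finished telemetry pod walks through the same three stages in the entries above: "operationExecutor.UnmountVolume started" (reconciler_common.go:159), "UnmountVolume.TearDown succeeded" (operation_generator.go:803), and finally "Volume detached ... DevicePath \"\"" (reconciler_common.go:293) once the kubelet's record of what is actually mounted catches up. A toy Go sketch of that ordering; the stage names mirror the log messages and are not kubelet identifiers:

    package main

    import "fmt"

    // volumeStage is illustrative only; the real flow lives in the kubelet
    // volumemanager reconciler.
    type volumeStage int

    const (
        unmountStarted    volumeStage = iota // "operationExecutor.UnmountVolume started"
        tearDownSucceeded                    // "UnmountVolume.TearDown succeeded"
        detached                             // "Volume detached ... DevicePath \"\""
    )

    func (s volumeStage) String() string {
        return [...]string{"unmountStarted", "tearDownSucceeded", "detached"}[s]
    }

    func main() {
        // A volume only reports "detached" after both earlier stages succeed,
        // which is why each volume above logs all three messages in order.
        for s := unmountStarted; s <= detached; s++ {
            fmt.Println(s)
        }
    }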
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.656352 4830 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.657972 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqmpn\" (UniqueName: \"kubernetes.io/projected/501efae7-9326-4a6f-940a-32dc593da610-kube-api-access-kqmpn\") on node \"crc\" DevicePath \"\"" Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.658012 4830 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.658027 4830 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.658043 4830 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.658059 4830 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.658073 4830 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/501efae7-9326-4a6f-940a-32dc593da610-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.866650 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8" event={"ID":"501efae7-9326-4a6f-940a-32dc593da610","Type":"ContainerDied","Data":"c7355de61c68b0dc26efef8dc18fdf816f3944cc6fb880a1ba52733af2abd011"} Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.867169 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7355de61c68b0dc26efef8dc18fdf816f3944cc6fb880a1ba52733af2abd011" Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.867196 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8" Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.991814 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d"] Jan 31 09:52:02 crc kubenswrapper[4830]: E0131 09:52:02.992824 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="501efae7-9326-4a6f-940a-32dc593da610" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.992947 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="501efae7-9326-4a6f-940a-32dc593da610" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.993822 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="501efae7-9326-4a6f-940a-32dc593da610" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 31 09:52:02 crc kubenswrapper[4830]: I0131 09:52:02.995070 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.006369 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vd24j" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.006623 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.006871 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.006391 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d"] Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.007131 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.008996 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.173390 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.173834 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.173962 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm48c\" (UniqueName: 
\"kubernetes.io/projected/36db0fa7-717c-4785-942e-8c98a60f2350-kube-api-access-sm48c\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.174005 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.174044 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.174189 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.174239 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.276499 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.277615 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm48c\" (UniqueName: \"kubernetes.io/projected/36db0fa7-717c-4785-942e-8c98a60f2350-kube-api-access-sm48c\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.277776 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.277904 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.278119 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.278287 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.278422 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.283581 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.283789 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.284926 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d\" (UID: 
\"36db0fa7-717c-4785-942e-8c98a60f2350\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.294326 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.294754 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.296089 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.298318 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm48c\" (UniqueName: \"kubernetes.io/projected/36db0fa7-717c-4785-942e-8c98a60f2350-kube-api-access-sm48c\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.328055 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" Jan 31 09:52:03 crc kubenswrapper[4830]: I0131 09:52:03.922798 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d"] Jan 31 09:52:04 crc kubenswrapper[4830]: I0131 09:52:04.890485 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" event={"ID":"36db0fa7-717c-4785-942e-8c98a60f2350","Type":"ContainerStarted","Data":"0b4be155b3bf4f3ab34050d288840e912099b11174c50770938eecb07bdd3dc3"} Jan 31 09:52:04 crc kubenswrapper[4830]: I0131 09:52:04.890977 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" event={"ID":"36db0fa7-717c-4785-942e-8c98a60f2350","Type":"ContainerStarted","Data":"544778e4954edb4ef03c501ca8ebb659205d0de421bc568a01fbbe1143b85337"} Jan 31 09:52:04 crc kubenswrapper[4830]: I0131 09:52:04.919305 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" podStartSLOduration=2.282004856 podStartE2EDuration="2.919279682s" podCreationTimestamp="2026-01-31 09:52:02 +0000 UTC" firstStartedPulling="2026-01-31 09:52:03.927216258 +0000 UTC m=+3068.420578700" lastFinishedPulling="2026-01-31 09:52:04.564491084 +0000 UTC m=+3069.057853526" observedRunningTime="2026-01-31 09:52:04.909207475 +0000 UTC m=+3069.402569917" watchObservedRunningTime="2026-01-31 09:52:04.919279682 +0000 UTC m=+3069.412642124" Jan 31 09:52:15 crc kubenswrapper[4830]: I0131 09:52:15.252933 4830 scope.go:117] "RemoveContainer" containerID="4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9" Jan 31 09:52:15 crc kubenswrapper[4830]: E0131 09:52:15.254680 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:52:28 crc kubenswrapper[4830]: I0131 09:52:28.252148 4830 scope.go:117] "RemoveContainer" containerID="4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9" Jan 31 09:52:28 crc kubenswrapper[4830]: E0131 09:52:28.252918 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:52:43 crc kubenswrapper[4830]: I0131 09:52:43.252196 4830 scope.go:117] "RemoveContainer" containerID="4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9" Jan 31 09:52:43 crc kubenswrapper[4830]: E0131 09:52:43.253044 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:52:54 crc kubenswrapper[4830]: I0131 09:52:54.251267 4830 scope.go:117] "RemoveContainer" containerID="4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9" Jan 31 09:52:54 crc kubenswrapper[4830]: E0131 09:52:54.252068 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:53:08 crc kubenswrapper[4830]: I0131 09:53:08.251353 4830 scope.go:117] "RemoveContainer" containerID="4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9" Jan 31 09:53:08 crc kubenswrapper[4830]: E0131 09:53:08.252426 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:53:19 crc kubenswrapper[4830]: I0131 09:53:19.251901 4830 scope.go:117] "RemoveContainer" containerID="4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9" Jan 31 09:53:19 crc kubenswrapper[4830]: E0131 09:53:19.254205 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:53:32 crc kubenswrapper[4830]: I0131 09:53:32.251949 4830 scope.go:117] "RemoveContainer" containerID="4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9" Jan 31 09:53:32 crc kubenswrapper[4830]: E0131 09:53:32.252854 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 09:53:47 crc kubenswrapper[4830]: I0131 09:53:47.251886 4830 scope.go:117] "RemoveContainer" containerID="4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9" Jan 31 09:53:48 crc kubenswrapper[4830]: I0131 09:53:48.049330 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerStarted","Data":"22b789d2ef559ce66600680e06b87ef5f548352affe5608d41b78430df090d48"} Jan 31 09:54:04 crc kubenswrapper[4830]: I0131 09:54:04.227797 4830 generic.go:334] "Generic (PLEG): container finished" 
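The long run of RemoveContainer / "Error syncing pod, skipping" pairs for machine-config-daemon-gt7kd resolves here: every attempt from 09:49:39 through 09:53:32 lands inside the container's "back-off 5m0s" window and is rejected, the 09:53:47 attempt is the first one past it, and the replacement container (22b789d2...) starts a second later. The 5-minute cap is confirmed by the messages themselves; that the kubelet reaches it by doubling from a 10-second base is an assumption about defaults rather than something this log shows. A sketch under those assumptions:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Capped exponential backoff: the 5m cap comes from the "back-off 5m0s"
        // messages above; the 10s base and doubling factor are assumed defaults.
        const (
            base       = 10 * time.Second
            maxBackoff = 5 * time.Minute
        )
        d := base
        for i := 1; ; i++ {
            fmt.Printf("restart %d: wait %v\n", i, d)
            if d >= maxBackoff {
                break // every later restart waits the full 5m0s
            }
            d *= 2
            if d > maxBackoff {
                d = maxBackoff
            }
        }
    }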
podID="36db0fa7-717c-4785-942e-8c98a60f2350" containerID="0b4be155b3bf4f3ab34050d288840e912099b11174c50770938eecb07bdd3dc3" exitCode=0 Jan 31 09:54:04 crc kubenswrapper[4830]: I0131 09:54:04.227921 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" event={"ID":"36db0fa7-717c-4785-942e-8c98a60f2350","Type":"ContainerDied","Data":"0b4be155b3bf4f3ab34050d288840e912099b11174c50770938eecb07bdd3dc3"} Jan 31 09:54:05 crc kubenswrapper[4830]: I0131 09:54:05.737756 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" Jan 31 09:54:05 crc kubenswrapper[4830]: I0131 09:54:05.854429 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-ceilometer-ipmi-config-data-0\") pod \"36db0fa7-717c-4785-942e-8c98a60f2350\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " Jan 31 09:54:05 crc kubenswrapper[4830]: I0131 09:54:05.854471 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm48c\" (UniqueName: \"kubernetes.io/projected/36db0fa7-717c-4785-942e-8c98a60f2350-kube-api-access-sm48c\") pod \"36db0fa7-717c-4785-942e-8c98a60f2350\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " Jan 31 09:54:05 crc kubenswrapper[4830]: I0131 09:54:05.854535 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-ceilometer-ipmi-config-data-2\") pod \"36db0fa7-717c-4785-942e-8c98a60f2350\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " Jan 31 09:54:05 crc kubenswrapper[4830]: I0131 09:54:05.854565 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-ssh-key-openstack-edpm-ipam\") pod \"36db0fa7-717c-4785-942e-8c98a60f2350\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " Jan 31 09:54:05 crc kubenswrapper[4830]: I0131 09:54:05.854825 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-telemetry-power-monitoring-combined-ca-bundle\") pod \"36db0fa7-717c-4785-942e-8c98a60f2350\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " Jan 31 09:54:05 crc kubenswrapper[4830]: I0131 09:54:05.854883 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-ceilometer-ipmi-config-data-1\") pod \"36db0fa7-717c-4785-942e-8c98a60f2350\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " Jan 31 09:54:05 crc kubenswrapper[4830]: I0131 09:54:05.855012 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-inventory\") pod \"36db0fa7-717c-4785-942e-8c98a60f2350\" (UID: \"36db0fa7-717c-4785-942e-8c98a60f2350\") " Jan 31 09:54:05 crc kubenswrapper[4830]: I0131 09:54:05.878240 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "36db0fa7-717c-4785-942e-8c98a60f2350" (UID: "36db0fa7-717c-4785-942e-8c98a60f2350"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:54:05 crc kubenswrapper[4830]: I0131 09:54:05.878881 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36db0fa7-717c-4785-942e-8c98a60f2350-kube-api-access-sm48c" (OuterVolumeSpecName: "kube-api-access-sm48c") pod "36db0fa7-717c-4785-942e-8c98a60f2350" (UID: "36db0fa7-717c-4785-942e-8c98a60f2350"). InnerVolumeSpecName "kube-api-access-sm48c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:54:05 crc kubenswrapper[4830]: I0131 09:54:05.894537 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "36db0fa7-717c-4785-942e-8c98a60f2350" (UID: "36db0fa7-717c-4785-942e-8c98a60f2350"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:54:05 crc kubenswrapper[4830]: I0131 09:54:05.898088 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-inventory" (OuterVolumeSpecName: "inventory") pod "36db0fa7-717c-4785-942e-8c98a60f2350" (UID: "36db0fa7-717c-4785-942e-8c98a60f2350"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:54:05 crc kubenswrapper[4830]: I0131 09:54:05.898583 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "36db0fa7-717c-4785-942e-8c98a60f2350" (UID: "36db0fa7-717c-4785-942e-8c98a60f2350"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:54:05 crc kubenswrapper[4830]: I0131 09:54:05.902928 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "36db0fa7-717c-4785-942e-8c98a60f2350" (UID: "36db0fa7-717c-4785-942e-8c98a60f2350"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:54:05 crc kubenswrapper[4830]: I0131 09:54:05.922291 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "36db0fa7-717c-4785-942e-8c98a60f2350" (UID: "36db0fa7-717c-4785-942e-8c98a60f2350"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:54:05 crc kubenswrapper[4830]: I0131 09:54:05.959166 4830 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 09:54:05 crc kubenswrapper[4830]: I0131 09:54:05.959447 4830 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 31 09:54:05 crc kubenswrapper[4830]: I0131 09:54:05.959551 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm48c\" (UniqueName: \"kubernetes.io/projected/36db0fa7-717c-4785-942e-8c98a60f2350-kube-api-access-sm48c\") on node \"crc\" DevicePath \"\"" Jan 31 09:54:05 crc kubenswrapper[4830]: I0131 09:54:05.959612 4830 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 31 09:54:05 crc kubenswrapper[4830]: I0131 09:54:05.959674 4830 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 09:54:05 crc kubenswrapper[4830]: I0131 09:54:05.959763 4830 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:54:05 crc kubenswrapper[4830]: I0131 09:54:05.959839 4830 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/36db0fa7-717c-4785-942e-8c98a60f2350-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.269013 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.270444 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d" event={"ID":"36db0fa7-717c-4785-942e-8c98a60f2350","Type":"ContainerDied","Data":"544778e4954edb4ef03c501ca8ebb659205d0de421bc568a01fbbe1143b85337"} Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.270531 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="544778e4954edb4ef03c501ca8ebb659205d0de421bc568a01fbbe1143b85337" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.377152 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd"] Jan 31 09:54:06 crc kubenswrapper[4830]: E0131 09:54:06.379708 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36db0fa7-717c-4785-942e-8c98a60f2350" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.379763 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="36db0fa7-717c-4785-942e-8c98a60f2350" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.380092 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="36db0fa7-717c-4785-942e-8c98a60f2350" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.381701 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.384302 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vd24j" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.384366 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.384615 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.384768 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.384984 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.391049 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd"] Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.475096 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4edbd94d-6175-4ec1-831f-d68d8e272bd9-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5c2kd\" (UID: \"4edbd94d-6175-4ec1-831f-d68d8e272bd9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.475349 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/4edbd94d-6175-4ec1-831f-d68d8e272bd9-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5c2kd\" (UID: \"4edbd94d-6175-4ec1-831f-d68d8e272bd9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.475418 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4edbd94d-6175-4ec1-831f-d68d8e272bd9-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5c2kd\" (UID: \"4edbd94d-6175-4ec1-831f-d68d8e272bd9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.475451 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4edbd94d-6175-4ec1-831f-d68d8e272bd9-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5c2kd\" (UID: \"4edbd94d-6175-4ec1-831f-d68d8e272bd9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.475597 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsw2v\" (UniqueName: \"kubernetes.io/projected/4edbd94d-6175-4ec1-831f-d68d8e272bd9-kube-api-access-vsw2v\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5c2kd\" (UID: \"4edbd94d-6175-4ec1-831f-d68d8e272bd9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.578706 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4edbd94d-6175-4ec1-831f-d68d8e272bd9-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5c2kd\" (UID: \"4edbd94d-6175-4ec1-831f-d68d8e272bd9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.578802 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4edbd94d-6175-4ec1-831f-d68d8e272bd9-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5c2kd\" (UID: \"4edbd94d-6175-4ec1-831f-d68d8e272bd9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.578871 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsw2v\" (UniqueName: \"kubernetes.io/projected/4edbd94d-6175-4ec1-831f-d68d8e272bd9-kube-api-access-vsw2v\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5c2kd\" (UID: \"4edbd94d-6175-4ec1-831f-d68d8e272bd9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.578986 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4edbd94d-6175-4ec1-831f-d68d8e272bd9-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5c2kd\" (UID: \"4edbd94d-6175-4ec1-831f-d68d8e272bd9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.579154 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4edbd94d-6175-4ec1-831f-d68d8e272bd9-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5c2kd\" (UID: \"4edbd94d-6175-4ec1-831f-d68d8e272bd9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.584134 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4edbd94d-6175-4ec1-831f-d68d8e272bd9-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5c2kd\" (UID: \"4edbd94d-6175-4ec1-831f-d68d8e272bd9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.584446 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4edbd94d-6175-4ec1-831f-d68d8e272bd9-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5c2kd\" (UID: \"4edbd94d-6175-4ec1-831f-d68d8e272bd9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.585557 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4edbd94d-6175-4ec1-831f-d68d8e272bd9-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5c2kd\" (UID: \"4edbd94d-6175-4ec1-831f-d68d8e272bd9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.585583 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4edbd94d-6175-4ec1-831f-d68d8e272bd9-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5c2kd\" (UID: \"4edbd94d-6175-4ec1-831f-d68d8e272bd9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.599549 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsw2v\" (UniqueName: \"kubernetes.io/projected/4edbd94d-6175-4ec1-831f-d68d8e272bd9-kube-api-access-vsw2v\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5c2kd\" (UID: \"4edbd94d-6175-4ec1-831f-d68d8e272bd9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd" Jan 31 09:54:06 crc kubenswrapper[4830]: I0131 09:54:06.703226 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd" Jan 31 09:54:07 crc kubenswrapper[4830]: I0131 09:54:07.349899 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd"] Jan 31 09:54:08 crc kubenswrapper[4830]: I0131 09:54:08.283109 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd" event={"ID":"4edbd94d-6175-4ec1-831f-d68d8e272bd9","Type":"ContainerStarted","Data":"71d383a4e50863ccb9fc4146d42137b2d8a2db41756e68dce51eb5a20de0032b"} Jan 31 09:54:08 crc kubenswrapper[4830]: I0131 09:54:08.283452 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd" event={"ID":"4edbd94d-6175-4ec1-831f-d68d8e272bd9","Type":"ContainerStarted","Data":"36b294b3d5eaaac311ff99bc0ff80e7f1b5bfd88cfa713aaa6eb93f599c2a940"} Jan 31 09:54:08 crc kubenswrapper[4830]: I0131 09:54:08.315611 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd" podStartSLOduration=1.842775344 podStartE2EDuration="2.315591004s" podCreationTimestamp="2026-01-31 09:54:06 +0000 UTC" firstStartedPulling="2026-01-31 09:54:07.356262493 +0000 UTC m=+3191.849624935" lastFinishedPulling="2026-01-31 09:54:07.829078153 +0000 UTC m=+3192.322440595" observedRunningTime="2026-01-31 09:54:08.305072155 +0000 UTC m=+3192.798434607" watchObservedRunningTime="2026-01-31 09:54:08.315591004 +0000 UTC m=+3192.808953446" Jan 31 09:54:22 crc kubenswrapper[4830]: I0131 09:54:22.442274 4830 generic.go:334] "Generic (PLEG): container finished" podID="4edbd94d-6175-4ec1-831f-d68d8e272bd9" containerID="71d383a4e50863ccb9fc4146d42137b2d8a2db41756e68dce51eb5a20de0032b" exitCode=0 Jan 31 09:54:22 crc kubenswrapper[4830]: I0131 09:54:22.442375 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd" event={"ID":"4edbd94d-6175-4ec1-831f-d68d8e272bd9","Type":"ContainerDied","Data":"71d383a4e50863ccb9fc4146d42137b2d8a2db41756e68dce51eb5a20de0032b"} Jan 31 09:54:23 crc kubenswrapper[4830]: I0131 09:54:23.996867 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd" Jan 31 09:54:24 crc kubenswrapper[4830]: I0131 09:54:24.117875 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4edbd94d-6175-4ec1-831f-d68d8e272bd9-ssh-key-openstack-edpm-ipam\") pod \"4edbd94d-6175-4ec1-831f-d68d8e272bd9\" (UID: \"4edbd94d-6175-4ec1-831f-d68d8e272bd9\") " Jan 31 09:54:24 crc kubenswrapper[4830]: I0131 09:54:24.117947 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4edbd94d-6175-4ec1-831f-d68d8e272bd9-logging-compute-config-data-0\") pod \"4edbd94d-6175-4ec1-831f-d68d8e272bd9\" (UID: \"4edbd94d-6175-4ec1-831f-d68d8e272bd9\") " Jan 31 09:54:24 crc kubenswrapper[4830]: I0131 09:54:24.118137 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsw2v\" (UniqueName: \"kubernetes.io/projected/4edbd94d-6175-4ec1-831f-d68d8e272bd9-kube-api-access-vsw2v\") pod \"4edbd94d-6175-4ec1-831f-d68d8e272bd9\" (UID: \"4edbd94d-6175-4ec1-831f-d68d8e272bd9\") " Jan 31 09:54:24 crc kubenswrapper[4830]: I0131 09:54:24.118257 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4edbd94d-6175-4ec1-831f-d68d8e272bd9-logging-compute-config-data-1\") pod \"4edbd94d-6175-4ec1-831f-d68d8e272bd9\" (UID: \"4edbd94d-6175-4ec1-831f-d68d8e272bd9\") " Jan 31 09:54:24 crc kubenswrapper[4830]: I0131 09:54:24.118346 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4edbd94d-6175-4ec1-831f-d68d8e272bd9-inventory\") pod \"4edbd94d-6175-4ec1-831f-d68d8e272bd9\" (UID: \"4edbd94d-6175-4ec1-831f-d68d8e272bd9\") " Jan 31 09:54:24 crc kubenswrapper[4830]: I0131 09:54:24.125509 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4edbd94d-6175-4ec1-831f-d68d8e272bd9-kube-api-access-vsw2v" (OuterVolumeSpecName: "kube-api-access-vsw2v") pod "4edbd94d-6175-4ec1-831f-d68d8e272bd9" (UID: "4edbd94d-6175-4ec1-831f-d68d8e272bd9"). InnerVolumeSpecName "kube-api-access-vsw2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:54:24 crc kubenswrapper[4830]: I0131 09:54:24.157245 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4edbd94d-6175-4ec1-831f-d68d8e272bd9-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "4edbd94d-6175-4ec1-831f-d68d8e272bd9" (UID: "4edbd94d-6175-4ec1-831f-d68d8e272bd9"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:54:24 crc kubenswrapper[4830]: I0131 09:54:24.160833 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4edbd94d-6175-4ec1-831f-d68d8e272bd9-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "4edbd94d-6175-4ec1-831f-d68d8e272bd9" (UID: "4edbd94d-6175-4ec1-831f-d68d8e272bd9"). InnerVolumeSpecName "logging-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:54:24 crc kubenswrapper[4830]: I0131 09:54:24.167717 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4edbd94d-6175-4ec1-831f-d68d8e272bd9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4edbd94d-6175-4ec1-831f-d68d8e272bd9" (UID: "4edbd94d-6175-4ec1-831f-d68d8e272bd9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:54:24 crc kubenswrapper[4830]: I0131 09:54:24.173922 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4edbd94d-6175-4ec1-831f-d68d8e272bd9-inventory" (OuterVolumeSpecName: "inventory") pod "4edbd94d-6175-4ec1-831f-d68d8e272bd9" (UID: "4edbd94d-6175-4ec1-831f-d68d8e272bd9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:54:24 crc kubenswrapper[4830]: I0131 09:54:24.223055 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsw2v\" (UniqueName: \"kubernetes.io/projected/4edbd94d-6175-4ec1-831f-d68d8e272bd9-kube-api-access-vsw2v\") on node \"crc\" DevicePath \"\"" Jan 31 09:54:24 crc kubenswrapper[4830]: I0131 09:54:24.223095 4830 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4edbd94d-6175-4ec1-831f-d68d8e272bd9-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 31 09:54:24 crc kubenswrapper[4830]: I0131 09:54:24.223108 4830 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4edbd94d-6175-4ec1-831f-d68d8e272bd9-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 09:54:24 crc kubenswrapper[4830]: I0131 09:54:24.223120 4830 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4edbd94d-6175-4ec1-831f-d68d8e272bd9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 09:54:24 crc kubenswrapper[4830]: I0131 09:54:24.223130 4830 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4edbd94d-6175-4ec1-831f-d68d8e272bd9-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 31 09:54:24 crc kubenswrapper[4830]: I0131 09:54:24.471062 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd" event={"ID":"4edbd94d-6175-4ec1-831f-d68d8e272bd9","Type":"ContainerDied","Data":"36b294b3d5eaaac311ff99bc0ff80e7f1b5bfd88cfa713aaa6eb93f599c2a940"} Jan 31 09:54:24 crc kubenswrapper[4830]: I0131 09:54:24.471110 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36b294b3d5eaaac311ff99bc0ff80e7f1b5bfd88cfa713aaa6eb93f599c2a940" Jan 31 09:54:24 crc kubenswrapper[4830]: I0131 09:54:24.471174 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5c2kd" Jan 31 09:55:30 crc kubenswrapper[4830]: I0131 09:55:30.342397 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g72k5"] Jan 31 09:55:30 crc kubenswrapper[4830]: E0131 09:55:30.343525 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4edbd94d-6175-4ec1-831f-d68d8e272bd9" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 31 09:55:30 crc kubenswrapper[4830]: I0131 09:55:30.343543 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="4edbd94d-6175-4ec1-831f-d68d8e272bd9" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 31 09:55:30 crc kubenswrapper[4830]: I0131 09:55:30.343884 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="4edbd94d-6175-4ec1-831f-d68d8e272bd9" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 31 09:55:30 crc kubenswrapper[4830]: I0131 09:55:30.345864 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g72k5" Jan 31 09:55:30 crc kubenswrapper[4830]: I0131 09:55:30.363653 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g72k5"] Jan 31 09:55:30 crc kubenswrapper[4830]: I0131 09:55:30.428735 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79b397d6-76a5-4bb7-bcd8-66b6480a87d6-catalog-content\") pod \"redhat-operators-g72k5\" (UID: \"79b397d6-76a5-4bb7-bcd8-66b6480a87d6\") " pod="openshift-marketplace/redhat-operators-g72k5" Jan 31 09:55:30 crc kubenswrapper[4830]: I0131 09:55:30.429108 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkkn6\" (UniqueName: \"kubernetes.io/projected/79b397d6-76a5-4bb7-bcd8-66b6480a87d6-kube-api-access-wkkn6\") pod \"redhat-operators-g72k5\" (UID: \"79b397d6-76a5-4bb7-bcd8-66b6480a87d6\") " pod="openshift-marketplace/redhat-operators-g72k5" Jan 31 09:55:30 crc kubenswrapper[4830]: I0131 09:55:30.429633 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79b397d6-76a5-4bb7-bcd8-66b6480a87d6-utilities\") pod \"redhat-operators-g72k5\" (UID: \"79b397d6-76a5-4bb7-bcd8-66b6480a87d6\") " pod="openshift-marketplace/redhat-operators-g72k5" Jan 31 09:55:30 crc kubenswrapper[4830]: I0131 09:55:30.531805 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79b397d6-76a5-4bb7-bcd8-66b6480a87d6-utilities\") pod \"redhat-operators-g72k5\" (UID: \"79b397d6-76a5-4bb7-bcd8-66b6480a87d6\") " pod="openshift-marketplace/redhat-operators-g72k5" Jan 31 09:55:30 crc kubenswrapper[4830]: I0131 09:55:30.531930 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79b397d6-76a5-4bb7-bcd8-66b6480a87d6-catalog-content\") pod \"redhat-operators-g72k5\" (UID: \"79b397d6-76a5-4bb7-bcd8-66b6480a87d6\") " pod="openshift-marketplace/redhat-operators-g72k5" Jan 31 09:55:30 crc kubenswrapper[4830]: I0131 09:55:30.532014 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkkn6\" (UniqueName: 
\"kubernetes.io/projected/79b397d6-76a5-4bb7-bcd8-66b6480a87d6-kube-api-access-wkkn6\") pod \"redhat-operators-g72k5\" (UID: \"79b397d6-76a5-4bb7-bcd8-66b6480a87d6\") " pod="openshift-marketplace/redhat-operators-g72k5" Jan 31 09:55:30 crc kubenswrapper[4830]: I0131 09:55:30.532637 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79b397d6-76a5-4bb7-bcd8-66b6480a87d6-utilities\") pod \"redhat-operators-g72k5\" (UID: \"79b397d6-76a5-4bb7-bcd8-66b6480a87d6\") " pod="openshift-marketplace/redhat-operators-g72k5" Jan 31 09:55:30 crc kubenswrapper[4830]: I0131 09:55:30.532865 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79b397d6-76a5-4bb7-bcd8-66b6480a87d6-catalog-content\") pod \"redhat-operators-g72k5\" (UID: \"79b397d6-76a5-4bb7-bcd8-66b6480a87d6\") " pod="openshift-marketplace/redhat-operators-g72k5" Jan 31 09:55:30 crc kubenswrapper[4830]: I0131 09:55:30.569225 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkkn6\" (UniqueName: \"kubernetes.io/projected/79b397d6-76a5-4bb7-bcd8-66b6480a87d6-kube-api-access-wkkn6\") pod \"redhat-operators-g72k5\" (UID: \"79b397d6-76a5-4bb7-bcd8-66b6480a87d6\") " pod="openshift-marketplace/redhat-operators-g72k5" Jan 31 09:55:30 crc kubenswrapper[4830]: I0131 09:55:30.681982 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g72k5" Jan 31 09:55:31 crc kubenswrapper[4830]: I0131 09:55:31.256974 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g72k5"] Jan 31 09:55:32 crc kubenswrapper[4830]: I0131 09:55:32.258748 4830 generic.go:334] "Generic (PLEG): container finished" podID="79b397d6-76a5-4bb7-bcd8-66b6480a87d6" containerID="6b24de30d3ef9eed5eff75fd2c2e17ee1b5d78598af432bd7428e06333de3d10" exitCode=0 Jan 31 09:55:32 crc kubenswrapper[4830]: I0131 09:55:32.262254 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 09:55:32 crc kubenswrapper[4830]: I0131 09:55:32.269463 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g72k5" event={"ID":"79b397d6-76a5-4bb7-bcd8-66b6480a87d6","Type":"ContainerDied","Data":"6b24de30d3ef9eed5eff75fd2c2e17ee1b5d78598af432bd7428e06333de3d10"} Jan 31 09:55:32 crc kubenswrapper[4830]: I0131 09:55:32.269513 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g72k5" event={"ID":"79b397d6-76a5-4bb7-bcd8-66b6480a87d6","Type":"ContainerStarted","Data":"a995573670445367f7a9ceb60e660b337e249f608c1d843728017b9fdfe4b2c3"} Jan 31 09:55:33 crc kubenswrapper[4830]: I0131 09:55:33.292851 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g72k5" event={"ID":"79b397d6-76a5-4bb7-bcd8-66b6480a87d6","Type":"ContainerStarted","Data":"05364f08999b274dbd1b05d56dd14a8caf7444f32592ccf76155339ca6498c8b"} Jan 31 09:55:39 crc kubenswrapper[4830]: I0131 09:55:39.356287 4830 generic.go:334] "Generic (PLEG): container finished" podID="79b397d6-76a5-4bb7-bcd8-66b6480a87d6" containerID="05364f08999b274dbd1b05d56dd14a8caf7444f32592ccf76155339ca6498c8b" exitCode=0 Jan 31 09:55:39 crc kubenswrapper[4830]: I0131 09:55:39.356349 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g72k5" 
event={"ID":"79b397d6-76a5-4bb7-bcd8-66b6480a87d6","Type":"ContainerDied","Data":"05364f08999b274dbd1b05d56dd14a8caf7444f32592ccf76155339ca6498c8b"} Jan 31 09:55:40 crc kubenswrapper[4830]: I0131 09:55:40.369965 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g72k5" event={"ID":"79b397d6-76a5-4bb7-bcd8-66b6480a87d6","Type":"ContainerStarted","Data":"fe5640b80ad8c7d7795cee692ba0ae9ff1f92e249098a526c75a48d2a766abc4"} Jan 31 09:55:40 crc kubenswrapper[4830]: I0131 09:55:40.398278 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g72k5" podStartSLOduration=2.872484522 podStartE2EDuration="10.398257803s" podCreationTimestamp="2026-01-31 09:55:30 +0000 UTC" firstStartedPulling="2026-01-31 09:55:32.261926018 +0000 UTC m=+3276.755288460" lastFinishedPulling="2026-01-31 09:55:39.787699299 +0000 UTC m=+3284.281061741" observedRunningTime="2026-01-31 09:55:40.393054895 +0000 UTC m=+3284.886417337" watchObservedRunningTime="2026-01-31 09:55:40.398257803 +0000 UTC m=+3284.891620245" Jan 31 09:55:40 crc kubenswrapper[4830]: I0131 09:55:40.682960 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g72k5" Jan 31 09:55:40 crc kubenswrapper[4830]: I0131 09:55:40.683051 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g72k5" Jan 31 09:55:41 crc kubenswrapper[4830]: I0131 09:55:41.743686 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g72k5" podUID="79b397d6-76a5-4bb7-bcd8-66b6480a87d6" containerName="registry-server" probeResult="failure" output=< Jan 31 09:55:41 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 09:55:41 crc kubenswrapper[4830]: > Jan 31 09:55:51 crc kubenswrapper[4830]: I0131 09:55:51.740457 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g72k5" podUID="79b397d6-76a5-4bb7-bcd8-66b6480a87d6" containerName="registry-server" probeResult="failure" output=< Jan 31 09:55:51 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 09:55:51 crc kubenswrapper[4830]: > Jan 31 09:56:00 crc kubenswrapper[4830]: I0131 09:56:00.750592 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g72k5" Jan 31 09:56:00 crc kubenswrapper[4830]: I0131 09:56:00.813462 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g72k5" Jan 31 09:56:01 crc kubenswrapper[4830]: I0131 09:56:01.579275 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g72k5"] Jan 31 09:56:02 crc kubenswrapper[4830]: I0131 09:56:02.648631 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g72k5" podUID="79b397d6-76a5-4bb7-bcd8-66b6480a87d6" containerName="registry-server" containerID="cri-o://fe5640b80ad8c7d7795cee692ba0ae9ff1f92e249098a526c75a48d2a766abc4" gracePeriod=2 Jan 31 09:56:03 crc kubenswrapper[4830]: I0131 09:56:03.309350 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g72k5" Jan 31 09:56:03 crc kubenswrapper[4830]: I0131 09:56:03.403862 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79b397d6-76a5-4bb7-bcd8-66b6480a87d6-utilities\") pod \"79b397d6-76a5-4bb7-bcd8-66b6480a87d6\" (UID: \"79b397d6-76a5-4bb7-bcd8-66b6480a87d6\") " Jan 31 09:56:03 crc kubenswrapper[4830]: I0131 09:56:03.404194 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79b397d6-76a5-4bb7-bcd8-66b6480a87d6-catalog-content\") pod \"79b397d6-76a5-4bb7-bcd8-66b6480a87d6\" (UID: \"79b397d6-76a5-4bb7-bcd8-66b6480a87d6\") " Jan 31 09:56:03 crc kubenswrapper[4830]: I0131 09:56:03.404378 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkkn6\" (UniqueName: \"kubernetes.io/projected/79b397d6-76a5-4bb7-bcd8-66b6480a87d6-kube-api-access-wkkn6\") pod \"79b397d6-76a5-4bb7-bcd8-66b6480a87d6\" (UID: \"79b397d6-76a5-4bb7-bcd8-66b6480a87d6\") " Jan 31 09:56:03 crc kubenswrapper[4830]: I0131 09:56:03.405227 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79b397d6-76a5-4bb7-bcd8-66b6480a87d6-utilities" (OuterVolumeSpecName: "utilities") pod "79b397d6-76a5-4bb7-bcd8-66b6480a87d6" (UID: "79b397d6-76a5-4bb7-bcd8-66b6480a87d6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:56:03 crc kubenswrapper[4830]: I0131 09:56:03.410916 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79b397d6-76a5-4bb7-bcd8-66b6480a87d6-kube-api-access-wkkn6" (OuterVolumeSpecName: "kube-api-access-wkkn6") pod "79b397d6-76a5-4bb7-bcd8-66b6480a87d6" (UID: "79b397d6-76a5-4bb7-bcd8-66b6480a87d6"). InnerVolumeSpecName "kube-api-access-wkkn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:56:03 crc kubenswrapper[4830]: I0131 09:56:03.508353 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkkn6\" (UniqueName: \"kubernetes.io/projected/79b397d6-76a5-4bb7-bcd8-66b6480a87d6-kube-api-access-wkkn6\") on node \"crc\" DevicePath \"\"" Jan 31 09:56:03 crc kubenswrapper[4830]: I0131 09:56:03.508680 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79b397d6-76a5-4bb7-bcd8-66b6480a87d6-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:56:03 crc kubenswrapper[4830]: I0131 09:56:03.571982 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79b397d6-76a5-4bb7-bcd8-66b6480a87d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79b397d6-76a5-4bb7-bcd8-66b6480a87d6" (UID: "79b397d6-76a5-4bb7-bcd8-66b6480a87d6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:56:03 crc kubenswrapper[4830]: I0131 09:56:03.610767 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79b397d6-76a5-4bb7-bcd8-66b6480a87d6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:56:03 crc kubenswrapper[4830]: I0131 09:56:03.661637 4830 generic.go:334] "Generic (PLEG): container finished" podID="79b397d6-76a5-4bb7-bcd8-66b6480a87d6" containerID="fe5640b80ad8c7d7795cee692ba0ae9ff1f92e249098a526c75a48d2a766abc4" exitCode=0 Jan 31 09:56:03 crc kubenswrapper[4830]: I0131 09:56:03.661682 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g72k5" event={"ID":"79b397d6-76a5-4bb7-bcd8-66b6480a87d6","Type":"ContainerDied","Data":"fe5640b80ad8c7d7795cee692ba0ae9ff1f92e249098a526c75a48d2a766abc4"} Jan 31 09:56:03 crc kubenswrapper[4830]: I0131 09:56:03.661712 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g72k5" event={"ID":"79b397d6-76a5-4bb7-bcd8-66b6480a87d6","Type":"ContainerDied","Data":"a995573670445367f7a9ceb60e660b337e249f608c1d843728017b9fdfe4b2c3"} Jan 31 09:56:03 crc kubenswrapper[4830]: I0131 09:56:03.661712 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g72k5" Jan 31 09:56:03 crc kubenswrapper[4830]: I0131 09:56:03.661764 4830 scope.go:117] "RemoveContainer" containerID="fe5640b80ad8c7d7795cee692ba0ae9ff1f92e249098a526c75a48d2a766abc4" Jan 31 09:56:03 crc kubenswrapper[4830]: I0131 09:56:03.686552 4830 scope.go:117] "RemoveContainer" containerID="05364f08999b274dbd1b05d56dd14a8caf7444f32592ccf76155339ca6498c8b" Jan 31 09:56:03 crc kubenswrapper[4830]: I0131 09:56:03.704203 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g72k5"] Jan 31 09:56:03 crc kubenswrapper[4830]: I0131 09:56:03.721484 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g72k5"] Jan 31 09:56:03 crc kubenswrapper[4830]: I0131 09:56:03.742062 4830 scope.go:117] "RemoveContainer" containerID="6b24de30d3ef9eed5eff75fd2c2e17ee1b5d78598af432bd7428e06333de3d10" Jan 31 09:56:03 crc kubenswrapper[4830]: I0131 09:56:03.776634 4830 scope.go:117] "RemoveContainer" containerID="fe5640b80ad8c7d7795cee692ba0ae9ff1f92e249098a526c75a48d2a766abc4" Jan 31 09:56:03 crc kubenswrapper[4830]: E0131 09:56:03.777245 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe5640b80ad8c7d7795cee692ba0ae9ff1f92e249098a526c75a48d2a766abc4\": container with ID starting with fe5640b80ad8c7d7795cee692ba0ae9ff1f92e249098a526c75a48d2a766abc4 not found: ID does not exist" containerID="fe5640b80ad8c7d7795cee692ba0ae9ff1f92e249098a526c75a48d2a766abc4" Jan 31 09:56:03 crc kubenswrapper[4830]: I0131 09:56:03.777292 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe5640b80ad8c7d7795cee692ba0ae9ff1f92e249098a526c75a48d2a766abc4"} err="failed to get container status \"fe5640b80ad8c7d7795cee692ba0ae9ff1f92e249098a526c75a48d2a766abc4\": rpc error: code = NotFound desc = could not find container \"fe5640b80ad8c7d7795cee692ba0ae9ff1f92e249098a526c75a48d2a766abc4\": container with ID starting with fe5640b80ad8c7d7795cee692ba0ae9ff1f92e249098a526c75a48d2a766abc4 not found: ID does not exist" Jan 31 09:56:03 crc 
kubenswrapper[4830]: I0131 09:56:03.777344 4830 scope.go:117] "RemoveContainer" containerID="05364f08999b274dbd1b05d56dd14a8caf7444f32592ccf76155339ca6498c8b" Jan 31 09:56:03 crc kubenswrapper[4830]: E0131 09:56:03.777665 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05364f08999b274dbd1b05d56dd14a8caf7444f32592ccf76155339ca6498c8b\": container with ID starting with 05364f08999b274dbd1b05d56dd14a8caf7444f32592ccf76155339ca6498c8b not found: ID does not exist" containerID="05364f08999b274dbd1b05d56dd14a8caf7444f32592ccf76155339ca6498c8b" Jan 31 09:56:03 crc kubenswrapper[4830]: I0131 09:56:03.777716 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05364f08999b274dbd1b05d56dd14a8caf7444f32592ccf76155339ca6498c8b"} err="failed to get container status \"05364f08999b274dbd1b05d56dd14a8caf7444f32592ccf76155339ca6498c8b\": rpc error: code = NotFound desc = could not find container \"05364f08999b274dbd1b05d56dd14a8caf7444f32592ccf76155339ca6498c8b\": container with ID starting with 05364f08999b274dbd1b05d56dd14a8caf7444f32592ccf76155339ca6498c8b not found: ID does not exist" Jan 31 09:56:03 crc kubenswrapper[4830]: I0131 09:56:03.777750 4830 scope.go:117] "RemoveContainer" containerID="6b24de30d3ef9eed5eff75fd2c2e17ee1b5d78598af432bd7428e06333de3d10" Jan 31 09:56:03 crc kubenswrapper[4830]: E0131 09:56:03.778253 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b24de30d3ef9eed5eff75fd2c2e17ee1b5d78598af432bd7428e06333de3d10\": container with ID starting with 6b24de30d3ef9eed5eff75fd2c2e17ee1b5d78598af432bd7428e06333de3d10 not found: ID does not exist" containerID="6b24de30d3ef9eed5eff75fd2c2e17ee1b5d78598af432bd7428e06333de3d10" Jan 31 09:56:03 crc kubenswrapper[4830]: I0131 09:56:03.778285 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b24de30d3ef9eed5eff75fd2c2e17ee1b5d78598af432bd7428e06333de3d10"} err="failed to get container status \"6b24de30d3ef9eed5eff75fd2c2e17ee1b5d78598af432bd7428e06333de3d10\": rpc error: code = NotFound desc = could not find container \"6b24de30d3ef9eed5eff75fd2c2e17ee1b5d78598af432bd7428e06333de3d10\": container with ID starting with 6b24de30d3ef9eed5eff75fd2c2e17ee1b5d78598af432bd7428e06333de3d10 not found: ID does not exist" Jan 31 09:56:04 crc kubenswrapper[4830]: I0131 09:56:04.268784 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79b397d6-76a5-4bb7-bcd8-66b6480a87d6" path="/var/lib/kubelet/pods/79b397d6-76a5-4bb7-bcd8-66b6480a87d6/volumes" Jan 31 09:56:14 crc kubenswrapper[4830]: I0131 09:56:14.353890 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:56:14 crc kubenswrapper[4830]: I0131 09:56:14.354447 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:56:41 crc kubenswrapper[4830]: I0131 09:56:41.582711 4830 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-marketplace-47nk4"] Jan 31 09:56:41 crc kubenswrapper[4830]: E0131 09:56:41.584126 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79b397d6-76a5-4bb7-bcd8-66b6480a87d6" containerName="extract-utilities" Jan 31 09:56:41 crc kubenswrapper[4830]: I0131 09:56:41.584147 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="79b397d6-76a5-4bb7-bcd8-66b6480a87d6" containerName="extract-utilities" Jan 31 09:56:41 crc kubenswrapper[4830]: E0131 09:56:41.584172 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79b397d6-76a5-4bb7-bcd8-66b6480a87d6" containerName="registry-server" Jan 31 09:56:41 crc kubenswrapper[4830]: I0131 09:56:41.584180 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="79b397d6-76a5-4bb7-bcd8-66b6480a87d6" containerName="registry-server" Jan 31 09:56:41 crc kubenswrapper[4830]: E0131 09:56:41.584206 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79b397d6-76a5-4bb7-bcd8-66b6480a87d6" containerName="extract-content" Jan 31 09:56:41 crc kubenswrapper[4830]: I0131 09:56:41.584213 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="79b397d6-76a5-4bb7-bcd8-66b6480a87d6" containerName="extract-content" Jan 31 09:56:41 crc kubenswrapper[4830]: I0131 09:56:41.584499 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="79b397d6-76a5-4bb7-bcd8-66b6480a87d6" containerName="registry-server" Jan 31 09:56:41 crc kubenswrapper[4830]: I0131 09:56:41.586608 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-47nk4" Jan 31 09:56:41 crc kubenswrapper[4830]: I0131 09:56:41.605047 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-47nk4"] Jan 31 09:56:41 crc kubenswrapper[4830]: I0131 09:56:41.699567 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm9k8\" (UniqueName: \"kubernetes.io/projected/ba5f29be-2d9d-479e-813f-b0795a1dab32-kube-api-access-fm9k8\") pod \"redhat-marketplace-47nk4\" (UID: \"ba5f29be-2d9d-479e-813f-b0795a1dab32\") " pod="openshift-marketplace/redhat-marketplace-47nk4" Jan 31 09:56:41 crc kubenswrapper[4830]: I0131 09:56:41.700134 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba5f29be-2d9d-479e-813f-b0795a1dab32-catalog-content\") pod \"redhat-marketplace-47nk4\" (UID: \"ba5f29be-2d9d-479e-813f-b0795a1dab32\") " pod="openshift-marketplace/redhat-marketplace-47nk4" Jan 31 09:56:41 crc kubenswrapper[4830]: I0131 09:56:41.700542 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba5f29be-2d9d-479e-813f-b0795a1dab32-utilities\") pod \"redhat-marketplace-47nk4\" (UID: \"ba5f29be-2d9d-479e-813f-b0795a1dab32\") " pod="openshift-marketplace/redhat-marketplace-47nk4" Jan 31 09:56:41 crc kubenswrapper[4830]: I0131 09:56:41.804287 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba5f29be-2d9d-479e-813f-b0795a1dab32-utilities\") pod \"redhat-marketplace-47nk4\" (UID: \"ba5f29be-2d9d-479e-813f-b0795a1dab32\") " pod="openshift-marketplace/redhat-marketplace-47nk4" Jan 31 09:56:41 crc kubenswrapper[4830]: I0131 09:56:41.804434 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fm9k8\" (UniqueName: \"kubernetes.io/projected/ba5f29be-2d9d-479e-813f-b0795a1dab32-kube-api-access-fm9k8\") pod \"redhat-marketplace-47nk4\" (UID: \"ba5f29be-2d9d-479e-813f-b0795a1dab32\") " pod="openshift-marketplace/redhat-marketplace-47nk4" Jan 31 09:56:41 crc kubenswrapper[4830]: I0131 09:56:41.804550 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba5f29be-2d9d-479e-813f-b0795a1dab32-catalog-content\") pod \"redhat-marketplace-47nk4\" (UID: \"ba5f29be-2d9d-479e-813f-b0795a1dab32\") " pod="openshift-marketplace/redhat-marketplace-47nk4" Jan 31 09:56:41 crc kubenswrapper[4830]: I0131 09:56:41.805029 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba5f29be-2d9d-479e-813f-b0795a1dab32-utilities\") pod \"redhat-marketplace-47nk4\" (UID: \"ba5f29be-2d9d-479e-813f-b0795a1dab32\") " pod="openshift-marketplace/redhat-marketplace-47nk4" Jan 31 09:56:41 crc kubenswrapper[4830]: I0131 09:56:41.805173 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba5f29be-2d9d-479e-813f-b0795a1dab32-catalog-content\") pod \"redhat-marketplace-47nk4\" (UID: \"ba5f29be-2d9d-479e-813f-b0795a1dab32\") " pod="openshift-marketplace/redhat-marketplace-47nk4" Jan 31 09:56:41 crc kubenswrapper[4830]: I0131 09:56:41.827065 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm9k8\" (UniqueName: \"kubernetes.io/projected/ba5f29be-2d9d-479e-813f-b0795a1dab32-kube-api-access-fm9k8\") pod \"redhat-marketplace-47nk4\" (UID: \"ba5f29be-2d9d-479e-813f-b0795a1dab32\") " pod="openshift-marketplace/redhat-marketplace-47nk4" Jan 31 09:56:41 crc kubenswrapper[4830]: I0131 09:56:41.924453 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-47nk4" Jan 31 09:56:42 crc kubenswrapper[4830]: I0131 09:56:42.517764 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-47nk4"] Jan 31 09:56:43 crc kubenswrapper[4830]: I0131 09:56:43.113607 4830 generic.go:334] "Generic (PLEG): container finished" podID="ba5f29be-2d9d-479e-813f-b0795a1dab32" containerID="965e7e6ba7553cecd0b15febfb599c584c9d4322df9189af7c8ed44cf85e2973" exitCode=0 Jan 31 09:56:43 crc kubenswrapper[4830]: I0131 09:56:43.113960 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47nk4" event={"ID":"ba5f29be-2d9d-479e-813f-b0795a1dab32","Type":"ContainerDied","Data":"965e7e6ba7553cecd0b15febfb599c584c9d4322df9189af7c8ed44cf85e2973"} Jan 31 09:56:43 crc kubenswrapper[4830]: I0131 09:56:43.114054 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47nk4" event={"ID":"ba5f29be-2d9d-479e-813f-b0795a1dab32","Type":"ContainerStarted","Data":"186cf348b34961866ef497abbbca36d449a0087b8cd6d10d0dfed104784337bb"} Jan 31 09:56:44 crc kubenswrapper[4830]: I0131 09:56:44.130496 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47nk4" event={"ID":"ba5f29be-2d9d-479e-813f-b0795a1dab32","Type":"ContainerStarted","Data":"369fa3526eb5d95fb20b126bd01b039a407f77f34556cc0312a3247074180f12"} Jan 31 09:56:44 crc kubenswrapper[4830]: I0131 09:56:44.353762 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:56:44 crc kubenswrapper[4830]: I0131 09:56:44.353861 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:56:45 crc kubenswrapper[4830]: I0131 09:56:45.141457 4830 generic.go:334] "Generic (PLEG): container finished" podID="ba5f29be-2d9d-479e-813f-b0795a1dab32" containerID="369fa3526eb5d95fb20b126bd01b039a407f77f34556cc0312a3247074180f12" exitCode=0 Jan 31 09:56:45 crc kubenswrapper[4830]: I0131 09:56:45.141536 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47nk4" event={"ID":"ba5f29be-2d9d-479e-813f-b0795a1dab32","Type":"ContainerDied","Data":"369fa3526eb5d95fb20b126bd01b039a407f77f34556cc0312a3247074180f12"} Jan 31 09:56:46 crc kubenswrapper[4830]: I0131 09:56:46.155830 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47nk4" event={"ID":"ba5f29be-2d9d-479e-813f-b0795a1dab32","Type":"ContainerStarted","Data":"e17af18428ef5dab8ffb97288a3d7e0a21f005d029ef0fd1967a4f20960ebdca"} Jan 31 09:56:46 crc kubenswrapper[4830]: I0131 09:56:46.185597 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-47nk4" podStartSLOduration=2.640514382 podStartE2EDuration="5.185574491s" podCreationTimestamp="2026-01-31 09:56:41 +0000 UTC" firstStartedPulling="2026-01-31 09:56:43.119178859 +0000 UTC m=+3347.612541301" lastFinishedPulling="2026-01-31 
09:56:45.664238968 +0000 UTC m=+3350.157601410" observedRunningTime="2026-01-31 09:56:46.173915219 +0000 UTC m=+3350.667277661" watchObservedRunningTime="2026-01-31 09:56:46.185574491 +0000 UTC m=+3350.678936933" Jan 31 09:56:51 crc kubenswrapper[4830]: I0131 09:56:51.925247 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-47nk4" Jan 31 09:56:51 crc kubenswrapper[4830]: I0131 09:56:51.925898 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-47nk4" Jan 31 09:56:51 crc kubenswrapper[4830]: I0131 09:56:51.977657 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-47nk4" Jan 31 09:56:52 crc kubenswrapper[4830]: I0131 09:56:52.295858 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-47nk4" Jan 31 09:56:52 crc kubenswrapper[4830]: I0131 09:56:52.355505 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-47nk4"] Jan 31 09:56:54 crc kubenswrapper[4830]: I0131 09:56:54.263243 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-47nk4" podUID="ba5f29be-2d9d-479e-813f-b0795a1dab32" containerName="registry-server" containerID="cri-o://e17af18428ef5dab8ffb97288a3d7e0a21f005d029ef0fd1967a4f20960ebdca" gracePeriod=2 Jan 31 09:56:55 crc kubenswrapper[4830]: I0131 09:56:55.289935 4830 generic.go:334] "Generic (PLEG): container finished" podID="ba5f29be-2d9d-479e-813f-b0795a1dab32" containerID="e17af18428ef5dab8ffb97288a3d7e0a21f005d029ef0fd1967a4f20960ebdca" exitCode=0 Jan 31 09:56:55 crc kubenswrapper[4830]: I0131 09:56:55.290920 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47nk4" event={"ID":"ba5f29be-2d9d-479e-813f-b0795a1dab32","Type":"ContainerDied","Data":"e17af18428ef5dab8ffb97288a3d7e0a21f005d029ef0fd1967a4f20960ebdca"} Jan 31 09:56:55 crc kubenswrapper[4830]: I0131 09:56:55.423928 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-47nk4" Jan 31 09:56:55 crc kubenswrapper[4830]: I0131 09:56:55.511646 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba5f29be-2d9d-479e-813f-b0795a1dab32-utilities\") pod \"ba5f29be-2d9d-479e-813f-b0795a1dab32\" (UID: \"ba5f29be-2d9d-479e-813f-b0795a1dab32\") " Jan 31 09:56:55 crc kubenswrapper[4830]: I0131 09:56:55.512062 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba5f29be-2d9d-479e-813f-b0795a1dab32-catalog-content\") pod \"ba5f29be-2d9d-479e-813f-b0795a1dab32\" (UID: \"ba5f29be-2d9d-479e-813f-b0795a1dab32\") " Jan 31 09:56:55 crc kubenswrapper[4830]: I0131 09:56:55.513035 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm9k8\" (UniqueName: \"kubernetes.io/projected/ba5f29be-2d9d-479e-813f-b0795a1dab32-kube-api-access-fm9k8\") pod \"ba5f29be-2d9d-479e-813f-b0795a1dab32\" (UID: \"ba5f29be-2d9d-479e-813f-b0795a1dab32\") " Jan 31 09:56:55 crc kubenswrapper[4830]: I0131 09:56:55.513362 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba5f29be-2d9d-479e-813f-b0795a1dab32-utilities" (OuterVolumeSpecName: "utilities") pod "ba5f29be-2d9d-479e-813f-b0795a1dab32" (UID: "ba5f29be-2d9d-479e-813f-b0795a1dab32"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:56:55 crc kubenswrapper[4830]: I0131 09:56:55.514201 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba5f29be-2d9d-479e-813f-b0795a1dab32-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:56:55 crc kubenswrapper[4830]: I0131 09:56:55.522544 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba5f29be-2d9d-479e-813f-b0795a1dab32-kube-api-access-fm9k8" (OuterVolumeSpecName: "kube-api-access-fm9k8") pod "ba5f29be-2d9d-479e-813f-b0795a1dab32" (UID: "ba5f29be-2d9d-479e-813f-b0795a1dab32"). InnerVolumeSpecName "kube-api-access-fm9k8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:56:55 crc kubenswrapper[4830]: I0131 09:56:55.536546 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba5f29be-2d9d-479e-813f-b0795a1dab32-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba5f29be-2d9d-479e-813f-b0795a1dab32" (UID: "ba5f29be-2d9d-479e-813f-b0795a1dab32"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:56:55 crc kubenswrapper[4830]: I0131 09:56:55.617037 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba5f29be-2d9d-479e-813f-b0795a1dab32-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:56:55 crc kubenswrapper[4830]: I0131 09:56:55.617081 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm9k8\" (UniqueName: \"kubernetes.io/projected/ba5f29be-2d9d-479e-813f-b0795a1dab32-kube-api-access-fm9k8\") on node \"crc\" DevicePath \"\"" Jan 31 09:56:56 crc kubenswrapper[4830]: I0131 09:56:56.317039 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47nk4" event={"ID":"ba5f29be-2d9d-479e-813f-b0795a1dab32","Type":"ContainerDied","Data":"186cf348b34961866ef497abbbca36d449a0087b8cd6d10d0dfed104784337bb"} Jan 31 09:56:56 crc kubenswrapper[4830]: I0131 09:56:56.317405 4830 scope.go:117] "RemoveContainer" containerID="e17af18428ef5dab8ffb97288a3d7e0a21f005d029ef0fd1967a4f20960ebdca" Jan 31 09:56:56 crc kubenswrapper[4830]: I0131 09:56:56.318246 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-47nk4" Jan 31 09:56:56 crc kubenswrapper[4830]: I0131 09:56:56.373961 4830 scope.go:117] "RemoveContainer" containerID="369fa3526eb5d95fb20b126bd01b039a407f77f34556cc0312a3247074180f12" Jan 31 09:56:56 crc kubenswrapper[4830]: I0131 09:56:56.377953 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-47nk4"] Jan 31 09:56:56 crc kubenswrapper[4830]: I0131 09:56:56.394120 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-47nk4"] Jan 31 09:56:56 crc kubenswrapper[4830]: I0131 09:56:56.408672 4830 scope.go:117] "RemoveContainer" containerID="965e7e6ba7553cecd0b15febfb599c584c9d4322df9189af7c8ed44cf85e2973" Jan 31 09:56:58 crc kubenswrapper[4830]: I0131 09:56:58.268369 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba5f29be-2d9d-479e-813f-b0795a1dab32" path="/var/lib/kubelet/pods/ba5f29be-2d9d-479e-813f-b0795a1dab32/volumes" Jan 31 09:57:06 crc kubenswrapper[4830]: E0131 09:57:06.240506 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba5f29be_2d9d_479e_813f_b0795a1dab32.slice/crio-186cf348b34961866ef497abbbca36d449a0087b8cd6d10d0dfed104784337bb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba5f29be_2d9d_479e_813f_b0795a1dab32.slice\": RecentStats: unable to find data in memory cache]" Jan 31 09:57:11 crc kubenswrapper[4830]: E0131 09:57:11.467667 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba5f29be_2d9d_479e_813f_b0795a1dab32.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba5f29be_2d9d_479e_813f_b0795a1dab32.slice/crio-186cf348b34961866ef497abbbca36d449a0087b8cd6d10d0dfed104784337bb\": RecentStats: unable to find data in memory cache]" Jan 31 09:57:14 crc kubenswrapper[4830]: I0131 09:57:14.352845 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:57:14 crc kubenswrapper[4830]: I0131 09:57:14.353447 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:57:14 crc kubenswrapper[4830]: I0131 09:57:14.353501 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 09:57:14 crc kubenswrapper[4830]: I0131 09:57:14.354528 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"22b789d2ef559ce66600680e06b87ef5f548352affe5608d41b78430df090d48"} pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 09:57:14 crc kubenswrapper[4830]: I0131 09:57:14.354596 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" containerID="cri-o://22b789d2ef559ce66600680e06b87ef5f548352affe5608d41b78430df090d48" gracePeriod=600 Jan 31 09:57:14 crc kubenswrapper[4830]: I0131 09:57:14.699617 4830 generic.go:334] "Generic (PLEG): container finished" podID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerID="22b789d2ef559ce66600680e06b87ef5f548352affe5608d41b78430df090d48" exitCode=0 Jan 31 09:57:14 crc kubenswrapper[4830]: I0131 09:57:14.699690 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerDied","Data":"22b789d2ef559ce66600680e06b87ef5f548352affe5608d41b78430df090d48"} Jan 31 09:57:14 crc kubenswrapper[4830]: I0131 09:57:14.700035 4830 scope.go:117] "RemoveContainer" containerID="4c8d3d87e516871151f011a4e6c08fa4f0c34e4a44cf02a2e961fcf4fe1f40c9" Jan 31 09:57:15 crc kubenswrapper[4830]: I0131 09:57:15.711692 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerStarted","Data":"9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163"} Jan 31 09:57:16 crc kubenswrapper[4830]: E0131 09:57:16.526970 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba5f29be_2d9d_479e_813f_b0795a1dab32.slice/crio-186cf348b34961866ef497abbbca36d449a0087b8cd6d10d0dfed104784337bb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba5f29be_2d9d_479e_813f_b0795a1dab32.slice\": RecentStats: unable to find data in memory cache]" Jan 31 09:57:26 crc kubenswrapper[4830]: E0131 09:57:26.448223 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba5f29be_2d9d_479e_813f_b0795a1dab32.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba5f29be_2d9d_479e_813f_b0795a1dab32.slice/crio-186cf348b34961866ef497abbbca36d449a0087b8cd6d10d0dfed104784337bb\": RecentStats: unable to find data in memory cache]" Jan 31 09:57:26 crc kubenswrapper[4830]: E0131 09:57:26.641928 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba5f29be_2d9d_479e_813f_b0795a1dab32.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba5f29be_2d9d_479e_813f_b0795a1dab32.slice/crio-186cf348b34961866ef497abbbca36d449a0087b8cd6d10d0dfed104784337bb\": RecentStats: unable to find data in memory cache]" Jan 31 09:57:36 crc kubenswrapper[4830]: E0131 09:57:36.959080 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba5f29be_2d9d_479e_813f_b0795a1dab32.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba5f29be_2d9d_479e_813f_b0795a1dab32.slice/crio-186cf348b34961866ef497abbbca36d449a0087b8cd6d10d0dfed104784337bb\": RecentStats: unable to find data in memory cache]" Jan 31 09:57:41 crc kubenswrapper[4830]: E0131 09:57:41.181901 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba5f29be_2d9d_479e_813f_b0795a1dab32.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba5f29be_2d9d_479e_813f_b0795a1dab32.slice/crio-186cf348b34961866ef497abbbca36d449a0087b8cd6d10d0dfed104784337bb\": RecentStats: unable to find data in memory cache]" Jan 31 09:57:47 crc kubenswrapper[4830]: E0131 09:57:47.268421 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba5f29be_2d9d_479e_813f_b0795a1dab32.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba5f29be_2d9d_479e_813f_b0795a1dab32.slice/crio-186cf348b34961866ef497abbbca36d449a0087b8cd6d10d0dfed104784337bb\": RecentStats: unable to find data in memory cache]" Jan 31 09:57:48 crc kubenswrapper[4830]: E0131 09:57:48.105007 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba5f29be_2d9d_479e_813f_b0795a1dab32.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba5f29be_2d9d_479e_813f_b0795a1dab32.slice/crio-186cf348b34961866ef497abbbca36d449a0087b8cd6d10d0dfed104784337bb\": RecentStats: unable to find data in memory cache]" Jan 31 09:57:48 crc kubenswrapper[4830]: E0131 09:57:48.105627 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba5f29be_2d9d_479e_813f_b0795a1dab32.slice/crio-186cf348b34961866ef497abbbca36d449a0087b8cd6d10d0dfed104784337bb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba5f29be_2d9d_479e_813f_b0795a1dab32.slice\": RecentStats: unable to find data in memory cache]" Jan 31 09:57:56 crc kubenswrapper[4830]: E0131 09:57:56.474231 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba5f29be_2d9d_479e_813f_b0795a1dab32.slice/crio-186cf348b34961866ef497abbbca36d449a0087b8cd6d10d0dfed104784337bb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba5f29be_2d9d_479e_813f_b0795a1dab32.slice\": RecentStats: unable to find data in memory cache]" Jan 31 09:59:01 crc kubenswrapper[4830]: I0131 09:59:01.081980 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q994h"] Jan 31 09:59:01 crc kubenswrapper[4830]: E0131 09:59:01.083511 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba5f29be-2d9d-479e-813f-b0795a1dab32" containerName="extract-utilities" Jan 31 09:59:01 crc kubenswrapper[4830]: I0131 09:59:01.083529 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba5f29be-2d9d-479e-813f-b0795a1dab32" containerName="extract-utilities" Jan 31 09:59:01 crc kubenswrapper[4830]: E0131 09:59:01.083543 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba5f29be-2d9d-479e-813f-b0795a1dab32" containerName="registry-server" Jan 31 09:59:01 crc kubenswrapper[4830]: I0131 09:59:01.083550 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba5f29be-2d9d-479e-813f-b0795a1dab32" containerName="registry-server" Jan 31 09:59:01 crc kubenswrapper[4830]: E0131 09:59:01.083580 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba5f29be-2d9d-479e-813f-b0795a1dab32" containerName="extract-content" Jan 31 09:59:01 crc kubenswrapper[4830]: I0131 09:59:01.083586 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba5f29be-2d9d-479e-813f-b0795a1dab32" containerName="extract-content" Jan 31 09:59:01 crc kubenswrapper[4830]: I0131 09:59:01.083972 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba5f29be-2d9d-479e-813f-b0795a1dab32" containerName="registry-server" Jan 31 09:59:01 crc kubenswrapper[4830]: I0131 09:59:01.086933 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q994h" Jan 31 09:59:01 crc kubenswrapper[4830]: I0131 09:59:01.095118 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q994h"] Jan 31 09:59:01 crc kubenswrapper[4830]: I0131 09:59:01.146438 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf7047e-6ec8-47ff-bc32-8626d3d9fcba-catalog-content\") pod \"community-operators-q994h\" (UID: \"4bf7047e-6ec8-47ff-bc32-8626d3d9fcba\") " pod="openshift-marketplace/community-operators-q994h" Jan 31 09:59:01 crc kubenswrapper[4830]: I0131 09:59:01.146612 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf7047e-6ec8-47ff-bc32-8626d3d9fcba-utilities\") pod \"community-operators-q994h\" (UID: \"4bf7047e-6ec8-47ff-bc32-8626d3d9fcba\") " pod="openshift-marketplace/community-operators-q994h" Jan 31 09:59:01 crc kubenswrapper[4830]: I0131 09:59:01.146674 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjwjv\" (UniqueName: \"kubernetes.io/projected/4bf7047e-6ec8-47ff-bc32-8626d3d9fcba-kube-api-access-vjwjv\") pod \"community-operators-q994h\" (UID: \"4bf7047e-6ec8-47ff-bc32-8626d3d9fcba\") " pod="openshift-marketplace/community-operators-q994h" Jan 31 09:59:01 crc kubenswrapper[4830]: I0131 09:59:01.249227 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf7047e-6ec8-47ff-bc32-8626d3d9fcba-catalog-content\") pod \"community-operators-q994h\" (UID: \"4bf7047e-6ec8-47ff-bc32-8626d3d9fcba\") " pod="openshift-marketplace/community-operators-q994h" Jan 31 09:59:01 crc kubenswrapper[4830]: I0131 09:59:01.249663 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf7047e-6ec8-47ff-bc32-8626d3d9fcba-utilities\") pod \"community-operators-q994h\" (UID: \"4bf7047e-6ec8-47ff-bc32-8626d3d9fcba\") " pod="openshift-marketplace/community-operators-q994h" Jan 31 09:59:01 crc kubenswrapper[4830]: I0131 09:59:01.249706 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjwjv\" (UniqueName: \"kubernetes.io/projected/4bf7047e-6ec8-47ff-bc32-8626d3d9fcba-kube-api-access-vjwjv\") pod \"community-operators-q994h\" (UID: \"4bf7047e-6ec8-47ff-bc32-8626d3d9fcba\") " pod="openshift-marketplace/community-operators-q994h" Jan 31 09:59:01 crc kubenswrapper[4830]: I0131 09:59:01.249891 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf7047e-6ec8-47ff-bc32-8626d3d9fcba-catalog-content\") pod \"community-operators-q994h\" (UID: \"4bf7047e-6ec8-47ff-bc32-8626d3d9fcba\") " pod="openshift-marketplace/community-operators-q994h" Jan 31 09:59:01 crc kubenswrapper[4830]: I0131 09:59:01.250120 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf7047e-6ec8-47ff-bc32-8626d3d9fcba-utilities\") pod \"community-operators-q994h\" (UID: \"4bf7047e-6ec8-47ff-bc32-8626d3d9fcba\") " pod="openshift-marketplace/community-operators-q994h" Jan 31 09:59:01 crc kubenswrapper[4830]: I0131 09:59:01.280912 4830 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vjwjv\" (UniqueName: \"kubernetes.io/projected/4bf7047e-6ec8-47ff-bc32-8626d3d9fcba-kube-api-access-vjwjv\") pod \"community-operators-q994h\" (UID: \"4bf7047e-6ec8-47ff-bc32-8626d3d9fcba\") " pod="openshift-marketplace/community-operators-q994h" Jan 31 09:59:01 crc kubenswrapper[4830]: I0131 09:59:01.419334 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q994h" Jan 31 09:59:02 crc kubenswrapper[4830]: I0131 09:59:01.999895 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q994h"] Jan 31 09:59:02 crc kubenswrapper[4830]: I0131 09:59:02.927103 4830 generic.go:334] "Generic (PLEG): container finished" podID="4bf7047e-6ec8-47ff-bc32-8626d3d9fcba" containerID="2df4118f555df7269f8b8636d777d4d3214b2f71b28c53ba8c483f3c070e1199" exitCode=0 Jan 31 09:59:02 crc kubenswrapper[4830]: I0131 09:59:02.927828 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q994h" event={"ID":"4bf7047e-6ec8-47ff-bc32-8626d3d9fcba","Type":"ContainerDied","Data":"2df4118f555df7269f8b8636d777d4d3214b2f71b28c53ba8c483f3c070e1199"} Jan 31 09:59:02 crc kubenswrapper[4830]: I0131 09:59:02.927907 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q994h" event={"ID":"4bf7047e-6ec8-47ff-bc32-8626d3d9fcba","Type":"ContainerStarted","Data":"d4d7bcfb78330ad4c07a304571b1e284cee4270d78db97aeb7f3715504c4d544"} Jan 31 09:59:03 crc kubenswrapper[4830]: I0131 09:59:03.941758 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q994h" event={"ID":"4bf7047e-6ec8-47ff-bc32-8626d3d9fcba","Type":"ContainerStarted","Data":"9bb282f71f4897c90fbe9c3dd38c53e11ec24eb01d79f99ba559d0cc7fcfa068"} Jan 31 09:59:04 crc kubenswrapper[4830]: I0131 09:59:04.958404 4830 generic.go:334] "Generic (PLEG): container finished" podID="4bf7047e-6ec8-47ff-bc32-8626d3d9fcba" containerID="9bb282f71f4897c90fbe9c3dd38c53e11ec24eb01d79f99ba559d0cc7fcfa068" exitCode=0 Jan 31 09:59:04 crc kubenswrapper[4830]: I0131 09:59:04.958813 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q994h" event={"ID":"4bf7047e-6ec8-47ff-bc32-8626d3d9fcba","Type":"ContainerDied","Data":"9bb282f71f4897c90fbe9c3dd38c53e11ec24eb01d79f99ba559d0cc7fcfa068"} Jan 31 09:59:05 crc kubenswrapper[4830]: I0131 09:59:05.973143 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q994h" event={"ID":"4bf7047e-6ec8-47ff-bc32-8626d3d9fcba","Type":"ContainerStarted","Data":"a13bd14063b5cebaa334b525b88defb262adb5490db818349e1e711c030997cb"} Jan 31 09:59:06 crc kubenswrapper[4830]: I0131 09:59:06.001616 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q994h" podStartSLOduration=2.229682578 podStartE2EDuration="5.001588998s" podCreationTimestamp="2026-01-31 09:59:01 +0000 UTC" firstStartedPulling="2026-01-31 09:59:02.933134863 +0000 UTC m=+3487.426497305" lastFinishedPulling="2026-01-31 09:59:05.705041283 +0000 UTC m=+3490.198403725" observedRunningTime="2026-01-31 09:59:05.991592744 +0000 UTC m=+3490.484955186" watchObservedRunningTime="2026-01-31 09:59:06.001588998 +0000 UTC m=+3490.494951440" Jan 31 09:59:11 crc kubenswrapper[4830]: I0131 09:59:11.419700 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-q994h" Jan 31 09:59:11 crc kubenswrapper[4830]: I0131 09:59:11.420169 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q994h" Jan 31 09:59:11 crc kubenswrapper[4830]: I0131 09:59:11.468329 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q994h" Jan 31 09:59:12 crc kubenswrapper[4830]: I0131 09:59:12.089426 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q994h" Jan 31 09:59:12 crc kubenswrapper[4830]: I0131 09:59:12.140844 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q994h"] Jan 31 09:59:14 crc kubenswrapper[4830]: I0131 09:59:14.058920 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q994h" podUID="4bf7047e-6ec8-47ff-bc32-8626d3d9fcba" containerName="registry-server" containerID="cri-o://a13bd14063b5cebaa334b525b88defb262adb5490db818349e1e711c030997cb" gracePeriod=2 Jan 31 09:59:14 crc kubenswrapper[4830]: I0131 09:59:14.353477 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:59:14 crc kubenswrapper[4830]: I0131 09:59:14.353550 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:59:14 crc kubenswrapper[4830]: I0131 09:59:14.659956 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q994h" Jan 31 09:59:14 crc kubenswrapper[4830]: I0131 09:59:14.767619 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjwjv\" (UniqueName: \"kubernetes.io/projected/4bf7047e-6ec8-47ff-bc32-8626d3d9fcba-kube-api-access-vjwjv\") pod \"4bf7047e-6ec8-47ff-bc32-8626d3d9fcba\" (UID: \"4bf7047e-6ec8-47ff-bc32-8626d3d9fcba\") " Jan 31 09:59:14 crc kubenswrapper[4830]: I0131 09:59:14.768050 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf7047e-6ec8-47ff-bc32-8626d3d9fcba-catalog-content\") pod \"4bf7047e-6ec8-47ff-bc32-8626d3d9fcba\" (UID: \"4bf7047e-6ec8-47ff-bc32-8626d3d9fcba\") " Jan 31 09:59:14 crc kubenswrapper[4830]: I0131 09:59:14.768089 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf7047e-6ec8-47ff-bc32-8626d3d9fcba-utilities\") pod \"4bf7047e-6ec8-47ff-bc32-8626d3d9fcba\" (UID: \"4bf7047e-6ec8-47ff-bc32-8626d3d9fcba\") " Jan 31 09:59:14 crc kubenswrapper[4830]: I0131 09:59:14.769791 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf7047e-6ec8-47ff-bc32-8626d3d9fcba-utilities" (OuterVolumeSpecName: "utilities") pod "4bf7047e-6ec8-47ff-bc32-8626d3d9fcba" (UID: "4bf7047e-6ec8-47ff-bc32-8626d3d9fcba"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:59:14 crc kubenswrapper[4830]: I0131 09:59:14.776355 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf7047e-6ec8-47ff-bc32-8626d3d9fcba-kube-api-access-vjwjv" (OuterVolumeSpecName: "kube-api-access-vjwjv") pod "4bf7047e-6ec8-47ff-bc32-8626d3d9fcba" (UID: "4bf7047e-6ec8-47ff-bc32-8626d3d9fcba"). InnerVolumeSpecName "kube-api-access-vjwjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:59:14 crc kubenswrapper[4830]: I0131 09:59:14.836111 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf7047e-6ec8-47ff-bc32-8626d3d9fcba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4bf7047e-6ec8-47ff-bc32-8626d3d9fcba" (UID: "4bf7047e-6ec8-47ff-bc32-8626d3d9fcba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:59:14 crc kubenswrapper[4830]: I0131 09:59:14.871999 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf7047e-6ec8-47ff-bc32-8626d3d9fcba-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:59:14 crc kubenswrapper[4830]: I0131 09:59:14.872044 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf7047e-6ec8-47ff-bc32-8626d3d9fcba-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:59:14 crc kubenswrapper[4830]: I0131 09:59:14.872059 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjwjv\" (UniqueName: \"kubernetes.io/projected/4bf7047e-6ec8-47ff-bc32-8626d3d9fcba-kube-api-access-vjwjv\") on node \"crc\" DevicePath \"\"" Jan 31 09:59:15 crc kubenswrapper[4830]: I0131 09:59:15.071322 4830 generic.go:334] "Generic (PLEG): container finished" podID="4bf7047e-6ec8-47ff-bc32-8626d3d9fcba" containerID="a13bd14063b5cebaa334b525b88defb262adb5490db818349e1e711c030997cb" exitCode=0 Jan 31 09:59:15 crc kubenswrapper[4830]: I0131 09:59:15.071368 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q994h" event={"ID":"4bf7047e-6ec8-47ff-bc32-8626d3d9fcba","Type":"ContainerDied","Data":"a13bd14063b5cebaa334b525b88defb262adb5490db818349e1e711c030997cb"} Jan 31 09:59:15 crc kubenswrapper[4830]: I0131 09:59:15.071397 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q994h" event={"ID":"4bf7047e-6ec8-47ff-bc32-8626d3d9fcba","Type":"ContainerDied","Data":"d4d7bcfb78330ad4c07a304571b1e284cee4270d78db97aeb7f3715504c4d544"} Jan 31 09:59:15 crc kubenswrapper[4830]: I0131 09:59:15.071414 4830 scope.go:117] "RemoveContainer" containerID="a13bd14063b5cebaa334b525b88defb262adb5490db818349e1e711c030997cb" Jan 31 09:59:15 crc kubenswrapper[4830]: I0131 09:59:15.071574 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q994h" Jan 31 09:59:15 crc kubenswrapper[4830]: I0131 09:59:15.107251 4830 scope.go:117] "RemoveContainer" containerID="9bb282f71f4897c90fbe9c3dd38c53e11ec24eb01d79f99ba559d0cc7fcfa068" Jan 31 09:59:15 crc kubenswrapper[4830]: I0131 09:59:15.120858 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q994h"] Jan 31 09:59:15 crc kubenswrapper[4830]: I0131 09:59:15.137527 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q994h"] Jan 31 09:59:15 crc kubenswrapper[4830]: I0131 09:59:15.142849 4830 scope.go:117] "RemoveContainer" containerID="2df4118f555df7269f8b8636d777d4d3214b2f71b28c53ba8c483f3c070e1199" Jan 31 09:59:15 crc kubenswrapper[4830]: I0131 09:59:15.197016 4830 scope.go:117] "RemoveContainer" containerID="a13bd14063b5cebaa334b525b88defb262adb5490db818349e1e711c030997cb" Jan 31 09:59:15 crc kubenswrapper[4830]: E0131 09:59:15.202076 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a13bd14063b5cebaa334b525b88defb262adb5490db818349e1e711c030997cb\": container with ID starting with a13bd14063b5cebaa334b525b88defb262adb5490db818349e1e711c030997cb not found: ID does not exist" containerID="a13bd14063b5cebaa334b525b88defb262adb5490db818349e1e711c030997cb" Jan 31 09:59:15 crc kubenswrapper[4830]: I0131 09:59:15.202130 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a13bd14063b5cebaa334b525b88defb262adb5490db818349e1e711c030997cb"} err="failed to get container status \"a13bd14063b5cebaa334b525b88defb262adb5490db818349e1e711c030997cb\": rpc error: code = NotFound desc = could not find container \"a13bd14063b5cebaa334b525b88defb262adb5490db818349e1e711c030997cb\": container with ID starting with a13bd14063b5cebaa334b525b88defb262adb5490db818349e1e711c030997cb not found: ID does not exist" Jan 31 09:59:15 crc kubenswrapper[4830]: I0131 09:59:15.202163 4830 scope.go:117] "RemoveContainer" containerID="9bb282f71f4897c90fbe9c3dd38c53e11ec24eb01d79f99ba559d0cc7fcfa068" Jan 31 09:59:15 crc kubenswrapper[4830]: E0131 09:59:15.202587 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bb282f71f4897c90fbe9c3dd38c53e11ec24eb01d79f99ba559d0cc7fcfa068\": container with ID starting with 9bb282f71f4897c90fbe9c3dd38c53e11ec24eb01d79f99ba559d0cc7fcfa068 not found: ID does not exist" containerID="9bb282f71f4897c90fbe9c3dd38c53e11ec24eb01d79f99ba559d0cc7fcfa068" Jan 31 09:59:15 crc kubenswrapper[4830]: I0131 09:59:15.202615 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bb282f71f4897c90fbe9c3dd38c53e11ec24eb01d79f99ba559d0cc7fcfa068"} err="failed to get container status \"9bb282f71f4897c90fbe9c3dd38c53e11ec24eb01d79f99ba559d0cc7fcfa068\": rpc error: code = NotFound desc = could not find container \"9bb282f71f4897c90fbe9c3dd38c53e11ec24eb01d79f99ba559d0cc7fcfa068\": container with ID starting with 9bb282f71f4897c90fbe9c3dd38c53e11ec24eb01d79f99ba559d0cc7fcfa068 not found: ID does not exist" Jan 31 09:59:15 crc kubenswrapper[4830]: I0131 09:59:15.202632 4830 scope.go:117] "RemoveContainer" containerID="2df4118f555df7269f8b8636d777d4d3214b2f71b28c53ba8c483f3c070e1199" Jan 31 09:59:15 crc kubenswrapper[4830]: E0131 09:59:15.203028 4830 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2df4118f555df7269f8b8636d777d4d3214b2f71b28c53ba8c483f3c070e1199\": container with ID starting with 2df4118f555df7269f8b8636d777d4d3214b2f71b28c53ba8c483f3c070e1199 not found: ID does not exist" containerID="2df4118f555df7269f8b8636d777d4d3214b2f71b28c53ba8c483f3c070e1199" Jan 31 09:59:15 crc kubenswrapper[4830]: I0131 09:59:15.203058 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2df4118f555df7269f8b8636d777d4d3214b2f71b28c53ba8c483f3c070e1199"} err="failed to get container status \"2df4118f555df7269f8b8636d777d4d3214b2f71b28c53ba8c483f3c070e1199\": rpc error: code = NotFound desc = could not find container \"2df4118f555df7269f8b8636d777d4d3214b2f71b28c53ba8c483f3c070e1199\": container with ID starting with 2df4118f555df7269f8b8636d777d4d3214b2f71b28c53ba8c483f3c070e1199 not found: ID does not exist" Jan 31 09:59:16 crc kubenswrapper[4830]: I0131 09:59:16.264923 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bf7047e-6ec8-47ff-bc32-8626d3d9fcba" path="/var/lib/kubelet/pods/4bf7047e-6ec8-47ff-bc32-8626d3d9fcba/volumes" Jan 31 09:59:44 crc kubenswrapper[4830]: I0131 09:59:44.353233 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:59:44 crc kubenswrapper[4830]: I0131 09:59:44.353787 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 10:00:00 crc kubenswrapper[4830]: I0131 10:00:00.168360 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497560-zwwjl"] Jan 31 10:00:00 crc kubenswrapper[4830]: E0131 10:00:00.169658 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf7047e-6ec8-47ff-bc32-8626d3d9fcba" containerName="registry-server" Jan 31 10:00:00 crc kubenswrapper[4830]: I0131 10:00:00.169681 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf7047e-6ec8-47ff-bc32-8626d3d9fcba" containerName="registry-server" Jan 31 10:00:00 crc kubenswrapper[4830]: E0131 10:00:00.169738 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf7047e-6ec8-47ff-bc32-8626d3d9fcba" containerName="extract-utilities" Jan 31 10:00:00 crc kubenswrapper[4830]: I0131 10:00:00.169756 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf7047e-6ec8-47ff-bc32-8626d3d9fcba" containerName="extract-utilities" Jan 31 10:00:00 crc kubenswrapper[4830]: E0131 10:00:00.169774 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf7047e-6ec8-47ff-bc32-8626d3d9fcba" containerName="extract-content" Jan 31 10:00:00 crc kubenswrapper[4830]: I0131 10:00:00.169783 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf7047e-6ec8-47ff-bc32-8626d3d9fcba" containerName="extract-content" Jan 31 10:00:00 crc kubenswrapper[4830]: I0131 10:00:00.170077 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf7047e-6ec8-47ff-bc32-8626d3d9fcba" containerName="registry-server" Jan 31 10:00:00 crc 
kubenswrapper[4830]: I0131 10:00:00.171039 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497560-zwwjl" Jan 31 10:00:00 crc kubenswrapper[4830]: I0131 10:00:00.173653 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 10:00:00 crc kubenswrapper[4830]: I0131 10:00:00.173654 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 10:00:00 crc kubenswrapper[4830]: I0131 10:00:00.182617 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497560-zwwjl"] Jan 31 10:00:00 crc kubenswrapper[4830]: I0131 10:00:00.186530 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc281670-0a11-45d2-8463-657eaf396711-config-volume\") pod \"collect-profiles-29497560-zwwjl\" (UID: \"fc281670-0a11-45d2-8463-657eaf396711\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497560-zwwjl" Jan 31 10:00:00 crc kubenswrapper[4830]: I0131 10:00:00.187115 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4pr5\" (UniqueName: \"kubernetes.io/projected/fc281670-0a11-45d2-8463-657eaf396711-kube-api-access-z4pr5\") pod \"collect-profiles-29497560-zwwjl\" (UID: \"fc281670-0a11-45d2-8463-657eaf396711\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497560-zwwjl" Jan 31 10:00:00 crc kubenswrapper[4830]: I0131 10:00:00.187227 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc281670-0a11-45d2-8463-657eaf396711-secret-volume\") pod \"collect-profiles-29497560-zwwjl\" (UID: \"fc281670-0a11-45d2-8463-657eaf396711\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497560-zwwjl" Jan 31 10:00:00 crc kubenswrapper[4830]: I0131 10:00:00.291272 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4pr5\" (UniqueName: \"kubernetes.io/projected/fc281670-0a11-45d2-8463-657eaf396711-kube-api-access-z4pr5\") pod \"collect-profiles-29497560-zwwjl\" (UID: \"fc281670-0a11-45d2-8463-657eaf396711\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497560-zwwjl" Jan 31 10:00:00 crc kubenswrapper[4830]: I0131 10:00:00.291362 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc281670-0a11-45d2-8463-657eaf396711-secret-volume\") pod \"collect-profiles-29497560-zwwjl\" (UID: \"fc281670-0a11-45d2-8463-657eaf396711\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497560-zwwjl" Jan 31 10:00:00 crc kubenswrapper[4830]: I0131 10:00:00.291520 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc281670-0a11-45d2-8463-657eaf396711-config-volume\") pod \"collect-profiles-29497560-zwwjl\" (UID: \"fc281670-0a11-45d2-8463-657eaf396711\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497560-zwwjl" Jan 31 10:00:00 crc kubenswrapper[4830]: I0131 10:00:00.293224 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/fc281670-0a11-45d2-8463-657eaf396711-config-volume\") pod \"collect-profiles-29497560-zwwjl\" (UID: \"fc281670-0a11-45d2-8463-657eaf396711\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497560-zwwjl" Jan 31 10:00:00 crc kubenswrapper[4830]: I0131 10:00:00.302473 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc281670-0a11-45d2-8463-657eaf396711-secret-volume\") pod \"collect-profiles-29497560-zwwjl\" (UID: \"fc281670-0a11-45d2-8463-657eaf396711\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497560-zwwjl" Jan 31 10:00:00 crc kubenswrapper[4830]: I0131 10:00:00.309691 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4pr5\" (UniqueName: \"kubernetes.io/projected/fc281670-0a11-45d2-8463-657eaf396711-kube-api-access-z4pr5\") pod \"collect-profiles-29497560-zwwjl\" (UID: \"fc281670-0a11-45d2-8463-657eaf396711\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497560-zwwjl" Jan 31 10:00:00 crc kubenswrapper[4830]: I0131 10:00:00.522053 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497560-zwwjl" Jan 31 10:00:01 crc kubenswrapper[4830]: I0131 10:00:01.112800 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497560-zwwjl"] Jan 31 10:00:01 crc kubenswrapper[4830]: I0131 10:00:01.617965 4830 generic.go:334] "Generic (PLEG): container finished" podID="fc281670-0a11-45d2-8463-657eaf396711" containerID="82e61bfc26f5574f38dd9e925830e89ed159d9144f34a44a7a709077f5fef896" exitCode=0 Jan 31 10:00:01 crc kubenswrapper[4830]: I0131 10:00:01.618062 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497560-zwwjl" event={"ID":"fc281670-0a11-45d2-8463-657eaf396711","Type":"ContainerDied","Data":"82e61bfc26f5574f38dd9e925830e89ed159d9144f34a44a7a709077f5fef896"} Jan 31 10:00:01 crc kubenswrapper[4830]: I0131 10:00:01.618285 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497560-zwwjl" event={"ID":"fc281670-0a11-45d2-8463-657eaf396711","Type":"ContainerStarted","Data":"605dbb1c8cc0ac86023eb2976315f815c83a0fe7d78d7faefe3c3fcee447c042"} Jan 31 10:00:03 crc kubenswrapper[4830]: I0131 10:00:03.121646 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497560-zwwjl" Jan 31 10:00:03 crc kubenswrapper[4830]: I0131 10:00:03.224327 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc281670-0a11-45d2-8463-657eaf396711-config-volume\") pod \"fc281670-0a11-45d2-8463-657eaf396711\" (UID: \"fc281670-0a11-45d2-8463-657eaf396711\") " Jan 31 10:00:03 crc kubenswrapper[4830]: I0131 10:00:03.243590 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc281670-0a11-45d2-8463-657eaf396711-config-volume" (OuterVolumeSpecName: "config-volume") pod "fc281670-0a11-45d2-8463-657eaf396711" (UID: "fc281670-0a11-45d2-8463-657eaf396711"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 10:00:03 crc kubenswrapper[4830]: I0131 10:00:03.338120 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc281670-0a11-45d2-8463-657eaf396711-secret-volume\") pod \"fc281670-0a11-45d2-8463-657eaf396711\" (UID: \"fc281670-0a11-45d2-8463-657eaf396711\") " Jan 31 10:00:03 crc kubenswrapper[4830]: I0131 10:00:03.338184 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4pr5\" (UniqueName: \"kubernetes.io/projected/fc281670-0a11-45d2-8463-657eaf396711-kube-api-access-z4pr5\") pod \"fc281670-0a11-45d2-8463-657eaf396711\" (UID: \"fc281670-0a11-45d2-8463-657eaf396711\") " Jan 31 10:00:03 crc kubenswrapper[4830]: I0131 10:00:03.339365 4830 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc281670-0a11-45d2-8463-657eaf396711-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 10:00:03 crc kubenswrapper[4830]: I0131 10:00:03.374013 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc281670-0a11-45d2-8463-657eaf396711-kube-api-access-z4pr5" (OuterVolumeSpecName: "kube-api-access-z4pr5") pod "fc281670-0a11-45d2-8463-657eaf396711" (UID: "fc281670-0a11-45d2-8463-657eaf396711"). InnerVolumeSpecName "kube-api-access-z4pr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 10:00:03 crc kubenswrapper[4830]: I0131 10:00:03.377777 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc281670-0a11-45d2-8463-657eaf396711-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fc281670-0a11-45d2-8463-657eaf396711" (UID: "fc281670-0a11-45d2-8463-657eaf396711"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 10:00:03 crc kubenswrapper[4830]: I0131 10:00:03.441349 4830 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc281670-0a11-45d2-8463-657eaf396711-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 10:00:03 crc kubenswrapper[4830]: I0131 10:00:03.441396 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4pr5\" (UniqueName: \"kubernetes.io/projected/fc281670-0a11-45d2-8463-657eaf396711-kube-api-access-z4pr5\") on node \"crc\" DevicePath \"\"" Jan 31 10:00:03 crc kubenswrapper[4830]: I0131 10:00:03.641308 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497560-zwwjl" event={"ID":"fc281670-0a11-45d2-8463-657eaf396711","Type":"ContainerDied","Data":"605dbb1c8cc0ac86023eb2976315f815c83a0fe7d78d7faefe3c3fcee447c042"} Jan 31 10:00:03 crc kubenswrapper[4830]: I0131 10:00:03.641363 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="605dbb1c8cc0ac86023eb2976315f815c83a0fe7d78d7faefe3c3fcee447c042" Jan 31 10:00:03 crc kubenswrapper[4830]: I0131 10:00:03.641375 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497560-zwwjl" Jan 31 10:00:04 crc kubenswrapper[4830]: I0131 10:00:04.218697 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497515-qx8rx"] Jan 31 10:00:04 crc kubenswrapper[4830]: I0131 10:00:04.230404 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497515-qx8rx"] Jan 31 10:00:04 crc kubenswrapper[4830]: I0131 10:00:04.271698 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e9b9ae0-0c92-4992-b4cb-44bb51f84c45" path="/var/lib/kubelet/pods/3e9b9ae0-0c92-4992-b4cb-44bb51f84c45/volumes" Jan 31 10:00:14 crc kubenswrapper[4830]: I0131 10:00:14.352786 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 10:00:14 crc kubenswrapper[4830]: I0131 10:00:14.353408 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 10:00:14 crc kubenswrapper[4830]: I0131 10:00:14.353457 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 10:00:14 crc kubenswrapper[4830]: I0131 10:00:14.354574 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163"} pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 10:00:14 crc kubenswrapper[4830]: I0131 10:00:14.354648 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" containerID="cri-o://9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" gracePeriod=600 Jan 31 10:00:14 crc kubenswrapper[4830]: E0131 10:00:14.473583 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:00:14 crc kubenswrapper[4830]: I0131 10:00:14.734438 4830 scope.go:117] "RemoveContainer" containerID="ecaf8ff9f3fbe10cd06a1d1a00ef6ef6c4ccadc02179c3b8c2fd96256ecaeb80" Jan 31 10:00:14 crc kubenswrapper[4830]: I0131 10:00:14.754371 4830 generic.go:334] "Generic (PLEG): container finished" podID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" exitCode=0 Jan 31 10:00:14 crc kubenswrapper[4830]: I0131 10:00:14.754422 4830 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerDied","Data":"9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163"} Jan 31 10:00:14 crc kubenswrapper[4830]: I0131 10:00:14.754466 4830 scope.go:117] "RemoveContainer" containerID="22b789d2ef559ce66600680e06b87ef5f548352affe5608d41b78430df090d48" Jan 31 10:00:14 crc kubenswrapper[4830]: I0131 10:00:14.755325 4830 scope.go:117] "RemoveContainer" containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" Jan 31 10:00:14 crc kubenswrapper[4830]: E0131 10:00:14.755775 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:00:29 crc kubenswrapper[4830]: I0131 10:00:29.252023 4830 scope.go:117] "RemoveContainer" containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" Jan 31 10:00:29 crc kubenswrapper[4830]: E0131 10:00:29.252797 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:00:42 crc kubenswrapper[4830]: I0131 10:00:42.037376 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8h5jr"] Jan 31 10:00:42 crc kubenswrapper[4830]: E0131 10:00:42.038657 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc281670-0a11-45d2-8463-657eaf396711" containerName="collect-profiles" Jan 31 10:00:42 crc kubenswrapper[4830]: I0131 10:00:42.038674 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc281670-0a11-45d2-8463-657eaf396711" containerName="collect-profiles" Jan 31 10:00:42 crc kubenswrapper[4830]: I0131 10:00:42.038986 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc281670-0a11-45d2-8463-657eaf396711" containerName="collect-profiles" Jan 31 10:00:42 crc kubenswrapper[4830]: I0131 10:00:42.041549 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8h5jr" Jan 31 10:00:42 crc kubenswrapper[4830]: I0131 10:00:42.061459 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8h5jr"] Jan 31 10:00:42 crc kubenswrapper[4830]: I0131 10:00:42.109089 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7004b5b-402e-4411-b2eb-5f1c274b460e-catalog-content\") pod \"certified-operators-8h5jr\" (UID: \"d7004b5b-402e-4411-b2eb-5f1c274b460e\") " pod="openshift-marketplace/certified-operators-8h5jr" Jan 31 10:00:42 crc kubenswrapper[4830]: I0131 10:00:42.109349 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c9hs\" (UniqueName: \"kubernetes.io/projected/d7004b5b-402e-4411-b2eb-5f1c274b460e-kube-api-access-5c9hs\") pod \"certified-operators-8h5jr\" (UID: \"d7004b5b-402e-4411-b2eb-5f1c274b460e\") " pod="openshift-marketplace/certified-operators-8h5jr" Jan 31 10:00:42 crc kubenswrapper[4830]: I0131 10:00:42.109427 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7004b5b-402e-4411-b2eb-5f1c274b460e-utilities\") pod \"certified-operators-8h5jr\" (UID: \"d7004b5b-402e-4411-b2eb-5f1c274b460e\") " pod="openshift-marketplace/certified-operators-8h5jr" Jan 31 10:00:42 crc kubenswrapper[4830]: I0131 10:00:42.212469 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7004b5b-402e-4411-b2eb-5f1c274b460e-catalog-content\") pod \"certified-operators-8h5jr\" (UID: \"d7004b5b-402e-4411-b2eb-5f1c274b460e\") " pod="openshift-marketplace/certified-operators-8h5jr" Jan 31 10:00:42 crc kubenswrapper[4830]: I0131 10:00:42.212667 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c9hs\" (UniqueName: \"kubernetes.io/projected/d7004b5b-402e-4411-b2eb-5f1c274b460e-kube-api-access-5c9hs\") pod \"certified-operators-8h5jr\" (UID: \"d7004b5b-402e-4411-b2eb-5f1c274b460e\") " pod="openshift-marketplace/certified-operators-8h5jr" Jan 31 10:00:42 crc kubenswrapper[4830]: I0131 10:00:42.212759 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7004b5b-402e-4411-b2eb-5f1c274b460e-utilities\") pod \"certified-operators-8h5jr\" (UID: \"d7004b5b-402e-4411-b2eb-5f1c274b460e\") " pod="openshift-marketplace/certified-operators-8h5jr" Jan 31 10:00:42 crc kubenswrapper[4830]: I0131 10:00:42.213125 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7004b5b-402e-4411-b2eb-5f1c274b460e-catalog-content\") pod \"certified-operators-8h5jr\" (UID: \"d7004b5b-402e-4411-b2eb-5f1c274b460e\") " pod="openshift-marketplace/certified-operators-8h5jr" Jan 31 10:00:42 crc kubenswrapper[4830]: I0131 10:00:42.213235 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7004b5b-402e-4411-b2eb-5f1c274b460e-utilities\") pod \"certified-operators-8h5jr\" (UID: \"d7004b5b-402e-4411-b2eb-5f1c274b460e\") " pod="openshift-marketplace/certified-operators-8h5jr" Jan 31 10:00:42 crc kubenswrapper[4830]: I0131 10:00:42.248849 4830 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5c9hs\" (UniqueName: \"kubernetes.io/projected/d7004b5b-402e-4411-b2eb-5f1c274b460e-kube-api-access-5c9hs\") pod \"certified-operators-8h5jr\" (UID: \"d7004b5b-402e-4411-b2eb-5f1c274b460e\") " pod="openshift-marketplace/certified-operators-8h5jr" Jan 31 10:00:42 crc kubenswrapper[4830]: I0131 10:00:42.251703 4830 scope.go:117] "RemoveContainer" containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" Jan 31 10:00:42 crc kubenswrapper[4830]: E0131 10:00:42.252289 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:00:42 crc kubenswrapper[4830]: I0131 10:00:42.373071 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8h5jr" Jan 31 10:00:43 crc kubenswrapper[4830]: I0131 10:00:43.002938 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8h5jr"] Jan 31 10:00:43 crc kubenswrapper[4830]: I0131 10:00:43.117653 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8h5jr" event={"ID":"d7004b5b-402e-4411-b2eb-5f1c274b460e","Type":"ContainerStarted","Data":"206b1be46c8b2f1d959575b47e5150cc984c1446b9d6e173e3353b0d09742339"} Jan 31 10:00:44 crc kubenswrapper[4830]: I0131 10:00:44.129900 4830 generic.go:334] "Generic (PLEG): container finished" podID="d7004b5b-402e-4411-b2eb-5f1c274b460e" containerID="1db3d0e86b1a1c3f5a58e6b21946209337e0fb1554d7426c704f10c95437399b" exitCode=0 Jan 31 10:00:44 crc kubenswrapper[4830]: I0131 10:00:44.129981 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8h5jr" event={"ID":"d7004b5b-402e-4411-b2eb-5f1c274b460e","Type":"ContainerDied","Data":"1db3d0e86b1a1c3f5a58e6b21946209337e0fb1554d7426c704f10c95437399b"} Jan 31 10:00:44 crc kubenswrapper[4830]: I0131 10:00:44.133379 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 10:00:45 crc kubenswrapper[4830]: I0131 10:00:45.141833 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8h5jr" event={"ID":"d7004b5b-402e-4411-b2eb-5f1c274b460e","Type":"ContainerStarted","Data":"3540021e2a8987cb4d8be1eb5458e235c7355638e78c51d7875407cd2ad2ad47"} Jan 31 10:00:47 crc kubenswrapper[4830]: I0131 10:00:47.163986 4830 generic.go:334] "Generic (PLEG): container finished" podID="d7004b5b-402e-4411-b2eb-5f1c274b460e" containerID="3540021e2a8987cb4d8be1eb5458e235c7355638e78c51d7875407cd2ad2ad47" exitCode=0 Jan 31 10:00:47 crc kubenswrapper[4830]: I0131 10:00:47.164089 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8h5jr" event={"ID":"d7004b5b-402e-4411-b2eb-5f1c274b460e","Type":"ContainerDied","Data":"3540021e2a8987cb4d8be1eb5458e235c7355638e78c51d7875407cd2ad2ad47"} Jan 31 10:00:48 crc kubenswrapper[4830]: I0131 10:00:48.190968 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8h5jr" 
event={"ID":"d7004b5b-402e-4411-b2eb-5f1c274b460e","Type":"ContainerStarted","Data":"0de821cc6848ee2bc72731c54d4a4167853ada6cfc8ca89ea85e9ecaa05d4557"} Jan 31 10:00:48 crc kubenswrapper[4830]: I0131 10:00:48.227666 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8h5jr" podStartSLOduration=2.810404439 podStartE2EDuration="6.227639468s" podCreationTimestamp="2026-01-31 10:00:42 +0000 UTC" firstStartedPulling="2026-01-31 10:00:44.133085961 +0000 UTC m=+3588.626448403" lastFinishedPulling="2026-01-31 10:00:47.55032099 +0000 UTC m=+3592.043683432" observedRunningTime="2026-01-31 10:00:48.221947816 +0000 UTC m=+3592.715310258" watchObservedRunningTime="2026-01-31 10:00:48.227639468 +0000 UTC m=+3592.721001910" Jan 31 10:00:52 crc kubenswrapper[4830]: I0131 10:00:52.373647 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8h5jr" Jan 31 10:00:52 crc kubenswrapper[4830]: I0131 10:00:52.374306 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8h5jr" Jan 31 10:00:53 crc kubenswrapper[4830]: I0131 10:00:53.251901 4830 scope.go:117] "RemoveContainer" containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" Jan 31 10:00:53 crc kubenswrapper[4830]: E0131 10:00:53.252290 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:00:53 crc kubenswrapper[4830]: I0131 10:00:53.424097 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8h5jr" podUID="d7004b5b-402e-4411-b2eb-5f1c274b460e" containerName="registry-server" probeResult="failure" output=< Jan 31 10:00:53 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:00:53 crc kubenswrapper[4830]: > Jan 31 10:01:00 crc kubenswrapper[4830]: I0131 10:01:00.177750 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29497561-mfqc8"] Jan 31 10:01:00 crc kubenswrapper[4830]: I0131 10:01:00.180520 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29497561-mfqc8" Jan 31 10:01:00 crc kubenswrapper[4830]: I0131 10:01:00.193747 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29497561-mfqc8"] Jan 31 10:01:00 crc kubenswrapper[4830]: I0131 10:01:00.249007 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f472dd66-301e-4ce7-8279-6cec24c432c7-combined-ca-bundle\") pod \"keystone-cron-29497561-mfqc8\" (UID: \"f472dd66-301e-4ce7-8279-6cec24c432c7\") " pod="openstack/keystone-cron-29497561-mfqc8" Jan 31 10:01:00 crc kubenswrapper[4830]: I0131 10:01:00.249090 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f472dd66-301e-4ce7-8279-6cec24c432c7-config-data\") pod \"keystone-cron-29497561-mfqc8\" (UID: \"f472dd66-301e-4ce7-8279-6cec24c432c7\") " pod="openstack/keystone-cron-29497561-mfqc8" Jan 31 10:01:00 crc kubenswrapper[4830]: I0131 10:01:00.249777 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxddc\" (UniqueName: \"kubernetes.io/projected/f472dd66-301e-4ce7-8279-6cec24c432c7-kube-api-access-pxddc\") pod \"keystone-cron-29497561-mfqc8\" (UID: \"f472dd66-301e-4ce7-8279-6cec24c432c7\") " pod="openstack/keystone-cron-29497561-mfqc8" Jan 31 10:01:00 crc kubenswrapper[4830]: I0131 10:01:00.249918 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f472dd66-301e-4ce7-8279-6cec24c432c7-fernet-keys\") pod \"keystone-cron-29497561-mfqc8\" (UID: \"f472dd66-301e-4ce7-8279-6cec24c432c7\") " pod="openstack/keystone-cron-29497561-mfqc8" Jan 31 10:01:00 crc kubenswrapper[4830]: I0131 10:01:00.352702 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxddc\" (UniqueName: \"kubernetes.io/projected/f472dd66-301e-4ce7-8279-6cec24c432c7-kube-api-access-pxddc\") pod \"keystone-cron-29497561-mfqc8\" (UID: \"f472dd66-301e-4ce7-8279-6cec24c432c7\") " pod="openstack/keystone-cron-29497561-mfqc8" Jan 31 10:01:00 crc kubenswrapper[4830]: I0131 10:01:00.352805 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f472dd66-301e-4ce7-8279-6cec24c432c7-fernet-keys\") pod \"keystone-cron-29497561-mfqc8\" (UID: \"f472dd66-301e-4ce7-8279-6cec24c432c7\") " pod="openstack/keystone-cron-29497561-mfqc8" Jan 31 10:01:00 crc kubenswrapper[4830]: I0131 10:01:00.352926 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f472dd66-301e-4ce7-8279-6cec24c432c7-combined-ca-bundle\") pod \"keystone-cron-29497561-mfqc8\" (UID: \"f472dd66-301e-4ce7-8279-6cec24c432c7\") " pod="openstack/keystone-cron-29497561-mfqc8" Jan 31 10:01:00 crc kubenswrapper[4830]: I0131 10:01:00.352988 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f472dd66-301e-4ce7-8279-6cec24c432c7-config-data\") pod \"keystone-cron-29497561-mfqc8\" (UID: \"f472dd66-301e-4ce7-8279-6cec24c432c7\") " pod="openstack/keystone-cron-29497561-mfqc8" Jan 31 10:01:00 crc kubenswrapper[4830]: I0131 10:01:00.364233 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f472dd66-301e-4ce7-8279-6cec24c432c7-fernet-keys\") pod \"keystone-cron-29497561-mfqc8\" (UID: \"f472dd66-301e-4ce7-8279-6cec24c432c7\") " pod="openstack/keystone-cron-29497561-mfqc8" Jan 31 10:01:00 crc kubenswrapper[4830]: I0131 10:01:00.364569 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f472dd66-301e-4ce7-8279-6cec24c432c7-combined-ca-bundle\") pod \"keystone-cron-29497561-mfqc8\" (UID: \"f472dd66-301e-4ce7-8279-6cec24c432c7\") " pod="openstack/keystone-cron-29497561-mfqc8" Jan 31 10:01:00 crc kubenswrapper[4830]: I0131 10:01:00.372955 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f472dd66-301e-4ce7-8279-6cec24c432c7-config-data\") pod \"keystone-cron-29497561-mfqc8\" (UID: \"f472dd66-301e-4ce7-8279-6cec24c432c7\") " pod="openstack/keystone-cron-29497561-mfqc8" Jan 31 10:01:00 crc kubenswrapper[4830]: I0131 10:01:00.379670 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxddc\" (UniqueName: \"kubernetes.io/projected/f472dd66-301e-4ce7-8279-6cec24c432c7-kube-api-access-pxddc\") pod \"keystone-cron-29497561-mfqc8\" (UID: \"f472dd66-301e-4ce7-8279-6cec24c432c7\") " pod="openstack/keystone-cron-29497561-mfqc8" Jan 31 10:01:00 crc kubenswrapper[4830]: I0131 10:01:00.505970 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29497561-mfqc8" Jan 31 10:01:00 crc kubenswrapper[4830]: I0131 10:01:00.985157 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29497561-mfqc8"] Jan 31 10:01:01 crc kubenswrapper[4830]: I0131 10:01:01.374388 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497561-mfqc8" event={"ID":"f472dd66-301e-4ce7-8279-6cec24c432c7","Type":"ContainerStarted","Data":"703f5f6ed54f316c01f98e682bc73940bb3bb16f6271c8743a011451dcba56a3"} Jan 31 10:01:01 crc kubenswrapper[4830]: I0131 10:01:01.376989 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497561-mfqc8" event={"ID":"f472dd66-301e-4ce7-8279-6cec24c432c7","Type":"ContainerStarted","Data":"51f5a3b02e2f1a50705c8d754f10bb86162d71e22c1ac8f2bd70646d791cdd60"} Jan 31 10:01:01 crc kubenswrapper[4830]: I0131 10:01:01.404015 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29497561-mfqc8" podStartSLOduration=1.403979713 podStartE2EDuration="1.403979713s" podCreationTimestamp="2026-01-31 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 10:01:01.39368238 +0000 UTC m=+3605.887044822" watchObservedRunningTime="2026-01-31 10:01:01.403979713 +0000 UTC m=+3605.897342155" Jan 31 10:01:02 crc kubenswrapper[4830]: I0131 10:01:02.430120 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8h5jr" Jan 31 10:01:02 crc kubenswrapper[4830]: I0131 10:01:02.489934 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8h5jr" Jan 31 10:01:03 crc kubenswrapper[4830]: I0131 10:01:03.705654 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8h5jr"] Jan 31 10:01:04 crc kubenswrapper[4830]: I0131 
10:01:04.414243 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8h5jr" podUID="d7004b5b-402e-4411-b2eb-5f1c274b460e" containerName="registry-server" containerID="cri-o://0de821cc6848ee2bc72731c54d4a4167853ada6cfc8ca89ea85e9ecaa05d4557" gracePeriod=2 Jan 31 10:01:04 crc kubenswrapper[4830]: I0131 10:01:04.975475 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8h5jr" Jan 31 10:01:05 crc kubenswrapper[4830]: I0131 10:01:05.003523 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c9hs\" (UniqueName: \"kubernetes.io/projected/d7004b5b-402e-4411-b2eb-5f1c274b460e-kube-api-access-5c9hs\") pod \"d7004b5b-402e-4411-b2eb-5f1c274b460e\" (UID: \"d7004b5b-402e-4411-b2eb-5f1c274b460e\") " Jan 31 10:01:05 crc kubenswrapper[4830]: I0131 10:01:05.003890 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7004b5b-402e-4411-b2eb-5f1c274b460e-catalog-content\") pod \"d7004b5b-402e-4411-b2eb-5f1c274b460e\" (UID: \"d7004b5b-402e-4411-b2eb-5f1c274b460e\") " Jan 31 10:01:05 crc kubenswrapper[4830]: I0131 10:01:05.004177 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7004b5b-402e-4411-b2eb-5f1c274b460e-utilities\") pod \"d7004b5b-402e-4411-b2eb-5f1c274b460e\" (UID: \"d7004b5b-402e-4411-b2eb-5f1c274b460e\") " Jan 31 10:01:05 crc kubenswrapper[4830]: I0131 10:01:05.004746 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7004b5b-402e-4411-b2eb-5f1c274b460e-utilities" (OuterVolumeSpecName: "utilities") pod "d7004b5b-402e-4411-b2eb-5f1c274b460e" (UID: "d7004b5b-402e-4411-b2eb-5f1c274b460e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:01:05 crc kubenswrapper[4830]: I0131 10:01:05.006464 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7004b5b-402e-4411-b2eb-5f1c274b460e-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 10:01:05 crc kubenswrapper[4830]: I0131 10:01:05.013112 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7004b5b-402e-4411-b2eb-5f1c274b460e-kube-api-access-5c9hs" (OuterVolumeSpecName: "kube-api-access-5c9hs") pod "d7004b5b-402e-4411-b2eb-5f1c274b460e" (UID: "d7004b5b-402e-4411-b2eb-5f1c274b460e"). InnerVolumeSpecName "kube-api-access-5c9hs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 10:01:05 crc kubenswrapper[4830]: I0131 10:01:05.052132 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7004b5b-402e-4411-b2eb-5f1c274b460e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7004b5b-402e-4411-b2eb-5f1c274b460e" (UID: "d7004b5b-402e-4411-b2eb-5f1c274b460e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:01:05 crc kubenswrapper[4830]: I0131 10:01:05.110016 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7004b5b-402e-4411-b2eb-5f1c274b460e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 10:01:05 crc kubenswrapper[4830]: I0131 10:01:05.110378 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c9hs\" (UniqueName: \"kubernetes.io/projected/d7004b5b-402e-4411-b2eb-5f1c274b460e-kube-api-access-5c9hs\") on node \"crc\" DevicePath \"\"" Jan 31 10:01:05 crc kubenswrapper[4830]: I0131 10:01:05.428040 4830 generic.go:334] "Generic (PLEG): container finished" podID="f472dd66-301e-4ce7-8279-6cec24c432c7" containerID="703f5f6ed54f316c01f98e682bc73940bb3bb16f6271c8743a011451dcba56a3" exitCode=0 Jan 31 10:01:05 crc kubenswrapper[4830]: I0131 10:01:05.428128 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497561-mfqc8" event={"ID":"f472dd66-301e-4ce7-8279-6cec24c432c7","Type":"ContainerDied","Data":"703f5f6ed54f316c01f98e682bc73940bb3bb16f6271c8743a011451dcba56a3"} Jan 31 10:01:05 crc kubenswrapper[4830]: I0131 10:01:05.431889 4830 generic.go:334] "Generic (PLEG): container finished" podID="d7004b5b-402e-4411-b2eb-5f1c274b460e" containerID="0de821cc6848ee2bc72731c54d4a4167853ada6cfc8ca89ea85e9ecaa05d4557" exitCode=0 Jan 31 10:01:05 crc kubenswrapper[4830]: I0131 10:01:05.431942 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8h5jr" event={"ID":"d7004b5b-402e-4411-b2eb-5f1c274b460e","Type":"ContainerDied","Data":"0de821cc6848ee2bc72731c54d4a4167853ada6cfc8ca89ea85e9ecaa05d4557"} Jan 31 10:01:05 crc kubenswrapper[4830]: I0131 10:01:05.431983 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8h5jr" event={"ID":"d7004b5b-402e-4411-b2eb-5f1c274b460e","Type":"ContainerDied","Data":"206b1be46c8b2f1d959575b47e5150cc984c1446b9d6e173e3353b0d09742339"} Jan 31 10:01:05 crc kubenswrapper[4830]: I0131 10:01:05.432006 4830 scope.go:117] "RemoveContainer" containerID="0de821cc6848ee2bc72731c54d4a4167853ada6cfc8ca89ea85e9ecaa05d4557" Jan 31 10:01:05 crc kubenswrapper[4830]: I0131 10:01:05.432159 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8h5jr" Jan 31 10:01:05 crc kubenswrapper[4830]: I0131 10:01:05.471580 4830 scope.go:117] "RemoveContainer" containerID="3540021e2a8987cb4d8be1eb5458e235c7355638e78c51d7875407cd2ad2ad47" Jan 31 10:01:05 crc kubenswrapper[4830]: I0131 10:01:05.487341 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8h5jr"] Jan 31 10:01:05 crc kubenswrapper[4830]: I0131 10:01:05.497189 4830 scope.go:117] "RemoveContainer" containerID="1db3d0e86b1a1c3f5a58e6b21946209337e0fb1554d7426c704f10c95437399b" Jan 31 10:01:05 crc kubenswrapper[4830]: I0131 10:01:05.499896 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8h5jr"] Jan 31 10:01:05 crc kubenswrapper[4830]: I0131 10:01:05.551185 4830 scope.go:117] "RemoveContainer" containerID="0de821cc6848ee2bc72731c54d4a4167853ada6cfc8ca89ea85e9ecaa05d4557" Jan 31 10:01:05 crc kubenswrapper[4830]: E0131 10:01:05.551751 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0de821cc6848ee2bc72731c54d4a4167853ada6cfc8ca89ea85e9ecaa05d4557\": container with ID starting with 0de821cc6848ee2bc72731c54d4a4167853ada6cfc8ca89ea85e9ecaa05d4557 not found: ID does not exist" containerID="0de821cc6848ee2bc72731c54d4a4167853ada6cfc8ca89ea85e9ecaa05d4557" Jan 31 10:01:05 crc kubenswrapper[4830]: I0131 10:01:05.551792 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0de821cc6848ee2bc72731c54d4a4167853ada6cfc8ca89ea85e9ecaa05d4557"} err="failed to get container status \"0de821cc6848ee2bc72731c54d4a4167853ada6cfc8ca89ea85e9ecaa05d4557\": rpc error: code = NotFound desc = could not find container \"0de821cc6848ee2bc72731c54d4a4167853ada6cfc8ca89ea85e9ecaa05d4557\": container with ID starting with 0de821cc6848ee2bc72731c54d4a4167853ada6cfc8ca89ea85e9ecaa05d4557 not found: ID does not exist" Jan 31 10:01:05 crc kubenswrapper[4830]: I0131 10:01:05.551816 4830 scope.go:117] "RemoveContainer" containerID="3540021e2a8987cb4d8be1eb5458e235c7355638e78c51d7875407cd2ad2ad47" Jan 31 10:01:05 crc kubenswrapper[4830]: E0131 10:01:05.552199 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3540021e2a8987cb4d8be1eb5458e235c7355638e78c51d7875407cd2ad2ad47\": container with ID starting with 3540021e2a8987cb4d8be1eb5458e235c7355638e78c51d7875407cd2ad2ad47 not found: ID does not exist" containerID="3540021e2a8987cb4d8be1eb5458e235c7355638e78c51d7875407cd2ad2ad47" Jan 31 10:01:05 crc kubenswrapper[4830]: I0131 10:01:05.552226 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3540021e2a8987cb4d8be1eb5458e235c7355638e78c51d7875407cd2ad2ad47"} err="failed to get container status \"3540021e2a8987cb4d8be1eb5458e235c7355638e78c51d7875407cd2ad2ad47\": rpc error: code = NotFound desc = could not find container \"3540021e2a8987cb4d8be1eb5458e235c7355638e78c51d7875407cd2ad2ad47\": container with ID starting with 3540021e2a8987cb4d8be1eb5458e235c7355638e78c51d7875407cd2ad2ad47 not found: ID does not exist" Jan 31 10:01:05 crc kubenswrapper[4830]: I0131 10:01:05.552255 4830 scope.go:117] "RemoveContainer" containerID="1db3d0e86b1a1c3f5a58e6b21946209337e0fb1554d7426c704f10c95437399b" Jan 31 10:01:05 crc kubenswrapper[4830]: E0131 10:01:05.552707 4830 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1db3d0e86b1a1c3f5a58e6b21946209337e0fb1554d7426c704f10c95437399b\": container with ID starting with 1db3d0e86b1a1c3f5a58e6b21946209337e0fb1554d7426c704f10c95437399b not found: ID does not exist" containerID="1db3d0e86b1a1c3f5a58e6b21946209337e0fb1554d7426c704f10c95437399b" Jan 31 10:01:05 crc kubenswrapper[4830]: I0131 10:01:05.552749 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1db3d0e86b1a1c3f5a58e6b21946209337e0fb1554d7426c704f10c95437399b"} err="failed to get container status \"1db3d0e86b1a1c3f5a58e6b21946209337e0fb1554d7426c704f10c95437399b\": rpc error: code = NotFound desc = could not find container \"1db3d0e86b1a1c3f5a58e6b21946209337e0fb1554d7426c704f10c95437399b\": container with ID starting with 1db3d0e86b1a1c3f5a58e6b21946209337e0fb1554d7426c704f10c95437399b not found: ID does not exist" Jan 31 10:01:06 crc kubenswrapper[4830]: I0131 10:01:06.267786 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7004b5b-402e-4411-b2eb-5f1c274b460e" path="/var/lib/kubelet/pods/d7004b5b-402e-4411-b2eb-5f1c274b460e/volumes" Jan 31 10:01:06 crc kubenswrapper[4830]: I0131 10:01:06.909920 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29497561-mfqc8" Jan 31 10:01:06 crc kubenswrapper[4830]: I0131 10:01:06.964311 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxddc\" (UniqueName: \"kubernetes.io/projected/f472dd66-301e-4ce7-8279-6cec24c432c7-kube-api-access-pxddc\") pod \"f472dd66-301e-4ce7-8279-6cec24c432c7\" (UID: \"f472dd66-301e-4ce7-8279-6cec24c432c7\") " Jan 31 10:01:06 crc kubenswrapper[4830]: I0131 10:01:06.964421 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f472dd66-301e-4ce7-8279-6cec24c432c7-combined-ca-bundle\") pod \"f472dd66-301e-4ce7-8279-6cec24c432c7\" (UID: \"f472dd66-301e-4ce7-8279-6cec24c432c7\") " Jan 31 10:01:06 crc kubenswrapper[4830]: I0131 10:01:06.964553 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f472dd66-301e-4ce7-8279-6cec24c432c7-fernet-keys\") pod \"f472dd66-301e-4ce7-8279-6cec24c432c7\" (UID: \"f472dd66-301e-4ce7-8279-6cec24c432c7\") " Jan 31 10:01:06 crc kubenswrapper[4830]: I0131 10:01:06.964895 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f472dd66-301e-4ce7-8279-6cec24c432c7-config-data\") pod \"f472dd66-301e-4ce7-8279-6cec24c432c7\" (UID: \"f472dd66-301e-4ce7-8279-6cec24c432c7\") " Jan 31 10:01:06 crc kubenswrapper[4830]: I0131 10:01:06.973127 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f472dd66-301e-4ce7-8279-6cec24c432c7-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f472dd66-301e-4ce7-8279-6cec24c432c7" (UID: "f472dd66-301e-4ce7-8279-6cec24c432c7"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 10:01:06 crc kubenswrapper[4830]: I0131 10:01:06.976140 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f472dd66-301e-4ce7-8279-6cec24c432c7-kube-api-access-pxddc" (OuterVolumeSpecName: "kube-api-access-pxddc") pod "f472dd66-301e-4ce7-8279-6cec24c432c7" (UID: "f472dd66-301e-4ce7-8279-6cec24c432c7"). InnerVolumeSpecName "kube-api-access-pxddc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 10:01:07 crc kubenswrapper[4830]: I0131 10:01:07.008540 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f472dd66-301e-4ce7-8279-6cec24c432c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f472dd66-301e-4ce7-8279-6cec24c432c7" (UID: "f472dd66-301e-4ce7-8279-6cec24c432c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 10:01:07 crc kubenswrapper[4830]: I0131 10:01:07.042508 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f472dd66-301e-4ce7-8279-6cec24c432c7-config-data" (OuterVolumeSpecName: "config-data") pod "f472dd66-301e-4ce7-8279-6cec24c432c7" (UID: "f472dd66-301e-4ce7-8279-6cec24c432c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 10:01:07 crc kubenswrapper[4830]: I0131 10:01:07.069844 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f472dd66-301e-4ce7-8279-6cec24c432c7-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 10:01:07 crc kubenswrapper[4830]: I0131 10:01:07.069924 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxddc\" (UniqueName: \"kubernetes.io/projected/f472dd66-301e-4ce7-8279-6cec24c432c7-kube-api-access-pxddc\") on node \"crc\" DevicePath \"\"" Jan 31 10:01:07 crc kubenswrapper[4830]: I0131 10:01:07.069943 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f472dd66-301e-4ce7-8279-6cec24c432c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 10:01:07 crc kubenswrapper[4830]: I0131 10:01:07.069954 4830 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f472dd66-301e-4ce7-8279-6cec24c432c7-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 10:01:07 crc kubenswrapper[4830]: I0131 10:01:07.454525 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497561-mfqc8" event={"ID":"f472dd66-301e-4ce7-8279-6cec24c432c7","Type":"ContainerDied","Data":"51f5a3b02e2f1a50705c8d754f10bb86162d71e22c1ac8f2bd70646d791cdd60"} Jan 31 10:01:07 crc kubenswrapper[4830]: I0131 10:01:07.454588 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51f5a3b02e2f1a50705c8d754f10bb86162d71e22c1ac8f2bd70646d791cdd60" Jan 31 10:01:07 crc kubenswrapper[4830]: I0131 10:01:07.454670 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29497561-mfqc8" Jan 31 10:01:08 crc kubenswrapper[4830]: I0131 10:01:08.320586 4830 scope.go:117] "RemoveContainer" containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" Jan 31 10:01:08 crc kubenswrapper[4830]: E0131 10:01:08.321250 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:01:20 crc kubenswrapper[4830]: I0131 10:01:20.251561 4830 scope.go:117] "RemoveContainer" containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" Jan 31 10:01:20 crc kubenswrapper[4830]: E0131 10:01:20.252374 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:01:32 crc kubenswrapper[4830]: I0131 10:01:32.252590 4830 scope.go:117] "RemoveContainer" containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" Jan 31 10:01:32 crc kubenswrapper[4830]: E0131 10:01:32.253821 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:01:44 crc kubenswrapper[4830]: I0131 10:01:44.251405 4830 scope.go:117] "RemoveContainer" containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" Jan 31 10:01:44 crc kubenswrapper[4830]: E0131 10:01:44.252259 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:01:56 crc kubenswrapper[4830]: I0131 10:01:56.263168 4830 scope.go:117] "RemoveContainer" containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" Jan 31 10:01:56 crc kubenswrapper[4830]: E0131 10:01:56.263874 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:02:09 crc kubenswrapper[4830]: I0131 10:02:09.252028 4830 scope.go:117] "RemoveContainer" 
containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" Jan 31 10:02:09 crc kubenswrapper[4830]: E0131 10:02:09.252960 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:02:21 crc kubenswrapper[4830]: I0131 10:02:21.252205 4830 scope.go:117] "RemoveContainer" containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" Jan 31 10:02:21 crc kubenswrapper[4830]: E0131 10:02:21.253189 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:02:36 crc kubenswrapper[4830]: I0131 10:02:36.264911 4830 scope.go:117] "RemoveContainer" containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" Jan 31 10:02:36 crc kubenswrapper[4830]: E0131 10:02:36.265796 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:02:47 crc kubenswrapper[4830]: I0131 10:02:47.254203 4830 scope.go:117] "RemoveContainer" containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" Jan 31 10:02:47 crc kubenswrapper[4830]: E0131 10:02:47.255023 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:03:00 crc kubenswrapper[4830]: I0131 10:03:00.252224 4830 scope.go:117] "RemoveContainer" containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" Jan 31 10:03:00 crc kubenswrapper[4830]: E0131 10:03:00.253187 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:03:13 crc kubenswrapper[4830]: I0131 10:03:13.251701 4830 scope.go:117] "RemoveContainer" containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" Jan 31 10:03:13 crc kubenswrapper[4830]: E0131 10:03:13.252842 4830 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:03:28 crc kubenswrapper[4830]: I0131 10:03:28.253164 4830 scope.go:117] "RemoveContainer" containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" Jan 31 10:03:28 crc kubenswrapper[4830]: E0131 10:03:28.254347 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:03:43 crc kubenswrapper[4830]: I0131 10:03:43.252243 4830 scope.go:117] "RemoveContainer" containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" Jan 31 10:03:43 crc kubenswrapper[4830]: E0131 10:03:43.253295 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:03:56 crc kubenswrapper[4830]: I0131 10:03:56.262047 4830 scope.go:117] "RemoveContainer" containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" Jan 31 10:03:56 crc kubenswrapper[4830]: E0131 10:03:56.263617 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:04:07 crc kubenswrapper[4830]: I0131 10:04:07.251890 4830 scope.go:117] "RemoveContainer" containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" Jan 31 10:04:07 crc kubenswrapper[4830]: E0131 10:04:07.252915 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:04:20 crc kubenswrapper[4830]: I0131 10:04:20.251681 4830 scope.go:117] "RemoveContainer" containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" Jan 31 10:04:20 crc kubenswrapper[4830]: E0131 10:04:20.252503 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:04:31 crc kubenswrapper[4830]: I0131 10:04:31.252023 4830 scope.go:117] "RemoveContainer" containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" Jan 31 10:04:31 crc kubenswrapper[4830]: E0131 10:04:31.252862 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:04:45 crc kubenswrapper[4830]: I0131 10:04:45.251953 4830 scope.go:117] "RemoveContainer" containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" Jan 31 10:04:45 crc kubenswrapper[4830]: E0131 10:04:45.254327 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:05:00 crc kubenswrapper[4830]: I0131 10:05:00.252693 4830 scope.go:117] "RemoveContainer" containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" Jan 31 10:05:00 crc kubenswrapper[4830]: E0131 10:05:00.254991 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:05:12 crc kubenswrapper[4830]: I0131 10:05:12.252814 4830 scope.go:117] "RemoveContainer" containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" Jan 31 10:05:12 crc kubenswrapper[4830]: E0131 10:05:12.255401 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:05:23 crc kubenswrapper[4830]: I0131 10:05:23.252877 4830 scope.go:117] "RemoveContainer" containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" Jan 31 10:05:24 crc kubenswrapper[4830]: I0131 10:05:24.340886 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerStarted","Data":"0b092ca3bb9b9a206ab1c26c23ac27ed470668236cc5c21830f6134f0e65d665"} Jan 31 10:05:49 crc kubenswrapper[4830]: I0131 10:05:49.403184 4830 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-hkx7r"] Jan 31 10:05:49 crc kubenswrapper[4830]: E0131 10:05:49.404362 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7004b5b-402e-4411-b2eb-5f1c274b460e" containerName="registry-server" Jan 31 10:05:49 crc kubenswrapper[4830]: I0131 10:05:49.404378 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7004b5b-402e-4411-b2eb-5f1c274b460e" containerName="registry-server" Jan 31 10:05:49 crc kubenswrapper[4830]: E0131 10:05:49.404390 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7004b5b-402e-4411-b2eb-5f1c274b460e" containerName="extract-content" Jan 31 10:05:49 crc kubenswrapper[4830]: I0131 10:05:49.404396 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7004b5b-402e-4411-b2eb-5f1c274b460e" containerName="extract-content" Jan 31 10:05:49 crc kubenswrapper[4830]: E0131 10:05:49.404408 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7004b5b-402e-4411-b2eb-5f1c274b460e" containerName="extract-utilities" Jan 31 10:05:49 crc kubenswrapper[4830]: I0131 10:05:49.404415 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7004b5b-402e-4411-b2eb-5f1c274b460e" containerName="extract-utilities" Jan 31 10:05:49 crc kubenswrapper[4830]: E0131 10:05:49.404443 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f472dd66-301e-4ce7-8279-6cec24c432c7" containerName="keystone-cron" Jan 31 10:05:49 crc kubenswrapper[4830]: I0131 10:05:49.404449 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f472dd66-301e-4ce7-8279-6cec24c432c7" containerName="keystone-cron" Jan 31 10:05:49 crc kubenswrapper[4830]: I0131 10:05:49.404700 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7004b5b-402e-4411-b2eb-5f1c274b460e" containerName="registry-server" Jan 31 10:05:49 crc kubenswrapper[4830]: I0131 10:05:49.408288 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f472dd66-301e-4ce7-8279-6cec24c432c7" containerName="keystone-cron" Jan 31 10:05:49 crc kubenswrapper[4830]: I0131 10:05:49.411644 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hkx7r" Jan 31 10:05:49 crc kubenswrapper[4830]: I0131 10:05:49.423350 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hkx7r"] Jan 31 10:05:49 crc kubenswrapper[4830]: I0131 10:05:49.482384 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03a21edf-85fe-4e36-b113-1b61a7492cf2-utilities\") pod \"redhat-operators-hkx7r\" (UID: \"03a21edf-85fe-4e36-b113-1b61a7492cf2\") " pod="openshift-marketplace/redhat-operators-hkx7r" Jan 31 10:05:49 crc kubenswrapper[4830]: I0131 10:05:49.482574 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28wwx\" (UniqueName: \"kubernetes.io/projected/03a21edf-85fe-4e36-b113-1b61a7492cf2-kube-api-access-28wwx\") pod \"redhat-operators-hkx7r\" (UID: \"03a21edf-85fe-4e36-b113-1b61a7492cf2\") " pod="openshift-marketplace/redhat-operators-hkx7r" Jan 31 10:05:49 crc kubenswrapper[4830]: I0131 10:05:49.482744 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03a21edf-85fe-4e36-b113-1b61a7492cf2-catalog-content\") pod \"redhat-operators-hkx7r\" (UID: \"03a21edf-85fe-4e36-b113-1b61a7492cf2\") " pod="openshift-marketplace/redhat-operators-hkx7r" Jan 31 10:05:49 crc kubenswrapper[4830]: I0131 10:05:49.584967 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03a21edf-85fe-4e36-b113-1b61a7492cf2-catalog-content\") pod \"redhat-operators-hkx7r\" (UID: \"03a21edf-85fe-4e36-b113-1b61a7492cf2\") " pod="openshift-marketplace/redhat-operators-hkx7r" Jan 31 10:05:49 crc kubenswrapper[4830]: I0131 10:05:49.585448 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03a21edf-85fe-4e36-b113-1b61a7492cf2-utilities\") pod \"redhat-operators-hkx7r\" (UID: \"03a21edf-85fe-4e36-b113-1b61a7492cf2\") " pod="openshift-marketplace/redhat-operators-hkx7r" Jan 31 10:05:49 crc kubenswrapper[4830]: I0131 10:05:49.585569 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28wwx\" (UniqueName: \"kubernetes.io/projected/03a21edf-85fe-4e36-b113-1b61a7492cf2-kube-api-access-28wwx\") pod \"redhat-operators-hkx7r\" (UID: \"03a21edf-85fe-4e36-b113-1b61a7492cf2\") " pod="openshift-marketplace/redhat-operators-hkx7r" Jan 31 10:05:49 crc kubenswrapper[4830]: I0131 10:05:49.585682 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03a21edf-85fe-4e36-b113-1b61a7492cf2-catalog-content\") pod \"redhat-operators-hkx7r\" (UID: \"03a21edf-85fe-4e36-b113-1b61a7492cf2\") " pod="openshift-marketplace/redhat-operators-hkx7r" Jan 31 10:05:49 crc kubenswrapper[4830]: I0131 10:05:49.585931 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03a21edf-85fe-4e36-b113-1b61a7492cf2-utilities\") pod \"redhat-operators-hkx7r\" (UID: \"03a21edf-85fe-4e36-b113-1b61a7492cf2\") " pod="openshift-marketplace/redhat-operators-hkx7r" Jan 31 10:05:49 crc kubenswrapper[4830]: I0131 10:05:49.613785 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-28wwx\" (UniqueName: \"kubernetes.io/projected/03a21edf-85fe-4e36-b113-1b61a7492cf2-kube-api-access-28wwx\") pod \"redhat-operators-hkx7r\" (UID: \"03a21edf-85fe-4e36-b113-1b61a7492cf2\") " pod="openshift-marketplace/redhat-operators-hkx7r" Jan 31 10:05:49 crc kubenswrapper[4830]: I0131 10:05:49.738006 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hkx7r" Jan 31 10:05:50 crc kubenswrapper[4830]: I0131 10:05:50.336404 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hkx7r"] Jan 31 10:05:50 crc kubenswrapper[4830]: I0131 10:05:50.660675 4830 generic.go:334] "Generic (PLEG): container finished" podID="03a21edf-85fe-4e36-b113-1b61a7492cf2" containerID="93dc46397d65bcf9ee1a54b797798d2b09b3bbd32b8fa820700e8bf14dfa9454" exitCode=0 Jan 31 10:05:50 crc kubenswrapper[4830]: I0131 10:05:50.660780 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkx7r" event={"ID":"03a21edf-85fe-4e36-b113-1b61a7492cf2","Type":"ContainerDied","Data":"93dc46397d65bcf9ee1a54b797798d2b09b3bbd32b8fa820700e8bf14dfa9454"} Jan 31 10:05:50 crc kubenswrapper[4830]: I0131 10:05:50.661032 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkx7r" event={"ID":"03a21edf-85fe-4e36-b113-1b61a7492cf2","Type":"ContainerStarted","Data":"dcaadd51249a9f037be9c5069d64f19333ff07a18d6fd5debd1dc9330da07bdc"} Jan 31 10:05:50 crc kubenswrapper[4830]: I0131 10:05:50.662512 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 10:05:51 crc kubenswrapper[4830]: I0131 10:05:51.672776 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkx7r" event={"ID":"03a21edf-85fe-4e36-b113-1b61a7492cf2","Type":"ContainerStarted","Data":"f02c599efedcf7d653a4ccd8bf146935362f977cd72cc1f01a8b40f673745b16"} Jan 31 10:05:56 crc kubenswrapper[4830]: I0131 10:05:56.743192 4830 generic.go:334] "Generic (PLEG): container finished" podID="03a21edf-85fe-4e36-b113-1b61a7492cf2" containerID="f02c599efedcf7d653a4ccd8bf146935362f977cd72cc1f01a8b40f673745b16" exitCode=0 Jan 31 10:05:56 crc kubenswrapper[4830]: I0131 10:05:56.743273 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkx7r" event={"ID":"03a21edf-85fe-4e36-b113-1b61a7492cf2","Type":"ContainerDied","Data":"f02c599efedcf7d653a4ccd8bf146935362f977cd72cc1f01a8b40f673745b16"} Jan 31 10:05:57 crc kubenswrapper[4830]: I0131 10:05:57.801583 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkx7r" event={"ID":"03a21edf-85fe-4e36-b113-1b61a7492cf2","Type":"ContainerStarted","Data":"4e61ed21084e2ac380705cff992c73333135a9c89ae75c29c9f46fd418d3afba"} Jan 31 10:05:57 crc kubenswrapper[4830]: I0131 10:05:57.835123 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hkx7r" podStartSLOduration=2.363830316 podStartE2EDuration="8.835107509s" podCreationTimestamp="2026-01-31 10:05:49 +0000 UTC" firstStartedPulling="2026-01-31 10:05:50.662292717 +0000 UTC m=+3895.155655159" lastFinishedPulling="2026-01-31 10:05:57.13356991 +0000 UTC m=+3901.626932352" observedRunningTime="2026-01-31 10:05:57.832365031 +0000 UTC m=+3902.325727473" watchObservedRunningTime="2026-01-31 10:05:57.835107509 +0000 UTC m=+3902.328469951" Jan 31 10:05:59 crc 
kubenswrapper[4830]: I0131 10:05:59.738702 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hkx7r" Jan 31 10:05:59 crc kubenswrapper[4830]: I0131 10:05:59.740520 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hkx7r" Jan 31 10:06:00 crc kubenswrapper[4830]: I0131 10:06:00.797949 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hkx7r" podUID="03a21edf-85fe-4e36-b113-1b61a7492cf2" containerName="registry-server" probeResult="failure" output=< Jan 31 10:06:00 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:06:00 crc kubenswrapper[4830]: > Jan 31 10:06:10 crc kubenswrapper[4830]: I0131 10:06:10.836757 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hkx7r" podUID="03a21edf-85fe-4e36-b113-1b61a7492cf2" containerName="registry-server" probeResult="failure" output=< Jan 31 10:06:10 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:06:10 crc kubenswrapper[4830]: > Jan 31 10:06:19 crc kubenswrapper[4830]: I0131 10:06:19.807349 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hkx7r" Jan 31 10:06:19 crc kubenswrapper[4830]: I0131 10:06:19.874778 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hkx7r" Jan 31 10:06:20 crc kubenswrapper[4830]: I0131 10:06:20.606352 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hkx7r"] Jan 31 10:06:21 crc kubenswrapper[4830]: I0131 10:06:21.092264 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hkx7r" podUID="03a21edf-85fe-4e36-b113-1b61a7492cf2" containerName="registry-server" containerID="cri-o://4e61ed21084e2ac380705cff992c73333135a9c89ae75c29c9f46fd418d3afba" gracePeriod=2 Jan 31 10:06:21 crc kubenswrapper[4830]: I0131 10:06:21.792681 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hkx7r" Jan 31 10:06:21 crc kubenswrapper[4830]: I0131 10:06:21.920782 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03a21edf-85fe-4e36-b113-1b61a7492cf2-utilities\") pod \"03a21edf-85fe-4e36-b113-1b61a7492cf2\" (UID: \"03a21edf-85fe-4e36-b113-1b61a7492cf2\") " Jan 31 10:06:21 crc kubenswrapper[4830]: I0131 10:06:21.920908 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28wwx\" (UniqueName: \"kubernetes.io/projected/03a21edf-85fe-4e36-b113-1b61a7492cf2-kube-api-access-28wwx\") pod \"03a21edf-85fe-4e36-b113-1b61a7492cf2\" (UID: \"03a21edf-85fe-4e36-b113-1b61a7492cf2\") " Jan 31 10:06:21 crc kubenswrapper[4830]: I0131 10:06:21.921099 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03a21edf-85fe-4e36-b113-1b61a7492cf2-catalog-content\") pod \"03a21edf-85fe-4e36-b113-1b61a7492cf2\" (UID: \"03a21edf-85fe-4e36-b113-1b61a7492cf2\") " Jan 31 10:06:21 crc kubenswrapper[4830]: I0131 10:06:21.921640 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03a21edf-85fe-4e36-b113-1b61a7492cf2-utilities" (OuterVolumeSpecName: "utilities") pod "03a21edf-85fe-4e36-b113-1b61a7492cf2" (UID: "03a21edf-85fe-4e36-b113-1b61a7492cf2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:06:21 crc kubenswrapper[4830]: I0131 10:06:21.922603 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03a21edf-85fe-4e36-b113-1b61a7492cf2-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 10:06:21 crc kubenswrapper[4830]: I0131 10:06:21.929483 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03a21edf-85fe-4e36-b113-1b61a7492cf2-kube-api-access-28wwx" (OuterVolumeSpecName: "kube-api-access-28wwx") pod "03a21edf-85fe-4e36-b113-1b61a7492cf2" (UID: "03a21edf-85fe-4e36-b113-1b61a7492cf2"). InnerVolumeSpecName "kube-api-access-28wwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 10:06:22 crc kubenswrapper[4830]: I0131 10:06:22.024794 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28wwx\" (UniqueName: \"kubernetes.io/projected/03a21edf-85fe-4e36-b113-1b61a7492cf2-kube-api-access-28wwx\") on node \"crc\" DevicePath \"\"" Jan 31 10:06:22 crc kubenswrapper[4830]: I0131 10:06:22.058165 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03a21edf-85fe-4e36-b113-1b61a7492cf2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03a21edf-85fe-4e36-b113-1b61a7492cf2" (UID: "03a21edf-85fe-4e36-b113-1b61a7492cf2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:06:22 crc kubenswrapper[4830]: I0131 10:06:22.107769 4830 generic.go:334] "Generic (PLEG): container finished" podID="03a21edf-85fe-4e36-b113-1b61a7492cf2" containerID="4e61ed21084e2ac380705cff992c73333135a9c89ae75c29c9f46fd418d3afba" exitCode=0 Jan 31 10:06:22 crc kubenswrapper[4830]: I0131 10:06:22.107824 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkx7r" event={"ID":"03a21edf-85fe-4e36-b113-1b61a7492cf2","Type":"ContainerDied","Data":"4e61ed21084e2ac380705cff992c73333135a9c89ae75c29c9f46fd418d3afba"} Jan 31 10:06:22 crc kubenswrapper[4830]: I0131 10:06:22.107857 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hkx7r" Jan 31 10:06:22 crc kubenswrapper[4830]: I0131 10:06:22.107881 4830 scope.go:117] "RemoveContainer" containerID="4e61ed21084e2ac380705cff992c73333135a9c89ae75c29c9f46fd418d3afba" Jan 31 10:06:22 crc kubenswrapper[4830]: I0131 10:06:22.107865 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkx7r" event={"ID":"03a21edf-85fe-4e36-b113-1b61a7492cf2","Type":"ContainerDied","Data":"dcaadd51249a9f037be9c5069d64f19333ff07a18d6fd5debd1dc9330da07bdc"} Jan 31 10:06:22 crc kubenswrapper[4830]: I0131 10:06:22.127619 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03a21edf-85fe-4e36-b113-1b61a7492cf2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 10:06:22 crc kubenswrapper[4830]: I0131 10:06:22.137944 4830 scope.go:117] "RemoveContainer" containerID="f02c599efedcf7d653a4ccd8bf146935362f977cd72cc1f01a8b40f673745b16" Jan 31 10:06:22 crc kubenswrapper[4830]: I0131 10:06:22.154988 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hkx7r"] Jan 31 10:06:22 crc kubenswrapper[4830]: I0131 10:06:22.162167 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hkx7r"] Jan 31 10:06:22 crc kubenswrapper[4830]: I0131 10:06:22.177083 4830 scope.go:117] "RemoveContainer" containerID="93dc46397d65bcf9ee1a54b797798d2b09b3bbd32b8fa820700e8bf14dfa9454" Jan 31 10:06:22 crc kubenswrapper[4830]: I0131 10:06:22.211941 4830 scope.go:117] "RemoveContainer" containerID="4e61ed21084e2ac380705cff992c73333135a9c89ae75c29c9f46fd418d3afba" Jan 31 10:06:22 crc kubenswrapper[4830]: E0131 10:06:22.212430 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e61ed21084e2ac380705cff992c73333135a9c89ae75c29c9f46fd418d3afba\": container with ID starting with 4e61ed21084e2ac380705cff992c73333135a9c89ae75c29c9f46fd418d3afba not found: ID does not exist" containerID="4e61ed21084e2ac380705cff992c73333135a9c89ae75c29c9f46fd418d3afba" Jan 31 10:06:22 crc kubenswrapper[4830]: I0131 10:06:22.212492 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e61ed21084e2ac380705cff992c73333135a9c89ae75c29c9f46fd418d3afba"} err="failed to get container status \"4e61ed21084e2ac380705cff992c73333135a9c89ae75c29c9f46fd418d3afba\": rpc error: code = NotFound desc = could not find container \"4e61ed21084e2ac380705cff992c73333135a9c89ae75c29c9f46fd418d3afba\": container with ID starting with 4e61ed21084e2ac380705cff992c73333135a9c89ae75c29c9f46fd418d3afba not found: ID does not exist" Jan 31 10:06:22 crc 
kubenswrapper[4830]: I0131 10:06:22.212527 4830 scope.go:117] "RemoveContainer" containerID="f02c599efedcf7d653a4ccd8bf146935362f977cd72cc1f01a8b40f673745b16" Jan 31 10:06:22 crc kubenswrapper[4830]: E0131 10:06:22.212967 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f02c599efedcf7d653a4ccd8bf146935362f977cd72cc1f01a8b40f673745b16\": container with ID starting with f02c599efedcf7d653a4ccd8bf146935362f977cd72cc1f01a8b40f673745b16 not found: ID does not exist" containerID="f02c599efedcf7d653a4ccd8bf146935362f977cd72cc1f01a8b40f673745b16" Jan 31 10:06:22 crc kubenswrapper[4830]: I0131 10:06:22.213000 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f02c599efedcf7d653a4ccd8bf146935362f977cd72cc1f01a8b40f673745b16"} err="failed to get container status \"f02c599efedcf7d653a4ccd8bf146935362f977cd72cc1f01a8b40f673745b16\": rpc error: code = NotFound desc = could not find container \"f02c599efedcf7d653a4ccd8bf146935362f977cd72cc1f01a8b40f673745b16\": container with ID starting with f02c599efedcf7d653a4ccd8bf146935362f977cd72cc1f01a8b40f673745b16 not found: ID does not exist" Jan 31 10:06:22 crc kubenswrapper[4830]: I0131 10:06:22.213020 4830 scope.go:117] "RemoveContainer" containerID="93dc46397d65bcf9ee1a54b797798d2b09b3bbd32b8fa820700e8bf14dfa9454" Jan 31 10:06:22 crc kubenswrapper[4830]: E0131 10:06:22.213277 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93dc46397d65bcf9ee1a54b797798d2b09b3bbd32b8fa820700e8bf14dfa9454\": container with ID starting with 93dc46397d65bcf9ee1a54b797798d2b09b3bbd32b8fa820700e8bf14dfa9454 not found: ID does not exist" containerID="93dc46397d65bcf9ee1a54b797798d2b09b3bbd32b8fa820700e8bf14dfa9454" Jan 31 10:06:22 crc kubenswrapper[4830]: I0131 10:06:22.213308 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93dc46397d65bcf9ee1a54b797798d2b09b3bbd32b8fa820700e8bf14dfa9454"} err="failed to get container status \"93dc46397d65bcf9ee1a54b797798d2b09b3bbd32b8fa820700e8bf14dfa9454\": rpc error: code = NotFound desc = could not find container \"93dc46397d65bcf9ee1a54b797798d2b09b3bbd32b8fa820700e8bf14dfa9454\": container with ID starting with 93dc46397d65bcf9ee1a54b797798d2b09b3bbd32b8fa820700e8bf14dfa9454 not found: ID does not exist" Jan 31 10:06:22 crc kubenswrapper[4830]: I0131 10:06:22.274869 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03a21edf-85fe-4e36-b113-1b61a7492cf2" path="/var/lib/kubelet/pods/03a21edf-85fe-4e36-b113-1b61a7492cf2/volumes" Jan 31 10:07:44 crc kubenswrapper[4830]: I0131 10:07:44.353177 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 10:07:44 crc kubenswrapper[4830]: I0131 10:07:44.353860 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 10:08:00 crc kubenswrapper[4830]: I0131 10:08:00.677865 4830 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-marketplace-wdhbd"] Jan 31 10:08:00 crc kubenswrapper[4830]: E0131 10:08:00.679108 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a21edf-85fe-4e36-b113-1b61a7492cf2" containerName="extract-content" Jan 31 10:08:00 crc kubenswrapper[4830]: I0131 10:08:00.679128 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a21edf-85fe-4e36-b113-1b61a7492cf2" containerName="extract-content" Jan 31 10:08:00 crc kubenswrapper[4830]: E0131 10:08:00.679192 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a21edf-85fe-4e36-b113-1b61a7492cf2" containerName="extract-utilities" Jan 31 10:08:00 crc kubenswrapper[4830]: I0131 10:08:00.679200 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a21edf-85fe-4e36-b113-1b61a7492cf2" containerName="extract-utilities" Jan 31 10:08:00 crc kubenswrapper[4830]: E0131 10:08:00.679212 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a21edf-85fe-4e36-b113-1b61a7492cf2" containerName="registry-server" Jan 31 10:08:00 crc kubenswrapper[4830]: I0131 10:08:00.679222 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a21edf-85fe-4e36-b113-1b61a7492cf2" containerName="registry-server" Jan 31 10:08:00 crc kubenswrapper[4830]: I0131 10:08:00.679522 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a21edf-85fe-4e36-b113-1b61a7492cf2" containerName="registry-server" Jan 31 10:08:00 crc kubenswrapper[4830]: I0131 10:08:00.682374 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wdhbd" Jan 31 10:08:00 crc kubenswrapper[4830]: I0131 10:08:00.696476 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wdhbd"] Jan 31 10:08:00 crc kubenswrapper[4830]: I0131 10:08:00.786650 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5546b0b-b961-426c-9183-8a5ad6ad11d6-utilities\") pod \"redhat-marketplace-wdhbd\" (UID: \"e5546b0b-b961-426c-9183-8a5ad6ad11d6\") " pod="openshift-marketplace/redhat-marketplace-wdhbd" Jan 31 10:08:00 crc kubenswrapper[4830]: I0131 10:08:00.786709 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n77bk\" (UniqueName: \"kubernetes.io/projected/e5546b0b-b961-426c-9183-8a5ad6ad11d6-kube-api-access-n77bk\") pod \"redhat-marketplace-wdhbd\" (UID: \"e5546b0b-b961-426c-9183-8a5ad6ad11d6\") " pod="openshift-marketplace/redhat-marketplace-wdhbd" Jan 31 10:08:00 crc kubenswrapper[4830]: I0131 10:08:00.787524 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5546b0b-b961-426c-9183-8a5ad6ad11d6-catalog-content\") pod \"redhat-marketplace-wdhbd\" (UID: \"e5546b0b-b961-426c-9183-8a5ad6ad11d6\") " pod="openshift-marketplace/redhat-marketplace-wdhbd" Jan 31 10:08:00 crc kubenswrapper[4830]: I0131 10:08:00.890541 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5546b0b-b961-426c-9183-8a5ad6ad11d6-utilities\") pod \"redhat-marketplace-wdhbd\" (UID: \"e5546b0b-b961-426c-9183-8a5ad6ad11d6\") " pod="openshift-marketplace/redhat-marketplace-wdhbd" Jan 31 10:08:00 crc kubenswrapper[4830]: I0131 10:08:00.890597 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-n77bk\" (UniqueName: \"kubernetes.io/projected/e5546b0b-b961-426c-9183-8a5ad6ad11d6-kube-api-access-n77bk\") pod \"redhat-marketplace-wdhbd\" (UID: \"e5546b0b-b961-426c-9183-8a5ad6ad11d6\") " pod="openshift-marketplace/redhat-marketplace-wdhbd" Jan 31 10:08:00 crc kubenswrapper[4830]: I0131 10:08:00.890891 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5546b0b-b961-426c-9183-8a5ad6ad11d6-catalog-content\") pod \"redhat-marketplace-wdhbd\" (UID: \"e5546b0b-b961-426c-9183-8a5ad6ad11d6\") " pod="openshift-marketplace/redhat-marketplace-wdhbd" Jan 31 10:08:00 crc kubenswrapper[4830]: I0131 10:08:00.891279 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5546b0b-b961-426c-9183-8a5ad6ad11d6-utilities\") pod \"redhat-marketplace-wdhbd\" (UID: \"e5546b0b-b961-426c-9183-8a5ad6ad11d6\") " pod="openshift-marketplace/redhat-marketplace-wdhbd" Jan 31 10:08:00 crc kubenswrapper[4830]: I0131 10:08:00.891438 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5546b0b-b961-426c-9183-8a5ad6ad11d6-catalog-content\") pod \"redhat-marketplace-wdhbd\" (UID: \"e5546b0b-b961-426c-9183-8a5ad6ad11d6\") " pod="openshift-marketplace/redhat-marketplace-wdhbd" Jan 31 10:08:00 crc kubenswrapper[4830]: I0131 10:08:00.910689 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n77bk\" (UniqueName: \"kubernetes.io/projected/e5546b0b-b961-426c-9183-8a5ad6ad11d6-kube-api-access-n77bk\") pod \"redhat-marketplace-wdhbd\" (UID: \"e5546b0b-b961-426c-9183-8a5ad6ad11d6\") " pod="openshift-marketplace/redhat-marketplace-wdhbd" Jan 31 10:08:01 crc kubenswrapper[4830]: I0131 10:08:01.022434 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wdhbd" Jan 31 10:08:01 crc kubenswrapper[4830]: I0131 10:08:01.565991 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wdhbd"] Jan 31 10:08:02 crc kubenswrapper[4830]: I0131 10:08:02.344948 4830 generic.go:334] "Generic (PLEG): container finished" podID="e5546b0b-b961-426c-9183-8a5ad6ad11d6" containerID="105f190cc1660a94e1d69a3897219cf65da70417f943ced6319d8c8346c734d6" exitCode=0 Jan 31 10:08:02 crc kubenswrapper[4830]: I0131 10:08:02.345016 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wdhbd" event={"ID":"e5546b0b-b961-426c-9183-8a5ad6ad11d6","Type":"ContainerDied","Data":"105f190cc1660a94e1d69a3897219cf65da70417f943ced6319d8c8346c734d6"} Jan 31 10:08:02 crc kubenswrapper[4830]: I0131 10:08:02.345397 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wdhbd" event={"ID":"e5546b0b-b961-426c-9183-8a5ad6ad11d6","Type":"ContainerStarted","Data":"f27bce7187ab1503db3947810566673628d14b94c412aa7b06f0b39db44a12a4"} Jan 31 10:08:04 crc kubenswrapper[4830]: I0131 10:08:04.370619 4830 generic.go:334] "Generic (PLEG): container finished" podID="e5546b0b-b961-426c-9183-8a5ad6ad11d6" containerID="ead5f62c1233b87d15bd5188d10a14d962777826bdd0b79767f05a5bed1e2d5d" exitCode=0 Jan 31 10:08:04 crc kubenswrapper[4830]: I0131 10:08:04.371013 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wdhbd" event={"ID":"e5546b0b-b961-426c-9183-8a5ad6ad11d6","Type":"ContainerDied","Data":"ead5f62c1233b87d15bd5188d10a14d962777826bdd0b79767f05a5bed1e2d5d"} Jan 31 10:08:05 crc kubenswrapper[4830]: I0131 10:08:05.383423 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wdhbd" event={"ID":"e5546b0b-b961-426c-9183-8a5ad6ad11d6","Type":"ContainerStarted","Data":"4ce1efe5403ccc86396c34a3b3ff7a1c03efb478caf292b67368498c63eb223e"} Jan 31 10:08:05 crc kubenswrapper[4830]: I0131 10:08:05.407701 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wdhbd" podStartSLOduration=2.996387928 podStartE2EDuration="5.407675644s" podCreationTimestamp="2026-01-31 10:08:00 +0000 UTC" firstStartedPulling="2026-01-31 10:08:02.347793857 +0000 UTC m=+4026.841156299" lastFinishedPulling="2026-01-31 10:08:04.759081563 +0000 UTC m=+4029.252444015" observedRunningTime="2026-01-31 10:08:05.404821433 +0000 UTC m=+4029.898183875" watchObservedRunningTime="2026-01-31 10:08:05.407675644 +0000 UTC m=+4029.901038096" Jan 31 10:08:11 crc kubenswrapper[4830]: I0131 10:08:11.022866 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wdhbd" Jan 31 10:08:11 crc kubenswrapper[4830]: I0131 10:08:11.025429 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wdhbd" Jan 31 10:08:11 crc kubenswrapper[4830]: I0131 10:08:11.078026 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wdhbd" Jan 31 10:08:12 crc kubenswrapper[4830]: I0131 10:08:12.031822 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wdhbd" Jan 31 10:08:14 crc kubenswrapper[4830]: I0131 10:08:14.353425 4830 patch_prober.go:28] interesting 
pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 10:08:14 crc kubenswrapper[4830]: I0131 10:08:14.353827 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 10:08:18 crc kubenswrapper[4830]: I0131 10:08:18.652426 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wdhbd"] Jan 31 10:08:18 crc kubenswrapper[4830]: I0131 10:08:18.653257 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wdhbd" podUID="e5546b0b-b961-426c-9183-8a5ad6ad11d6" containerName="registry-server" containerID="cri-o://4ce1efe5403ccc86396c34a3b3ff7a1c03efb478caf292b67368498c63eb223e" gracePeriod=2 Jan 31 10:08:19 crc kubenswrapper[4830]: I0131 10:08:19.338165 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wdhbd" Jan 31 10:08:19 crc kubenswrapper[4830]: I0131 10:08:19.511942 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5546b0b-b961-426c-9183-8a5ad6ad11d6-catalog-content\") pod \"e5546b0b-b961-426c-9183-8a5ad6ad11d6\" (UID: \"e5546b0b-b961-426c-9183-8a5ad6ad11d6\") " Jan 31 10:08:19 crc kubenswrapper[4830]: I0131 10:08:19.512012 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5546b0b-b961-426c-9183-8a5ad6ad11d6-utilities\") pod \"e5546b0b-b961-426c-9183-8a5ad6ad11d6\" (UID: \"e5546b0b-b961-426c-9183-8a5ad6ad11d6\") " Jan 31 10:08:19 crc kubenswrapper[4830]: I0131 10:08:19.512116 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n77bk\" (UniqueName: \"kubernetes.io/projected/e5546b0b-b961-426c-9183-8a5ad6ad11d6-kube-api-access-n77bk\") pod \"e5546b0b-b961-426c-9183-8a5ad6ad11d6\" (UID: \"e5546b0b-b961-426c-9183-8a5ad6ad11d6\") " Jan 31 10:08:19 crc kubenswrapper[4830]: I0131 10:08:19.513688 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5546b0b-b961-426c-9183-8a5ad6ad11d6-utilities" (OuterVolumeSpecName: "utilities") pod "e5546b0b-b961-426c-9183-8a5ad6ad11d6" (UID: "e5546b0b-b961-426c-9183-8a5ad6ad11d6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:08:19 crc kubenswrapper[4830]: I0131 10:08:19.531991 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5546b0b-b961-426c-9183-8a5ad6ad11d6-kube-api-access-n77bk" (OuterVolumeSpecName: "kube-api-access-n77bk") pod "e5546b0b-b961-426c-9183-8a5ad6ad11d6" (UID: "e5546b0b-b961-426c-9183-8a5ad6ad11d6"). InnerVolumeSpecName "kube-api-access-n77bk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 10:08:19 crc kubenswrapper[4830]: I0131 10:08:19.545442 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5546b0b-b961-426c-9183-8a5ad6ad11d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5546b0b-b961-426c-9183-8a5ad6ad11d6" (UID: "e5546b0b-b961-426c-9183-8a5ad6ad11d6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:08:19 crc kubenswrapper[4830]: I0131 10:08:19.553594 4830 generic.go:334] "Generic (PLEG): container finished" podID="e5546b0b-b961-426c-9183-8a5ad6ad11d6" containerID="4ce1efe5403ccc86396c34a3b3ff7a1c03efb478caf292b67368498c63eb223e" exitCode=0 Jan 31 10:08:19 crc kubenswrapper[4830]: I0131 10:08:19.553645 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wdhbd" event={"ID":"e5546b0b-b961-426c-9183-8a5ad6ad11d6","Type":"ContainerDied","Data":"4ce1efe5403ccc86396c34a3b3ff7a1c03efb478caf292b67368498c63eb223e"} Jan 31 10:08:19 crc kubenswrapper[4830]: I0131 10:08:19.553671 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wdhbd" event={"ID":"e5546b0b-b961-426c-9183-8a5ad6ad11d6","Type":"ContainerDied","Data":"f27bce7187ab1503db3947810566673628d14b94c412aa7b06f0b39db44a12a4"} Jan 31 10:08:19 crc kubenswrapper[4830]: I0131 10:08:19.553690 4830 scope.go:117] "RemoveContainer" containerID="4ce1efe5403ccc86396c34a3b3ff7a1c03efb478caf292b67368498c63eb223e" Jan 31 10:08:19 crc kubenswrapper[4830]: I0131 10:08:19.553862 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wdhbd" Jan 31 10:08:19 crc kubenswrapper[4830]: I0131 10:08:19.609179 4830 scope.go:117] "RemoveContainer" containerID="ead5f62c1233b87d15bd5188d10a14d962777826bdd0b79767f05a5bed1e2d5d" Jan 31 10:08:19 crc kubenswrapper[4830]: I0131 10:08:19.609562 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wdhbd"] Jan 31 10:08:19 crc kubenswrapper[4830]: I0131 10:08:19.615297 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5546b0b-b961-426c-9183-8a5ad6ad11d6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 10:08:19 crc kubenswrapper[4830]: I0131 10:08:19.615336 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5546b0b-b961-426c-9183-8a5ad6ad11d6-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 10:08:19 crc kubenswrapper[4830]: I0131 10:08:19.615347 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n77bk\" (UniqueName: \"kubernetes.io/projected/e5546b0b-b961-426c-9183-8a5ad6ad11d6-kube-api-access-n77bk\") on node \"crc\" DevicePath \"\"" Jan 31 10:08:19 crc kubenswrapper[4830]: I0131 10:08:19.623200 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wdhbd"] Jan 31 10:08:19 crc kubenswrapper[4830]: I0131 10:08:19.629938 4830 scope.go:117] "RemoveContainer" containerID="105f190cc1660a94e1d69a3897219cf65da70417f943ced6319d8c8346c734d6" Jan 31 10:08:19 crc kubenswrapper[4830]: I0131 10:08:19.689887 4830 scope.go:117] "RemoveContainer" containerID="4ce1efe5403ccc86396c34a3b3ff7a1c03efb478caf292b67368498c63eb223e" Jan 31 10:08:19 crc kubenswrapper[4830]: E0131 10:08:19.690827 4830 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ce1efe5403ccc86396c34a3b3ff7a1c03efb478caf292b67368498c63eb223e\": container with ID starting with 4ce1efe5403ccc86396c34a3b3ff7a1c03efb478caf292b67368498c63eb223e not found: ID does not exist" containerID="4ce1efe5403ccc86396c34a3b3ff7a1c03efb478caf292b67368498c63eb223e" Jan 31 10:08:19 crc kubenswrapper[4830]: I0131 10:08:19.690865 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ce1efe5403ccc86396c34a3b3ff7a1c03efb478caf292b67368498c63eb223e"} err="failed to get container status \"4ce1efe5403ccc86396c34a3b3ff7a1c03efb478caf292b67368498c63eb223e\": rpc error: code = NotFound desc = could not find container \"4ce1efe5403ccc86396c34a3b3ff7a1c03efb478caf292b67368498c63eb223e\": container with ID starting with 4ce1efe5403ccc86396c34a3b3ff7a1c03efb478caf292b67368498c63eb223e not found: ID does not exist" Jan 31 10:08:19 crc kubenswrapper[4830]: I0131 10:08:19.690888 4830 scope.go:117] "RemoveContainer" containerID="ead5f62c1233b87d15bd5188d10a14d962777826bdd0b79767f05a5bed1e2d5d" Jan 31 10:08:19 crc kubenswrapper[4830]: E0131 10:08:19.691183 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ead5f62c1233b87d15bd5188d10a14d962777826bdd0b79767f05a5bed1e2d5d\": container with ID starting with ead5f62c1233b87d15bd5188d10a14d962777826bdd0b79767f05a5bed1e2d5d not found: ID does not exist" containerID="ead5f62c1233b87d15bd5188d10a14d962777826bdd0b79767f05a5bed1e2d5d" Jan 31 10:08:19 crc kubenswrapper[4830]: I0131 10:08:19.691203 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ead5f62c1233b87d15bd5188d10a14d962777826bdd0b79767f05a5bed1e2d5d"} err="failed to get container status \"ead5f62c1233b87d15bd5188d10a14d962777826bdd0b79767f05a5bed1e2d5d\": rpc error: code = NotFound desc = could not find container \"ead5f62c1233b87d15bd5188d10a14d962777826bdd0b79767f05a5bed1e2d5d\": container with ID starting with ead5f62c1233b87d15bd5188d10a14d962777826bdd0b79767f05a5bed1e2d5d not found: ID does not exist" Jan 31 10:08:19 crc kubenswrapper[4830]: I0131 10:08:19.691217 4830 scope.go:117] "RemoveContainer" containerID="105f190cc1660a94e1d69a3897219cf65da70417f943ced6319d8c8346c734d6" Jan 31 10:08:19 crc kubenswrapper[4830]: E0131 10:08:19.691519 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"105f190cc1660a94e1d69a3897219cf65da70417f943ced6319d8c8346c734d6\": container with ID starting with 105f190cc1660a94e1d69a3897219cf65da70417f943ced6319d8c8346c734d6 not found: ID does not exist" containerID="105f190cc1660a94e1d69a3897219cf65da70417f943ced6319d8c8346c734d6" Jan 31 10:08:19 crc kubenswrapper[4830]: I0131 10:08:19.691537 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"105f190cc1660a94e1d69a3897219cf65da70417f943ced6319d8c8346c734d6"} err="failed to get container status \"105f190cc1660a94e1d69a3897219cf65da70417f943ced6319d8c8346c734d6\": rpc error: code = NotFound desc = could not find container \"105f190cc1660a94e1d69a3897219cf65da70417f943ced6319d8c8346c734d6\": container with ID starting with 105f190cc1660a94e1d69a3897219cf65da70417f943ced6319d8c8346c734d6 not found: ID does not exist" Jan 31 10:08:20 crc kubenswrapper[4830]: I0131 10:08:20.263223 4830 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="e5546b0b-b961-426c-9183-8a5ad6ad11d6" path="/var/lib/kubelet/pods/e5546b0b-b961-426c-9183-8a5ad6ad11d6/volumes" Jan 31 10:08:44 crc kubenswrapper[4830]: I0131 10:08:44.353046 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 10:08:44 crc kubenswrapper[4830]: I0131 10:08:44.354335 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 10:08:44 crc kubenswrapper[4830]: I0131 10:08:44.354456 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 10:08:44 crc kubenswrapper[4830]: I0131 10:08:44.355990 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b092ca3bb9b9a206ab1c26c23ac27ed470668236cc5c21830f6134f0e65d665"} pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 10:08:44 crc kubenswrapper[4830]: I0131 10:08:44.356081 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" containerID="cri-o://0b092ca3bb9b9a206ab1c26c23ac27ed470668236cc5c21830f6134f0e65d665" gracePeriod=600 Jan 31 10:08:44 crc kubenswrapper[4830]: I0131 10:08:44.834974 4830 generic.go:334] "Generic (PLEG): container finished" podID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerID="0b092ca3bb9b9a206ab1c26c23ac27ed470668236cc5c21830f6134f0e65d665" exitCode=0 Jan 31 10:08:44 crc kubenswrapper[4830]: I0131 10:08:44.835045 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerDied","Data":"0b092ca3bb9b9a206ab1c26c23ac27ed470668236cc5c21830f6134f0e65d665"} Jan 31 10:08:44 crc kubenswrapper[4830]: I0131 10:08:44.835593 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerStarted","Data":"7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3"} Jan 31 10:08:44 crc kubenswrapper[4830]: I0131 10:08:44.835619 4830 scope.go:117] "RemoveContainer" containerID="9476534483fcc007d26142cba83303b61999a6cb67a49527304ef1ac3d85e163" Jan 31 10:09:48 crc kubenswrapper[4830]: I0131 10:09:48.697167 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9jc8d"] Jan 31 10:09:48 crc kubenswrapper[4830]: E0131 10:09:48.698661 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5546b0b-b961-426c-9183-8a5ad6ad11d6" containerName="extract-utilities" Jan 31 10:09:48 crc kubenswrapper[4830]: I0131 10:09:48.698681 4830 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e5546b0b-b961-426c-9183-8a5ad6ad11d6" containerName="extract-utilities" Jan 31 10:09:48 crc kubenswrapper[4830]: E0131 10:09:48.698706 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5546b0b-b961-426c-9183-8a5ad6ad11d6" containerName="extract-content" Jan 31 10:09:48 crc kubenswrapper[4830]: I0131 10:09:48.698715 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5546b0b-b961-426c-9183-8a5ad6ad11d6" containerName="extract-content" Jan 31 10:09:48 crc kubenswrapper[4830]: E0131 10:09:48.698747 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5546b0b-b961-426c-9183-8a5ad6ad11d6" containerName="registry-server" Jan 31 10:09:48 crc kubenswrapper[4830]: I0131 10:09:48.698756 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5546b0b-b961-426c-9183-8a5ad6ad11d6" containerName="registry-server" Jan 31 10:09:48 crc kubenswrapper[4830]: I0131 10:09:48.699050 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5546b0b-b961-426c-9183-8a5ad6ad11d6" containerName="registry-server" Jan 31 10:09:48 crc kubenswrapper[4830]: I0131 10:09:48.701103 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9jc8d" Jan 31 10:09:48 crc kubenswrapper[4830]: I0131 10:09:48.718132 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9jc8d"] Jan 31 10:09:48 crc kubenswrapper[4830]: I0131 10:09:48.782850 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtcv2\" (UniqueName: \"kubernetes.io/projected/8b225c54-de8c-42fc-8600-04b4ea4bff34-kube-api-access-wtcv2\") pod \"community-operators-9jc8d\" (UID: \"8b225c54-de8c-42fc-8600-04b4ea4bff34\") " pod="openshift-marketplace/community-operators-9jc8d" Jan 31 10:09:48 crc kubenswrapper[4830]: I0131 10:09:48.783284 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b225c54-de8c-42fc-8600-04b4ea4bff34-utilities\") pod \"community-operators-9jc8d\" (UID: \"8b225c54-de8c-42fc-8600-04b4ea4bff34\") " pod="openshift-marketplace/community-operators-9jc8d" Jan 31 10:09:48 crc kubenswrapper[4830]: I0131 10:09:48.783607 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b225c54-de8c-42fc-8600-04b4ea4bff34-catalog-content\") pod \"community-operators-9jc8d\" (UID: \"8b225c54-de8c-42fc-8600-04b4ea4bff34\") " pod="openshift-marketplace/community-operators-9jc8d" Jan 31 10:09:48 crc kubenswrapper[4830]: I0131 10:09:48.886838 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b225c54-de8c-42fc-8600-04b4ea4bff34-catalog-content\") pod \"community-operators-9jc8d\" (UID: \"8b225c54-de8c-42fc-8600-04b4ea4bff34\") " pod="openshift-marketplace/community-operators-9jc8d" Jan 31 10:09:48 crc kubenswrapper[4830]: I0131 10:09:48.886998 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtcv2\" (UniqueName: \"kubernetes.io/projected/8b225c54-de8c-42fc-8600-04b4ea4bff34-kube-api-access-wtcv2\") pod \"community-operators-9jc8d\" (UID: \"8b225c54-de8c-42fc-8600-04b4ea4bff34\") " pod="openshift-marketplace/community-operators-9jc8d" Jan 31 10:09:48 crc kubenswrapper[4830]: I0131 
10:09:48.887077 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b225c54-de8c-42fc-8600-04b4ea4bff34-utilities\") pod \"community-operators-9jc8d\" (UID: \"8b225c54-de8c-42fc-8600-04b4ea4bff34\") " pod="openshift-marketplace/community-operators-9jc8d" Jan 31 10:09:48 crc kubenswrapper[4830]: I0131 10:09:48.887902 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b225c54-de8c-42fc-8600-04b4ea4bff34-utilities\") pod \"community-operators-9jc8d\" (UID: \"8b225c54-de8c-42fc-8600-04b4ea4bff34\") " pod="openshift-marketplace/community-operators-9jc8d" Jan 31 10:09:48 crc kubenswrapper[4830]: I0131 10:09:48.888200 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b225c54-de8c-42fc-8600-04b4ea4bff34-catalog-content\") pod \"community-operators-9jc8d\" (UID: \"8b225c54-de8c-42fc-8600-04b4ea4bff34\") " pod="openshift-marketplace/community-operators-9jc8d" Jan 31 10:09:48 crc kubenswrapper[4830]: I0131 10:09:48.917698 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtcv2\" (UniqueName: \"kubernetes.io/projected/8b225c54-de8c-42fc-8600-04b4ea4bff34-kube-api-access-wtcv2\") pod \"community-operators-9jc8d\" (UID: \"8b225c54-de8c-42fc-8600-04b4ea4bff34\") " pod="openshift-marketplace/community-operators-9jc8d" Jan 31 10:09:49 crc kubenswrapper[4830]: I0131 10:09:49.044531 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9jc8d" Jan 31 10:09:49 crc kubenswrapper[4830]: I0131 10:09:49.737580 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9jc8d"] Jan 31 10:09:50 crc kubenswrapper[4830]: I0131 10:09:50.655940 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jc8d" event={"ID":"8b225c54-de8c-42fc-8600-04b4ea4bff34","Type":"ContainerStarted","Data":"0d496b1e54f64026b53767f853ed76a4f26d573bb23d20e96e603d540b0d32db"} Jan 31 10:09:51 crc kubenswrapper[4830]: I0131 10:09:51.673517 4830 generic.go:334] "Generic (PLEG): container finished" podID="8b225c54-de8c-42fc-8600-04b4ea4bff34" containerID="f64ba7b287e07fb5e79ed77239932d0cdae306c0837c1182f545933c748162db" exitCode=0 Jan 31 10:09:51 crc kubenswrapper[4830]: I0131 10:09:51.673588 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jc8d" event={"ID":"8b225c54-de8c-42fc-8600-04b4ea4bff34","Type":"ContainerDied","Data":"f64ba7b287e07fb5e79ed77239932d0cdae306c0837c1182f545933c748162db"} Jan 31 10:09:52 crc kubenswrapper[4830]: I0131 10:09:52.685216 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jc8d" event={"ID":"8b225c54-de8c-42fc-8600-04b4ea4bff34","Type":"ContainerStarted","Data":"4f05e3d07e972219385395046958277d4bd30273e29494ca2c5bb3d5627f3fdd"} Jan 31 10:09:54 crc kubenswrapper[4830]: I0131 10:09:54.711902 4830 generic.go:334] "Generic (PLEG): container finished" podID="8b225c54-de8c-42fc-8600-04b4ea4bff34" containerID="4f05e3d07e972219385395046958277d4bd30273e29494ca2c5bb3d5627f3fdd" exitCode=0 Jan 31 10:09:54 crc kubenswrapper[4830]: I0131 10:09:54.712335 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jc8d" 
event={"ID":"8b225c54-de8c-42fc-8600-04b4ea4bff34","Type":"ContainerDied","Data":"4f05e3d07e972219385395046958277d4bd30273e29494ca2c5bb3d5627f3fdd"} Jan 31 10:09:55 crc kubenswrapper[4830]: I0131 10:09:55.730050 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jc8d" event={"ID":"8b225c54-de8c-42fc-8600-04b4ea4bff34","Type":"ContainerStarted","Data":"fe9eaf68768adcb87983a6ef940de76d6c2e1e170d82c49903fa4ab84ccb5e81"} Jan 31 10:09:55 crc kubenswrapper[4830]: I0131 10:09:55.755817 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9jc8d" podStartSLOduration=4.316579465 podStartE2EDuration="7.755799053s" podCreationTimestamp="2026-01-31 10:09:48 +0000 UTC" firstStartedPulling="2026-01-31 10:09:51.67624279 +0000 UTC m=+4136.169605252" lastFinishedPulling="2026-01-31 10:09:55.115462398 +0000 UTC m=+4139.608824840" observedRunningTime="2026-01-31 10:09:55.748938878 +0000 UTC m=+4140.242301340" watchObservedRunningTime="2026-01-31 10:09:55.755799053 +0000 UTC m=+4140.249161495" Jan 31 10:09:59 crc kubenswrapper[4830]: I0131 10:09:59.045570 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9jc8d" Jan 31 10:09:59 crc kubenswrapper[4830]: I0131 10:09:59.046240 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9jc8d" Jan 31 10:09:59 crc kubenswrapper[4830]: I0131 10:09:59.118934 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9jc8d" Jan 31 10:10:09 crc kubenswrapper[4830]: I0131 10:10:09.112852 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9jc8d" Jan 31 10:10:09 crc kubenswrapper[4830]: I0131 10:10:09.175426 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9jc8d"] Jan 31 10:10:09 crc kubenswrapper[4830]: I0131 10:10:09.930946 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9jc8d" podUID="8b225c54-de8c-42fc-8600-04b4ea4bff34" containerName="registry-server" containerID="cri-o://fe9eaf68768adcb87983a6ef940de76d6c2e1e170d82c49903fa4ab84ccb5e81" gracePeriod=2 Jan 31 10:10:10 crc kubenswrapper[4830]: I0131 10:10:10.487818 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9jc8d" Jan 31 10:10:10 crc kubenswrapper[4830]: I0131 10:10:10.591958 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtcv2\" (UniqueName: \"kubernetes.io/projected/8b225c54-de8c-42fc-8600-04b4ea4bff34-kube-api-access-wtcv2\") pod \"8b225c54-de8c-42fc-8600-04b4ea4bff34\" (UID: \"8b225c54-de8c-42fc-8600-04b4ea4bff34\") " Jan 31 10:10:10 crc kubenswrapper[4830]: I0131 10:10:10.592054 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b225c54-de8c-42fc-8600-04b4ea4bff34-utilities\") pod \"8b225c54-de8c-42fc-8600-04b4ea4bff34\" (UID: \"8b225c54-de8c-42fc-8600-04b4ea4bff34\") " Jan 31 10:10:10 crc kubenswrapper[4830]: I0131 10:10:10.592074 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b225c54-de8c-42fc-8600-04b4ea4bff34-catalog-content\") pod \"8b225c54-de8c-42fc-8600-04b4ea4bff34\" (UID: \"8b225c54-de8c-42fc-8600-04b4ea4bff34\") " Jan 31 10:10:10 crc kubenswrapper[4830]: I0131 10:10:10.593171 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b225c54-de8c-42fc-8600-04b4ea4bff34-utilities" (OuterVolumeSpecName: "utilities") pod "8b225c54-de8c-42fc-8600-04b4ea4bff34" (UID: "8b225c54-de8c-42fc-8600-04b4ea4bff34"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:10:10 crc kubenswrapper[4830]: I0131 10:10:10.595046 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b225c54-de8c-42fc-8600-04b4ea4bff34-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 10:10:10 crc kubenswrapper[4830]: I0131 10:10:10.599264 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b225c54-de8c-42fc-8600-04b4ea4bff34-kube-api-access-wtcv2" (OuterVolumeSpecName: "kube-api-access-wtcv2") pod "8b225c54-de8c-42fc-8600-04b4ea4bff34" (UID: "8b225c54-de8c-42fc-8600-04b4ea4bff34"). InnerVolumeSpecName "kube-api-access-wtcv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 10:10:10 crc kubenswrapper[4830]: I0131 10:10:10.646256 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b225c54-de8c-42fc-8600-04b4ea4bff34-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b225c54-de8c-42fc-8600-04b4ea4bff34" (UID: "8b225c54-de8c-42fc-8600-04b4ea4bff34"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:10:10 crc kubenswrapper[4830]: I0131 10:10:10.697755 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtcv2\" (UniqueName: \"kubernetes.io/projected/8b225c54-de8c-42fc-8600-04b4ea4bff34-kube-api-access-wtcv2\") on node \"crc\" DevicePath \"\"" Jan 31 10:10:10 crc kubenswrapper[4830]: I0131 10:10:10.698177 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b225c54-de8c-42fc-8600-04b4ea4bff34-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 10:10:10 crc kubenswrapper[4830]: I0131 10:10:10.945030 4830 generic.go:334] "Generic (PLEG): container finished" podID="8b225c54-de8c-42fc-8600-04b4ea4bff34" containerID="fe9eaf68768adcb87983a6ef940de76d6c2e1e170d82c49903fa4ab84ccb5e81" exitCode=0 Jan 31 10:10:10 crc kubenswrapper[4830]: I0131 10:10:10.945077 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jc8d" event={"ID":"8b225c54-de8c-42fc-8600-04b4ea4bff34","Type":"ContainerDied","Data":"fe9eaf68768adcb87983a6ef940de76d6c2e1e170d82c49903fa4ab84ccb5e81"} Jan 31 10:10:10 crc kubenswrapper[4830]: I0131 10:10:10.945111 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jc8d" event={"ID":"8b225c54-de8c-42fc-8600-04b4ea4bff34","Type":"ContainerDied","Data":"0d496b1e54f64026b53767f853ed76a4f26d573bb23d20e96e603d540b0d32db"} Jan 31 10:10:10 crc kubenswrapper[4830]: I0131 10:10:10.945111 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9jc8d" Jan 31 10:10:10 crc kubenswrapper[4830]: I0131 10:10:10.945129 4830 scope.go:117] "RemoveContainer" containerID="fe9eaf68768adcb87983a6ef940de76d6c2e1e170d82c49903fa4ab84ccb5e81" Jan 31 10:10:10 crc kubenswrapper[4830]: I0131 10:10:10.973053 4830 scope.go:117] "RemoveContainer" containerID="4f05e3d07e972219385395046958277d4bd30273e29494ca2c5bb3d5627f3fdd" Jan 31 10:10:10 crc kubenswrapper[4830]: I0131 10:10:10.990201 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9jc8d"] Jan 31 10:10:11 crc kubenswrapper[4830]: I0131 10:10:11.004933 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9jc8d"] Jan 31 10:10:11 crc kubenswrapper[4830]: I0131 10:10:11.007538 4830 scope.go:117] "RemoveContainer" containerID="f64ba7b287e07fb5e79ed77239932d0cdae306c0837c1182f545933c748162db" Jan 31 10:10:11 crc kubenswrapper[4830]: I0131 10:10:11.067667 4830 scope.go:117] "RemoveContainer" containerID="fe9eaf68768adcb87983a6ef940de76d6c2e1e170d82c49903fa4ab84ccb5e81" Jan 31 10:10:11 crc kubenswrapper[4830]: E0131 10:10:11.068394 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe9eaf68768adcb87983a6ef940de76d6c2e1e170d82c49903fa4ab84ccb5e81\": container with ID starting with fe9eaf68768adcb87983a6ef940de76d6c2e1e170d82c49903fa4ab84ccb5e81 not found: ID does not exist" containerID="fe9eaf68768adcb87983a6ef940de76d6c2e1e170d82c49903fa4ab84ccb5e81" Jan 31 10:10:11 crc kubenswrapper[4830]: I0131 10:10:11.068438 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe9eaf68768adcb87983a6ef940de76d6c2e1e170d82c49903fa4ab84ccb5e81"} err="failed to get container status 
\"fe9eaf68768adcb87983a6ef940de76d6c2e1e170d82c49903fa4ab84ccb5e81\": rpc error: code = NotFound desc = could not find container \"fe9eaf68768adcb87983a6ef940de76d6c2e1e170d82c49903fa4ab84ccb5e81\": container with ID starting with fe9eaf68768adcb87983a6ef940de76d6c2e1e170d82c49903fa4ab84ccb5e81 not found: ID does not exist" Jan 31 10:10:11 crc kubenswrapper[4830]: I0131 10:10:11.068464 4830 scope.go:117] "RemoveContainer" containerID="4f05e3d07e972219385395046958277d4bd30273e29494ca2c5bb3d5627f3fdd" Jan 31 10:10:11 crc kubenswrapper[4830]: E0131 10:10:11.068811 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f05e3d07e972219385395046958277d4bd30273e29494ca2c5bb3d5627f3fdd\": container with ID starting with 4f05e3d07e972219385395046958277d4bd30273e29494ca2c5bb3d5627f3fdd not found: ID does not exist" containerID="4f05e3d07e972219385395046958277d4bd30273e29494ca2c5bb3d5627f3fdd" Jan 31 10:10:11 crc kubenswrapper[4830]: I0131 10:10:11.068842 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f05e3d07e972219385395046958277d4bd30273e29494ca2c5bb3d5627f3fdd"} err="failed to get container status \"4f05e3d07e972219385395046958277d4bd30273e29494ca2c5bb3d5627f3fdd\": rpc error: code = NotFound desc = could not find container \"4f05e3d07e972219385395046958277d4bd30273e29494ca2c5bb3d5627f3fdd\": container with ID starting with 4f05e3d07e972219385395046958277d4bd30273e29494ca2c5bb3d5627f3fdd not found: ID does not exist" Jan 31 10:10:11 crc kubenswrapper[4830]: I0131 10:10:11.068866 4830 scope.go:117] "RemoveContainer" containerID="f64ba7b287e07fb5e79ed77239932d0cdae306c0837c1182f545933c748162db" Jan 31 10:10:11 crc kubenswrapper[4830]: E0131 10:10:11.069618 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f64ba7b287e07fb5e79ed77239932d0cdae306c0837c1182f545933c748162db\": container with ID starting with f64ba7b287e07fb5e79ed77239932d0cdae306c0837c1182f545933c748162db not found: ID does not exist" containerID="f64ba7b287e07fb5e79ed77239932d0cdae306c0837c1182f545933c748162db" Jan 31 10:10:11 crc kubenswrapper[4830]: I0131 10:10:11.069647 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f64ba7b287e07fb5e79ed77239932d0cdae306c0837c1182f545933c748162db"} err="failed to get container status \"f64ba7b287e07fb5e79ed77239932d0cdae306c0837c1182f545933c748162db\": rpc error: code = NotFound desc = could not find container \"f64ba7b287e07fb5e79ed77239932d0cdae306c0837c1182f545933c748162db\": container with ID starting with f64ba7b287e07fb5e79ed77239932d0cdae306c0837c1182f545933c748162db not found: ID does not exist" Jan 31 10:10:12 crc kubenswrapper[4830]: I0131 10:10:12.265320 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b225c54-de8c-42fc-8600-04b4ea4bff34" path="/var/lib/kubelet/pods/8b225c54-de8c-42fc-8600-04b4ea4bff34/volumes" Jan 31 10:10:44 crc kubenswrapper[4830]: I0131 10:10:44.353356 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 10:10:44 crc kubenswrapper[4830]: I0131 10:10:44.354002 4830 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 10:10:57 crc kubenswrapper[4830]: I0131 10:10:57.192681 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kq26f"] Jan 31 10:10:57 crc kubenswrapper[4830]: E0131 10:10:57.194049 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b225c54-de8c-42fc-8600-04b4ea4bff34" containerName="extract-utilities" Jan 31 10:10:57 crc kubenswrapper[4830]: I0131 10:10:57.194072 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b225c54-de8c-42fc-8600-04b4ea4bff34" containerName="extract-utilities" Jan 31 10:10:57 crc kubenswrapper[4830]: E0131 10:10:57.194130 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b225c54-de8c-42fc-8600-04b4ea4bff34" containerName="extract-content" Jan 31 10:10:57 crc kubenswrapper[4830]: I0131 10:10:57.194139 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b225c54-de8c-42fc-8600-04b4ea4bff34" containerName="extract-content" Jan 31 10:10:57 crc kubenswrapper[4830]: E0131 10:10:57.194156 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b225c54-de8c-42fc-8600-04b4ea4bff34" containerName="registry-server" Jan 31 10:10:57 crc kubenswrapper[4830]: I0131 10:10:57.194163 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b225c54-de8c-42fc-8600-04b4ea4bff34" containerName="registry-server" Jan 31 10:10:57 crc kubenswrapper[4830]: I0131 10:10:57.194461 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b225c54-de8c-42fc-8600-04b4ea4bff34" containerName="registry-server" Jan 31 10:10:57 crc kubenswrapper[4830]: I0131 10:10:57.196761 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kq26f" Jan 31 10:10:57 crc kubenswrapper[4830]: I0131 10:10:57.204497 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kq26f"] Jan 31 10:10:57 crc kubenswrapper[4830]: I0131 10:10:57.303691 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98z9t\" (UniqueName: \"kubernetes.io/projected/a1fd3882-b7b7-4563-a50a-db9356faa4ba-kube-api-access-98z9t\") pod \"certified-operators-kq26f\" (UID: \"a1fd3882-b7b7-4563-a50a-db9356faa4ba\") " pod="openshift-marketplace/certified-operators-kq26f" Jan 31 10:10:57 crc kubenswrapper[4830]: I0131 10:10:57.303882 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1fd3882-b7b7-4563-a50a-db9356faa4ba-catalog-content\") pod \"certified-operators-kq26f\" (UID: \"a1fd3882-b7b7-4563-a50a-db9356faa4ba\") " pod="openshift-marketplace/certified-operators-kq26f" Jan 31 10:10:57 crc kubenswrapper[4830]: I0131 10:10:57.304581 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1fd3882-b7b7-4563-a50a-db9356faa4ba-utilities\") pod \"certified-operators-kq26f\" (UID: \"a1fd3882-b7b7-4563-a50a-db9356faa4ba\") " pod="openshift-marketplace/certified-operators-kq26f" Jan 31 10:10:57 crc kubenswrapper[4830]: I0131 10:10:57.406999 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1fd3882-b7b7-4563-a50a-db9356faa4ba-catalog-content\") pod \"certified-operators-kq26f\" (UID: \"a1fd3882-b7b7-4563-a50a-db9356faa4ba\") " pod="openshift-marketplace/certified-operators-kq26f" Jan 31 10:10:57 crc kubenswrapper[4830]: I0131 10:10:57.407207 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1fd3882-b7b7-4563-a50a-db9356faa4ba-utilities\") pod \"certified-operators-kq26f\" (UID: \"a1fd3882-b7b7-4563-a50a-db9356faa4ba\") " pod="openshift-marketplace/certified-operators-kq26f" Jan 31 10:10:57 crc kubenswrapper[4830]: I0131 10:10:57.407272 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98z9t\" (UniqueName: \"kubernetes.io/projected/a1fd3882-b7b7-4563-a50a-db9356faa4ba-kube-api-access-98z9t\") pod \"certified-operators-kq26f\" (UID: \"a1fd3882-b7b7-4563-a50a-db9356faa4ba\") " pod="openshift-marketplace/certified-operators-kq26f" Jan 31 10:10:57 crc kubenswrapper[4830]: I0131 10:10:57.407589 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1fd3882-b7b7-4563-a50a-db9356faa4ba-catalog-content\") pod \"certified-operators-kq26f\" (UID: \"a1fd3882-b7b7-4563-a50a-db9356faa4ba\") " pod="openshift-marketplace/certified-operators-kq26f" Jan 31 10:10:57 crc kubenswrapper[4830]: I0131 10:10:57.408132 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1fd3882-b7b7-4563-a50a-db9356faa4ba-utilities\") pod \"certified-operators-kq26f\" (UID: \"a1fd3882-b7b7-4563-a50a-db9356faa4ba\") " pod="openshift-marketplace/certified-operators-kq26f" Jan 31 10:10:57 crc kubenswrapper[4830]: I0131 10:10:57.437582 4830 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-98z9t\" (UniqueName: \"kubernetes.io/projected/a1fd3882-b7b7-4563-a50a-db9356faa4ba-kube-api-access-98z9t\") pod \"certified-operators-kq26f\" (UID: \"a1fd3882-b7b7-4563-a50a-db9356faa4ba\") " pod="openshift-marketplace/certified-operators-kq26f" Jan 31 10:10:57 crc kubenswrapper[4830]: I0131 10:10:57.521039 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kq26f" Jan 31 10:10:58 crc kubenswrapper[4830]: I0131 10:10:58.075923 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kq26f"] Jan 31 10:10:59 crc kubenswrapper[4830]: I0131 10:10:59.445757 4830 generic.go:334] "Generic (PLEG): container finished" podID="a1fd3882-b7b7-4563-a50a-db9356faa4ba" containerID="64861daf18bd9d10f5f698f69dbd8b453e02b7dd9c3f909b9e129280b134d65c" exitCode=0 Jan 31 10:10:59 crc kubenswrapper[4830]: I0131 10:10:59.445955 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kq26f" event={"ID":"a1fd3882-b7b7-4563-a50a-db9356faa4ba","Type":"ContainerDied","Data":"64861daf18bd9d10f5f698f69dbd8b453e02b7dd9c3f909b9e129280b134d65c"} Jan 31 10:10:59 crc kubenswrapper[4830]: I0131 10:10:59.446347 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kq26f" event={"ID":"a1fd3882-b7b7-4563-a50a-db9356faa4ba","Type":"ContainerStarted","Data":"a75a89489d8a44e9f0c626a8b59b98d7193031a89e81028366f37f8953359278"} Jan 31 10:10:59 crc kubenswrapper[4830]: I0131 10:10:59.448844 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 10:11:00 crc kubenswrapper[4830]: I0131 10:11:00.462211 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kq26f" event={"ID":"a1fd3882-b7b7-4563-a50a-db9356faa4ba","Type":"ContainerStarted","Data":"ae21dfc2b56f4323a8ca075eb13f855f16d17a47aee5720b22d1206595edc64d"} Jan 31 10:11:02 crc kubenswrapper[4830]: I0131 10:11:02.482855 4830 generic.go:334] "Generic (PLEG): container finished" podID="a1fd3882-b7b7-4563-a50a-db9356faa4ba" containerID="ae21dfc2b56f4323a8ca075eb13f855f16d17a47aee5720b22d1206595edc64d" exitCode=0 Jan 31 10:11:02 crc kubenswrapper[4830]: I0131 10:11:02.482925 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kq26f" event={"ID":"a1fd3882-b7b7-4563-a50a-db9356faa4ba","Type":"ContainerDied","Data":"ae21dfc2b56f4323a8ca075eb13f855f16d17a47aee5720b22d1206595edc64d"} Jan 31 10:11:03 crc kubenswrapper[4830]: I0131 10:11:03.495423 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kq26f" event={"ID":"a1fd3882-b7b7-4563-a50a-db9356faa4ba","Type":"ContainerStarted","Data":"a20c00f30b879fc080343afe50ce8d69ecd4e1769c8275dc2534696103bd5cb4"} Jan 31 10:11:03 crc kubenswrapper[4830]: I0131 10:11:03.522769 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kq26f" podStartSLOduration=3.089543097 podStartE2EDuration="6.522751894s" podCreationTimestamp="2026-01-31 10:10:57 +0000 UTC" firstStartedPulling="2026-01-31 10:10:59.448592744 +0000 UTC m=+4203.941955186" lastFinishedPulling="2026-01-31 10:11:02.881801541 +0000 UTC m=+4207.375163983" observedRunningTime="2026-01-31 10:11:03.516568128 +0000 UTC m=+4208.009930570" watchObservedRunningTime="2026-01-31 
10:11:03.522751894 +0000 UTC m=+4208.016114336" Jan 31 10:11:07 crc kubenswrapper[4830]: I0131 10:11:07.522189 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kq26f" Jan 31 10:11:07 crc kubenswrapper[4830]: I0131 10:11:07.522712 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kq26f" Jan 31 10:11:07 crc kubenswrapper[4830]: I0131 10:11:07.575966 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kq26f" Jan 31 10:11:14 crc kubenswrapper[4830]: I0131 10:11:14.353183 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 10:11:14 crc kubenswrapper[4830]: I0131 10:11:14.354885 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 10:11:17 crc kubenswrapper[4830]: I0131 10:11:17.574314 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kq26f" Jan 31 10:11:17 crc kubenswrapper[4830]: I0131 10:11:17.643915 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kq26f"] Jan 31 10:11:17 crc kubenswrapper[4830]: I0131 10:11:17.664542 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kq26f" podUID="a1fd3882-b7b7-4563-a50a-db9356faa4ba" containerName="registry-server" containerID="cri-o://a20c00f30b879fc080343afe50ce8d69ecd4e1769c8275dc2534696103bd5cb4" gracePeriod=2 Jan 31 10:11:18 crc kubenswrapper[4830]: I0131 10:11:18.142387 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kq26f" Jan 31 10:11:18 crc kubenswrapper[4830]: I0131 10:11:18.329179 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1fd3882-b7b7-4563-a50a-db9356faa4ba-utilities\") pod \"a1fd3882-b7b7-4563-a50a-db9356faa4ba\" (UID: \"a1fd3882-b7b7-4563-a50a-db9356faa4ba\") " Jan 31 10:11:18 crc kubenswrapper[4830]: I0131 10:11:18.329808 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1fd3882-b7b7-4563-a50a-db9356faa4ba-catalog-content\") pod \"a1fd3882-b7b7-4563-a50a-db9356faa4ba\" (UID: \"a1fd3882-b7b7-4563-a50a-db9356faa4ba\") " Jan 31 10:11:18 crc kubenswrapper[4830]: I0131 10:11:18.330309 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98z9t\" (UniqueName: \"kubernetes.io/projected/a1fd3882-b7b7-4563-a50a-db9356faa4ba-kube-api-access-98z9t\") pod \"a1fd3882-b7b7-4563-a50a-db9356faa4ba\" (UID: \"a1fd3882-b7b7-4563-a50a-db9356faa4ba\") " Jan 31 10:11:18 crc kubenswrapper[4830]: I0131 10:11:18.330458 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1fd3882-b7b7-4563-a50a-db9356faa4ba-utilities" (OuterVolumeSpecName: "utilities") pod "a1fd3882-b7b7-4563-a50a-db9356faa4ba" (UID: "a1fd3882-b7b7-4563-a50a-db9356faa4ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:11:18 crc kubenswrapper[4830]: I0131 10:11:18.334099 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1fd3882-b7b7-4563-a50a-db9356faa4ba-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 10:11:18 crc kubenswrapper[4830]: I0131 10:11:18.355012 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1fd3882-b7b7-4563-a50a-db9356faa4ba-kube-api-access-98z9t" (OuterVolumeSpecName: "kube-api-access-98z9t") pod "a1fd3882-b7b7-4563-a50a-db9356faa4ba" (UID: "a1fd3882-b7b7-4563-a50a-db9356faa4ba"). InnerVolumeSpecName "kube-api-access-98z9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 10:11:18 crc kubenswrapper[4830]: I0131 10:11:18.381434 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1fd3882-b7b7-4563-a50a-db9356faa4ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1fd3882-b7b7-4563-a50a-db9356faa4ba" (UID: "a1fd3882-b7b7-4563-a50a-db9356faa4ba"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:11:18 crc kubenswrapper[4830]: I0131 10:11:18.438662 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98z9t\" (UniqueName: \"kubernetes.io/projected/a1fd3882-b7b7-4563-a50a-db9356faa4ba-kube-api-access-98z9t\") on node \"crc\" DevicePath \"\"" Jan 31 10:11:18 crc kubenswrapper[4830]: I0131 10:11:18.438706 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1fd3882-b7b7-4563-a50a-db9356faa4ba-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 10:11:18 crc kubenswrapper[4830]: I0131 10:11:18.673989 4830 generic.go:334] "Generic (PLEG): container finished" podID="a1fd3882-b7b7-4563-a50a-db9356faa4ba" containerID="a20c00f30b879fc080343afe50ce8d69ecd4e1769c8275dc2534696103bd5cb4" exitCode=0 Jan 31 10:11:18 crc kubenswrapper[4830]: I0131 10:11:18.674026 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kq26f" event={"ID":"a1fd3882-b7b7-4563-a50a-db9356faa4ba","Type":"ContainerDied","Data":"a20c00f30b879fc080343afe50ce8d69ecd4e1769c8275dc2534696103bd5cb4"} Jan 31 10:11:18 crc kubenswrapper[4830]: I0131 10:11:18.674049 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kq26f" event={"ID":"a1fd3882-b7b7-4563-a50a-db9356faa4ba","Type":"ContainerDied","Data":"a75a89489d8a44e9f0c626a8b59b98d7193031a89e81028366f37f8953359278"} Jan 31 10:11:18 crc kubenswrapper[4830]: I0131 10:11:18.674052 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kq26f" Jan 31 10:11:18 crc kubenswrapper[4830]: I0131 10:11:18.674064 4830 scope.go:117] "RemoveContainer" containerID="a20c00f30b879fc080343afe50ce8d69ecd4e1769c8275dc2534696103bd5cb4" Jan 31 10:11:18 crc kubenswrapper[4830]: I0131 10:11:18.716048 4830 scope.go:117] "RemoveContainer" containerID="ae21dfc2b56f4323a8ca075eb13f855f16d17a47aee5720b22d1206595edc64d" Jan 31 10:11:18 crc kubenswrapper[4830]: I0131 10:11:18.729603 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kq26f"] Jan 31 10:11:18 crc kubenswrapper[4830]: I0131 10:11:18.739849 4830 scope.go:117] "RemoveContainer" containerID="64861daf18bd9d10f5f698f69dbd8b453e02b7dd9c3f909b9e129280b134d65c" Jan 31 10:11:18 crc kubenswrapper[4830]: I0131 10:11:18.741078 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kq26f"] Jan 31 10:11:18 crc kubenswrapper[4830]: I0131 10:11:18.799610 4830 scope.go:117] "RemoveContainer" containerID="a20c00f30b879fc080343afe50ce8d69ecd4e1769c8275dc2534696103bd5cb4" Jan 31 10:11:18 crc kubenswrapper[4830]: E0131 10:11:18.800437 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a20c00f30b879fc080343afe50ce8d69ecd4e1769c8275dc2534696103bd5cb4\": container with ID starting with a20c00f30b879fc080343afe50ce8d69ecd4e1769c8275dc2534696103bd5cb4 not found: ID does not exist" containerID="a20c00f30b879fc080343afe50ce8d69ecd4e1769c8275dc2534696103bd5cb4" Jan 31 10:11:18 crc kubenswrapper[4830]: I0131 10:11:18.800473 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a20c00f30b879fc080343afe50ce8d69ecd4e1769c8275dc2534696103bd5cb4"} err="failed to get container status 
\"a20c00f30b879fc080343afe50ce8d69ecd4e1769c8275dc2534696103bd5cb4\": rpc error: code = NotFound desc = could not find container \"a20c00f30b879fc080343afe50ce8d69ecd4e1769c8275dc2534696103bd5cb4\": container with ID starting with a20c00f30b879fc080343afe50ce8d69ecd4e1769c8275dc2534696103bd5cb4 not found: ID does not exist" Jan 31 10:11:18 crc kubenswrapper[4830]: I0131 10:11:18.800497 4830 scope.go:117] "RemoveContainer" containerID="ae21dfc2b56f4323a8ca075eb13f855f16d17a47aee5720b22d1206595edc64d" Jan 31 10:11:18 crc kubenswrapper[4830]: E0131 10:11:18.800828 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae21dfc2b56f4323a8ca075eb13f855f16d17a47aee5720b22d1206595edc64d\": container with ID starting with ae21dfc2b56f4323a8ca075eb13f855f16d17a47aee5720b22d1206595edc64d not found: ID does not exist" containerID="ae21dfc2b56f4323a8ca075eb13f855f16d17a47aee5720b22d1206595edc64d" Jan 31 10:11:18 crc kubenswrapper[4830]: I0131 10:11:18.800869 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae21dfc2b56f4323a8ca075eb13f855f16d17a47aee5720b22d1206595edc64d"} err="failed to get container status \"ae21dfc2b56f4323a8ca075eb13f855f16d17a47aee5720b22d1206595edc64d\": rpc error: code = NotFound desc = could not find container \"ae21dfc2b56f4323a8ca075eb13f855f16d17a47aee5720b22d1206595edc64d\": container with ID starting with ae21dfc2b56f4323a8ca075eb13f855f16d17a47aee5720b22d1206595edc64d not found: ID does not exist" Jan 31 10:11:18 crc kubenswrapper[4830]: I0131 10:11:18.800895 4830 scope.go:117] "RemoveContainer" containerID="64861daf18bd9d10f5f698f69dbd8b453e02b7dd9c3f909b9e129280b134d65c" Jan 31 10:11:18 crc kubenswrapper[4830]: E0131 10:11:18.801173 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64861daf18bd9d10f5f698f69dbd8b453e02b7dd9c3f909b9e129280b134d65c\": container with ID starting with 64861daf18bd9d10f5f698f69dbd8b453e02b7dd9c3f909b9e129280b134d65c not found: ID does not exist" containerID="64861daf18bd9d10f5f698f69dbd8b453e02b7dd9c3f909b9e129280b134d65c" Jan 31 10:11:18 crc kubenswrapper[4830]: I0131 10:11:18.801209 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64861daf18bd9d10f5f698f69dbd8b453e02b7dd9c3f909b9e129280b134d65c"} err="failed to get container status \"64861daf18bd9d10f5f698f69dbd8b453e02b7dd9c3f909b9e129280b134d65c\": rpc error: code = NotFound desc = could not find container \"64861daf18bd9d10f5f698f69dbd8b453e02b7dd9c3f909b9e129280b134d65c\": container with ID starting with 64861daf18bd9d10f5f698f69dbd8b453e02b7dd9c3f909b9e129280b134d65c not found: ID does not exist" Jan 31 10:11:20 crc kubenswrapper[4830]: I0131 10:11:20.264568 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1fd3882-b7b7-4563-a50a-db9356faa4ba" path="/var/lib/kubelet/pods/a1fd3882-b7b7-4563-a50a-db9356faa4ba/volumes" Jan 31 10:11:44 crc kubenswrapper[4830]: I0131 10:11:44.354811 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 10:11:44 crc kubenswrapper[4830]: I0131 10:11:44.355387 4830 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 10:11:44 crc kubenswrapper[4830]: I0131 10:11:44.355432 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 10:11:44 crc kubenswrapper[4830]: I0131 10:11:44.356306 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3"} pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 10:11:44 crc kubenswrapper[4830]: I0131 10:11:44.356355 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" containerID="cri-o://7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3" gracePeriod=600 Jan 31 10:11:44 crc kubenswrapper[4830]: E0131 10:11:44.477106 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:11:44 crc kubenswrapper[4830]: I0131 10:11:44.959832 4830 generic.go:334] "Generic (PLEG): container finished" podID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3" exitCode=0 Jan 31 10:11:44 crc kubenswrapper[4830]: I0131 10:11:44.959885 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerDied","Data":"7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3"} Jan 31 10:11:44 crc kubenswrapper[4830]: I0131 10:11:44.960489 4830 scope.go:117] "RemoveContainer" containerID="0b092ca3bb9b9a206ab1c26c23ac27ed470668236cc5c21830f6134f0e65d665" Jan 31 10:11:44 crc kubenswrapper[4830]: I0131 10:11:44.961582 4830 scope.go:117] "RemoveContainer" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3" Jan 31 10:11:44 crc kubenswrapper[4830]: E0131 10:11:44.961983 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:11:56 crc kubenswrapper[4830]: I0131 10:11:56.263563 4830 scope.go:117] "RemoveContainer" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3" Jan 31 10:11:56 crc kubenswrapper[4830]: E0131 10:11:56.266355 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:12:07 crc kubenswrapper[4830]: I0131 10:12:07.252072 4830 scope.go:117] "RemoveContainer" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3" Jan 31 10:12:07 crc kubenswrapper[4830]: E0131 10:12:07.252953 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:12:21 crc kubenswrapper[4830]: I0131 10:12:21.252583 4830 scope.go:117] "RemoveContainer" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3" Jan 31 10:12:21 crc kubenswrapper[4830]: E0131 10:12:21.253502 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:12:35 crc kubenswrapper[4830]: I0131 10:12:35.252429 4830 scope.go:117] "RemoveContainer" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3" Jan 31 10:12:35 crc kubenswrapper[4830]: E0131 10:12:35.253197 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:12:46 crc kubenswrapper[4830]: I0131 10:12:46.265532 4830 scope.go:117] "RemoveContainer" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3" Jan 31 10:12:46 crc kubenswrapper[4830]: E0131 10:12:46.266870 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:12:57 crc kubenswrapper[4830]: I0131 10:12:57.251547 4830 scope.go:117] "RemoveContainer" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3" Jan 31 10:12:57 crc kubenswrapper[4830]: E0131 10:12:57.252306 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:13:12 crc kubenswrapper[4830]: I0131 10:13:12.252109 4830 scope.go:117] "RemoveContainer" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3" Jan 31 10:13:12 crc kubenswrapper[4830]: E0131 10:13:12.253049 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:13:26 crc kubenswrapper[4830]: I0131 10:13:26.261428 4830 scope.go:117] "RemoveContainer" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3" Jan 31 10:13:26 crc kubenswrapper[4830]: E0131 10:13:26.262525 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:13:37 crc kubenswrapper[4830]: I0131 10:13:37.251593 4830 scope.go:117] "RemoveContainer" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3" Jan 31 10:13:37 crc kubenswrapper[4830]: E0131 10:13:37.252534 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:13:51 crc kubenswrapper[4830]: I0131 10:13:51.251694 4830 scope.go:117] "RemoveContainer" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3" Jan 31 10:13:51 crc kubenswrapper[4830]: E0131 10:13:51.252520 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:14:02 crc kubenswrapper[4830]: I0131 10:14:02.251445 4830 scope.go:117] "RemoveContainer" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3" Jan 31 10:14:02 crc kubenswrapper[4830]: E0131 10:14:02.253234 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" 
podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:14:16 crc kubenswrapper[4830]: I0131 10:14:16.258813 4830 scope.go:117] "RemoveContainer" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3" Jan 31 10:14:16 crc kubenswrapper[4830]: E0131 10:14:16.259823 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:14:30 crc kubenswrapper[4830]: I0131 10:14:30.252029 4830 scope.go:117] "RemoveContainer" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3" Jan 31 10:14:30 crc kubenswrapper[4830]: E0131 10:14:30.252915 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:14:44 crc kubenswrapper[4830]: I0131 10:14:44.251458 4830 scope.go:117] "RemoveContainer" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3" Jan 31 10:14:44 crc kubenswrapper[4830]: E0131 10:14:44.252314 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:14:57 crc kubenswrapper[4830]: I0131 10:14:57.251122 4830 scope.go:117] "RemoveContainer" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3" Jan 31 10:14:57 crc kubenswrapper[4830]: E0131 10:14:57.251987 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:15:00 crc kubenswrapper[4830]: I0131 10:15:00.190268 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497575-bnsfk"] Jan 31 10:15:00 crc kubenswrapper[4830]: E0131 10:15:00.192318 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fd3882-b7b7-4563-a50a-db9356faa4ba" containerName="extract-utilities" Jan 31 10:15:00 crc kubenswrapper[4830]: I0131 10:15:00.192342 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fd3882-b7b7-4563-a50a-db9356faa4ba" containerName="extract-utilities" Jan 31 10:15:00 crc kubenswrapper[4830]: E0131 10:15:00.192367 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fd3882-b7b7-4563-a50a-db9356faa4ba" containerName="registry-server" Jan 31 10:15:00 crc 
Jan 31 10:15:00 crc kubenswrapper[4830]: I0131 10:15:00.192377 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fd3882-b7b7-4563-a50a-db9356faa4ba" containerName="registry-server"
Jan 31 10:15:00 crc kubenswrapper[4830]: E0131 10:15:00.192399 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fd3882-b7b7-4563-a50a-db9356faa4ba" containerName="extract-content"
Jan 31 10:15:00 crc kubenswrapper[4830]: I0131 10:15:00.192406 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fd3882-b7b7-4563-a50a-db9356faa4ba" containerName="extract-content"
Jan 31 10:15:00 crc kubenswrapper[4830]: I0131 10:15:00.192816 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1fd3882-b7b7-4563-a50a-db9356faa4ba" containerName="registry-server"
Jan 31 10:15:00 crc kubenswrapper[4830]: I0131 10:15:00.194241 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497575-bnsfk"
Jan 31 10:15:00 crc kubenswrapper[4830]: I0131 10:15:00.199192 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 31 10:15:00 crc kubenswrapper[4830]: I0131 10:15:00.199407 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 31 10:15:00 crc kubenswrapper[4830]: I0131 10:15:00.207078 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497575-bnsfk"]
Jan 31 10:15:00 crc kubenswrapper[4830]: I0131 10:15:00.340168 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svbz8\" (UniqueName: \"kubernetes.io/projected/23b43d17-2ab3-44b1-893c-3dbc72e77d51-kube-api-access-svbz8\") pod \"collect-profiles-29497575-bnsfk\" (UID: \"23b43d17-2ab3-44b1-893c-3dbc72e77d51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497575-bnsfk"
Jan 31 10:15:00 crc kubenswrapper[4830]: I0131 10:15:00.340983 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23b43d17-2ab3-44b1-893c-3dbc72e77d51-secret-volume\") pod \"collect-profiles-29497575-bnsfk\" (UID: \"23b43d17-2ab3-44b1-893c-3dbc72e77d51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497575-bnsfk"
Jan 31 10:15:00 crc kubenswrapper[4830]: I0131 10:15:00.341195 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23b43d17-2ab3-44b1-893c-3dbc72e77d51-config-volume\") pod \"collect-profiles-29497575-bnsfk\" (UID: \"23b43d17-2ab3-44b1-893c-3dbc72e77d51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497575-bnsfk"
Jan 31 10:15:00 crc kubenswrapper[4830]: I0131 10:15:00.444309 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23b43d17-2ab3-44b1-893c-3dbc72e77d51-secret-volume\") pod \"collect-profiles-29497575-bnsfk\" (UID: \"23b43d17-2ab3-44b1-893c-3dbc72e77d51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497575-bnsfk"
Jan 31 10:15:00 crc kubenswrapper[4830]: I0131 10:15:00.444395 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23b43d17-2ab3-44b1-893c-3dbc72e77d51-config-volume\") pod \"collect-profiles-29497575-bnsfk\" (UID: \"23b43d17-2ab3-44b1-893c-3dbc72e77d51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497575-bnsfk"
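The VerifyControllerAttachedVolume → MountVolume progression above (and the UnmountVolume → TearDown → "Volume detached" mirror image once the pod finishes, a few entries below) is kubelet's volume manager reconciling desired state against actual state: anything desired but not yet mounted gets a mount operation, anything mounted but no longer desired gets torn down. A much-simplified sketch of that reconcile shape — the types and names here are illustrative, not kubelet's real ones, which also track attach/detach, SELinux relabeling and remounts:

```go
package main

import "fmt"

type mount struct{ volume, pod string }

// reconcile drives actual toward desired, printing in the same order the
// reconciler_common.go log lines appear.
func reconcile(desired, actual map[string]mount) {
	// Desired but unmounted -> "MountVolume started", then SetUp succeeds.
	for key, m := range desired {
		if _, ok := actual[key]; !ok {
			fmt.Printf("MountVolume started for volume %q pod %q\n", m.volume, m.pod)
			actual[key] = m
		}
	}
	// Mounted but no longer desired -> "UnmountVolume started", then detach.
	for key, m := range actual {
		if _, ok := desired[key]; !ok {
			fmt.Printf("UnmountVolume started for volume %q pod %q\n", m.volume, m.pod)
			delete(actual, key)
		}
	}
}

func main() {
	desired := map[string]mount{"cfg": {"config-volume", "collect-profiles-29497575-bnsfk"}}
	actual := map[string]mount{}
	reconcile(desired, actual) // mounts config-volume
	delete(desired, "cfg")     // pod deleted from the API
	reconcile(desired, actual) // unmounts it again
}
```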
\"kubernetes.io/configmap/23b43d17-2ab3-44b1-893c-3dbc72e77d51-config-volume\") pod \"collect-profiles-29497575-bnsfk\" (UID: \"23b43d17-2ab3-44b1-893c-3dbc72e77d51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497575-bnsfk" Jan 31 10:15:00 crc kubenswrapper[4830]: I0131 10:15:00.444586 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svbz8\" (UniqueName: \"kubernetes.io/projected/23b43d17-2ab3-44b1-893c-3dbc72e77d51-kube-api-access-svbz8\") pod \"collect-profiles-29497575-bnsfk\" (UID: \"23b43d17-2ab3-44b1-893c-3dbc72e77d51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497575-bnsfk" Jan 31 10:15:00 crc kubenswrapper[4830]: I0131 10:15:00.445464 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23b43d17-2ab3-44b1-893c-3dbc72e77d51-config-volume\") pod \"collect-profiles-29497575-bnsfk\" (UID: \"23b43d17-2ab3-44b1-893c-3dbc72e77d51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497575-bnsfk" Jan 31 10:15:00 crc kubenswrapper[4830]: I0131 10:15:00.451462 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23b43d17-2ab3-44b1-893c-3dbc72e77d51-secret-volume\") pod \"collect-profiles-29497575-bnsfk\" (UID: \"23b43d17-2ab3-44b1-893c-3dbc72e77d51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497575-bnsfk" Jan 31 10:15:00 crc kubenswrapper[4830]: I0131 10:15:00.463274 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svbz8\" (UniqueName: \"kubernetes.io/projected/23b43d17-2ab3-44b1-893c-3dbc72e77d51-kube-api-access-svbz8\") pod \"collect-profiles-29497575-bnsfk\" (UID: \"23b43d17-2ab3-44b1-893c-3dbc72e77d51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497575-bnsfk" Jan 31 10:15:00 crc kubenswrapper[4830]: I0131 10:15:00.525667 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497575-bnsfk" Jan 31 10:15:01 crc kubenswrapper[4830]: I0131 10:15:01.078100 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497575-bnsfk"] Jan 31 10:15:01 crc kubenswrapper[4830]: I0131 10:15:01.223981 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497575-bnsfk" event={"ID":"23b43d17-2ab3-44b1-893c-3dbc72e77d51","Type":"ContainerStarted","Data":"173cda9f6cf04847aa6639709080c8b9ab8bfdad10ff9e10aaa8599148b3da9e"} Jan 31 10:15:02 crc kubenswrapper[4830]: I0131 10:15:02.238943 4830 generic.go:334] "Generic (PLEG): container finished" podID="23b43d17-2ab3-44b1-893c-3dbc72e77d51" containerID="f5aad90e02a24c26638b183f9a3d317442af63b06a9f01a78c373e39f8d42a61" exitCode=0 Jan 31 10:15:02 crc kubenswrapper[4830]: I0131 10:15:02.239008 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497575-bnsfk" event={"ID":"23b43d17-2ab3-44b1-893c-3dbc72e77d51","Type":"ContainerDied","Data":"f5aad90e02a24c26638b183f9a3d317442af63b06a9f01a78c373e39f8d42a61"} Jan 31 10:15:03 crc kubenswrapper[4830]: I0131 10:15:03.700404 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497575-bnsfk" Jan 31 10:15:03 crc kubenswrapper[4830]: I0131 10:15:03.831926 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23b43d17-2ab3-44b1-893c-3dbc72e77d51-secret-volume\") pod \"23b43d17-2ab3-44b1-893c-3dbc72e77d51\" (UID: \"23b43d17-2ab3-44b1-893c-3dbc72e77d51\") " Jan 31 10:15:03 crc kubenswrapper[4830]: I0131 10:15:03.832312 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svbz8\" (UniqueName: \"kubernetes.io/projected/23b43d17-2ab3-44b1-893c-3dbc72e77d51-kube-api-access-svbz8\") pod \"23b43d17-2ab3-44b1-893c-3dbc72e77d51\" (UID: \"23b43d17-2ab3-44b1-893c-3dbc72e77d51\") " Jan 31 10:15:03 crc kubenswrapper[4830]: I0131 10:15:03.832602 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23b43d17-2ab3-44b1-893c-3dbc72e77d51-config-volume\") pod \"23b43d17-2ab3-44b1-893c-3dbc72e77d51\" (UID: \"23b43d17-2ab3-44b1-893c-3dbc72e77d51\") " Jan 31 10:15:03 crc kubenswrapper[4830]: I0131 10:15:03.833929 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23b43d17-2ab3-44b1-893c-3dbc72e77d51-config-volume" (OuterVolumeSpecName: "config-volume") pod "23b43d17-2ab3-44b1-893c-3dbc72e77d51" (UID: "23b43d17-2ab3-44b1-893c-3dbc72e77d51"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 10:15:03 crc kubenswrapper[4830]: I0131 10:15:03.837364 4830 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23b43d17-2ab3-44b1-893c-3dbc72e77d51-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 10:15:03 crc kubenswrapper[4830]: I0131 10:15:03.841968 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23b43d17-2ab3-44b1-893c-3dbc72e77d51-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "23b43d17-2ab3-44b1-893c-3dbc72e77d51" (UID: "23b43d17-2ab3-44b1-893c-3dbc72e77d51"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 10:15:03 crc kubenswrapper[4830]: I0131 10:15:03.842614 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23b43d17-2ab3-44b1-893c-3dbc72e77d51-kube-api-access-svbz8" (OuterVolumeSpecName: "kube-api-access-svbz8") pod "23b43d17-2ab3-44b1-893c-3dbc72e77d51" (UID: "23b43d17-2ab3-44b1-893c-3dbc72e77d51"). InnerVolumeSpecName "kube-api-access-svbz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 10:15:03 crc kubenswrapper[4830]: I0131 10:15:03.939666 4830 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23b43d17-2ab3-44b1-893c-3dbc72e77d51-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 10:15:03 crc kubenswrapper[4830]: I0131 10:15:03.939706 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svbz8\" (UniqueName: \"kubernetes.io/projected/23b43d17-2ab3-44b1-893c-3dbc72e77d51-kube-api-access-svbz8\") on node \"crc\" DevicePath \"\"" Jan 31 10:15:04 crc kubenswrapper[4830]: I0131 10:15:04.268450 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497575-bnsfk" event={"ID":"23b43d17-2ab3-44b1-893c-3dbc72e77d51","Type":"ContainerDied","Data":"173cda9f6cf04847aa6639709080c8b9ab8bfdad10ff9e10aaa8599148b3da9e"} Jan 31 10:15:04 crc kubenswrapper[4830]: I0131 10:15:04.268876 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="173cda9f6cf04847aa6639709080c8b9ab8bfdad10ff9e10aaa8599148b3da9e" Jan 31 10:15:04 crc kubenswrapper[4830]: I0131 10:15:04.268939 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497575-bnsfk" Jan 31 10:15:04 crc kubenswrapper[4830]: I0131 10:15:04.784543 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497530-xhdxr"] Jan 31 10:15:04 crc kubenswrapper[4830]: I0131 10:15:04.794833 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497530-xhdxr"] Jan 31 10:15:06 crc kubenswrapper[4830]: I0131 10:15:06.272263 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6a570c9-bd20-4f17-b62f-15eae189fedc" path="/var/lib/kubelet/pods/e6a570c9-bd20-4f17-b62f-15eae189fedc/volumes" Jan 31 10:15:08 crc kubenswrapper[4830]: I0131 10:15:08.251755 4830 scope.go:117] "RemoveContainer" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3" Jan 31 10:15:08 crc kubenswrapper[4830]: E0131 10:15:08.252847 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:15:15 crc kubenswrapper[4830]: I0131 10:15:15.305106 4830 scope.go:117] "RemoveContainer" containerID="c6c8b24d2fdfb99982a546cf48b0c64c196f856ca919c18652c658185e58816a" Jan 31 10:15:20 crc kubenswrapper[4830]: I0131 10:15:20.252533 4830 scope.go:117] "RemoveContainer" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3" Jan 31 10:15:20 crc kubenswrapper[4830]: E0131 10:15:20.253580 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 
Jan 31 10:15:33 crc kubenswrapper[4830]: I0131 10:15:33.251822 4830 scope.go:117] "RemoveContainer" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3"
Jan 31 10:15:33 crc kubenswrapper[4830]: E0131 10:15:33.252949 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 10:15:44 crc kubenswrapper[4830]: I0131 10:15:44.252254 4830 scope.go:117] "RemoveContainer" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3"
Jan 31 10:15:44 crc kubenswrapper[4830]: E0131 10:15:44.253200 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 10:15:58 crc kubenswrapper[4830]: I0131 10:15:58.252637 4830 scope.go:117] "RemoveContainer" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3"
Jan 31 10:15:58 crc kubenswrapper[4830]: E0131 10:15:58.254032 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 10:16:09 crc kubenswrapper[4830]: I0131 10:16:09.251532 4830 scope.go:117] "RemoveContainer" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3"
Jan 31 10:16:09 crc kubenswrapper[4830]: E0131 10:16:09.252373 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 10:16:17 crc kubenswrapper[4830]: I0131 10:16:17.705697 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d6zmz"]
Jan 31 10:16:17 crc kubenswrapper[4830]: E0131 10:16:17.706867 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b43d17-2ab3-44b1-893c-3dbc72e77d51" containerName="collect-profiles"
Jan 31 10:16:17 crc kubenswrapper[4830]: I0131 10:16:17.706883 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b43d17-2ab3-44b1-893c-3dbc72e77d51" containerName="collect-profiles"
Jan 31 10:16:17 crc kubenswrapper[4830]: I0131 10:16:17.707196 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="23b43d17-2ab3-44b1-893c-3dbc72e77d51" containerName="collect-profiles"
Jan 31 10:16:17 crc kubenswrapper[4830]: I0131 10:16:17.709385 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d6zmz"
Jan 31 10:16:17 crc kubenswrapper[4830]: I0131 10:16:17.720078 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d6zmz"]
Jan 31 10:16:17 crc kubenswrapper[4830]: I0131 10:16:17.797987 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8157ef8-481b-405d-a740-0e0797d1d178-utilities\") pod \"redhat-operators-d6zmz\" (UID: \"d8157ef8-481b-405d-a740-0e0797d1d178\") " pod="openshift-marketplace/redhat-operators-d6zmz"
Jan 31 10:16:17 crc kubenswrapper[4830]: I0131 10:16:17.798183 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8157ef8-481b-405d-a740-0e0797d1d178-catalog-content\") pod \"redhat-operators-d6zmz\" (UID: \"d8157ef8-481b-405d-a740-0e0797d1d178\") " pod="openshift-marketplace/redhat-operators-d6zmz"
Jan 31 10:16:17 crc kubenswrapper[4830]: I0131 10:16:17.798297 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt8bg\" (UniqueName: \"kubernetes.io/projected/d8157ef8-481b-405d-a740-0e0797d1d178-kube-api-access-xt8bg\") pod \"redhat-operators-d6zmz\" (UID: \"d8157ef8-481b-405d-a740-0e0797d1d178\") " pod="openshift-marketplace/redhat-operators-d6zmz"
Jan 31 10:16:17 crc kubenswrapper[4830]: I0131 10:16:17.900043 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8157ef8-481b-405d-a740-0e0797d1d178-utilities\") pod \"redhat-operators-d6zmz\" (UID: \"d8157ef8-481b-405d-a740-0e0797d1d178\") " pod="openshift-marketplace/redhat-operators-d6zmz"
Jan 31 10:16:17 crc kubenswrapper[4830]: I0131 10:16:17.900527 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8157ef8-481b-405d-a740-0e0797d1d178-catalog-content\") pod \"redhat-operators-d6zmz\" (UID: \"d8157ef8-481b-405d-a740-0e0797d1d178\") " pod="openshift-marketplace/redhat-operators-d6zmz"
Jan 31 10:16:17 crc kubenswrapper[4830]: I0131 10:16:17.900616 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt8bg\" (UniqueName: \"kubernetes.io/projected/d8157ef8-481b-405d-a740-0e0797d1d178-kube-api-access-xt8bg\") pod \"redhat-operators-d6zmz\" (UID: \"d8157ef8-481b-405d-a740-0e0797d1d178\") " pod="openshift-marketplace/redhat-operators-d6zmz"
Jan 31 10:16:17 crc kubenswrapper[4830]: I0131 10:16:17.900714 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8157ef8-481b-405d-a740-0e0797d1d178-utilities\") pod \"redhat-operators-d6zmz\" (UID: \"d8157ef8-481b-405d-a740-0e0797d1d178\") " pod="openshift-marketplace/redhat-operators-d6zmz"
Jan 31 10:16:17 crc kubenswrapper[4830]: I0131 10:16:17.900926 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8157ef8-481b-405d-a740-0e0797d1d178-catalog-content\") pod \"redhat-operators-d6zmz\" (UID: \"d8157ef8-481b-405d-a740-0e0797d1d178\") " pod="openshift-marketplace/redhat-operators-d6zmz"
Jan 31 10:16:17 crc kubenswrapper[4830]: I0131 10:16:17.921227 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt8bg\" (UniqueName: \"kubernetes.io/projected/d8157ef8-481b-405d-a740-0e0797d1d178-kube-api-access-xt8bg\") pod \"redhat-operators-d6zmz\" (UID: \"d8157ef8-481b-405d-a740-0e0797d1d178\") " pod="openshift-marketplace/redhat-operators-d6zmz"
Jan 31 10:16:18 crc kubenswrapper[4830]: I0131 10:16:18.071385 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d6zmz"
Jan 31 10:16:18 crc kubenswrapper[4830]: I0131 10:16:18.596030 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d6zmz"]
Jan 31 10:16:19 crc kubenswrapper[4830]: I0131 10:16:19.115566 4830 generic.go:334] "Generic (PLEG): container finished" podID="d8157ef8-481b-405d-a740-0e0797d1d178" containerID="f5ea1d9c43708c08fb2a8a11a57b004599742990fb1fa897ee189110d84b96f0" exitCode=0
Jan 31 10:16:19 crc kubenswrapper[4830]: I0131 10:16:19.115742 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6zmz" event={"ID":"d8157ef8-481b-405d-a740-0e0797d1d178","Type":"ContainerDied","Data":"f5ea1d9c43708c08fb2a8a11a57b004599742990fb1fa897ee189110d84b96f0"}
Jan 31 10:16:19 crc kubenswrapper[4830]: I0131 10:16:19.115892 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6zmz" event={"ID":"d8157ef8-481b-405d-a740-0e0797d1d178","Type":"ContainerStarted","Data":"3b5d6930d86049fac83a57444bb495be2074d3fd3147ab2d701bbacce1cf02ac"}
Jan 31 10:16:19 crc kubenswrapper[4830]: I0131 10:16:19.117868 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 31 10:16:20 crc kubenswrapper[4830]: I0131 10:16:20.130477 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6zmz" event={"ID":"d8157ef8-481b-405d-a740-0e0797d1d178","Type":"ContainerStarted","Data":"c4cc2b8ba78f36c810266fc418700ecfebf0453450d991389c8d78fa407070d4"}
Jan 31 10:16:22 crc kubenswrapper[4830]: I0131 10:16:22.252640 4830 scope.go:117] "RemoveContainer" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3"
Jan 31 10:16:22 crc kubenswrapper[4830]: E0131 10:16:22.253784 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 10:16:25 crc kubenswrapper[4830]: I0131 10:16:25.187731 4830 generic.go:334] "Generic (PLEG): container finished" podID="d8157ef8-481b-405d-a740-0e0797d1d178" containerID="c4cc2b8ba78f36c810266fc418700ecfebf0453450d991389c8d78fa407070d4" exitCode=0
Jan 31 10:16:25 crc kubenswrapper[4830]: I0131 10:16:25.187774 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6zmz" event={"ID":"d8157ef8-481b-405d-a740-0e0797d1d178","Type":"ContainerDied","Data":"c4cc2b8ba78f36c810266fc418700ecfebf0453450d991389c8d78fa407070d4"}
Jan 31 10:16:26 crc kubenswrapper[4830]: I0131 10:16:26.201125 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6zmz" event={"ID":"d8157ef8-481b-405d-a740-0e0797d1d178","Type":"ContainerStarted","Data":"ffaddb459143e869baaa98e409ebc1dd6d6676f5c442ef15fb7d36de11fa605f"}
Jan 31 10:16:26 crc kubenswrapper[4830]: I0131 10:16:26.233424 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d6zmz" podStartSLOduration=2.748026272 podStartE2EDuration="9.233387486s" podCreationTimestamp="2026-01-31 10:16:17 +0000 UTC" firstStartedPulling="2026-01-31 10:16:19.117460409 +0000 UTC m=+4523.610822851" lastFinishedPulling="2026-01-31 10:16:25.602821623 +0000 UTC m=+4530.096184065" observedRunningTime="2026-01-31 10:16:26.22510003 +0000 UTC m=+4530.718462462" watchObservedRunningTime="2026-01-31 10:16:26.233387486 +0000 UTC m=+4530.726749928"
Jan 31 10:16:28 crc kubenswrapper[4830]: I0131 10:16:28.072080 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d6zmz"
Jan 31 10:16:28 crc kubenswrapper[4830]: I0131 10:16:28.072125 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d6zmz"
Jan 31 10:16:29 crc kubenswrapper[4830]: I0131 10:16:29.129016 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d6zmz" podUID="d8157ef8-481b-405d-a740-0e0797d1d178" containerName="registry-server" probeResult="failure" output=<
Jan 31 10:16:29 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s
Jan 31 10:16:29 crc kubenswrapper[4830]: >
Jan 31 10:16:37 crc kubenswrapper[4830]: I0131 10:16:37.252328 4830 scope.go:117] "RemoveContainer" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3"
Jan 31 10:16:37 crc kubenswrapper[4830]: E0131 10:16:37.253190 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc"
Jan 31 10:16:39 crc kubenswrapper[4830]: I0131 10:16:39.138241 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d6zmz" podUID="d8157ef8-481b-405d-a740-0e0797d1d178" containerName="registry-server" probeResult="failure" output=<
Jan 31 10:16:39 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s
Jan 31 10:16:39 crc kubenswrapper[4830]: >
Jan 31 10:16:48 crc kubenswrapper[4830]: I0131 10:16:48.252237 4830 scope.go:117] "RemoveContainer" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3"
Jan 31 10:16:48 crc kubenswrapper[4830]: I0131 10:16:48.643788 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d6zmz"
Jan 31 10:16:48 crc kubenswrapper[4830]: I0131 10:16:48.717461 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d6zmz"
Jan 31 10:16:48 crc kubenswrapper[4830]: I0131 10:16:48.890568 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d6zmz"]
pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerStarted","Data":"09d1151cbdf8a81aba2b88f7e7cacb0624012aa898ec498a1d6bc161d6e6a9d4"} Jan 31 10:16:50 crc kubenswrapper[4830]: I0131 10:16:50.455618 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d6zmz" podUID="d8157ef8-481b-405d-a740-0e0797d1d178" containerName="registry-server" containerID="cri-o://ffaddb459143e869baaa98e409ebc1dd6d6676f5c442ef15fb7d36de11fa605f" gracePeriod=2 Jan 31 10:16:51 crc kubenswrapper[4830]: I0131 10:16:51.054867 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d6zmz" Jan 31 10:16:51 crc kubenswrapper[4830]: I0131 10:16:51.137694 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8157ef8-481b-405d-a740-0e0797d1d178-utilities\") pod \"d8157ef8-481b-405d-a740-0e0797d1d178\" (UID: \"d8157ef8-481b-405d-a740-0e0797d1d178\") " Jan 31 10:16:51 crc kubenswrapper[4830]: I0131 10:16:51.137800 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8157ef8-481b-405d-a740-0e0797d1d178-catalog-content\") pod \"d8157ef8-481b-405d-a740-0e0797d1d178\" (UID: \"d8157ef8-481b-405d-a740-0e0797d1d178\") " Jan 31 10:16:51 crc kubenswrapper[4830]: I0131 10:16:51.137885 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt8bg\" (UniqueName: \"kubernetes.io/projected/d8157ef8-481b-405d-a740-0e0797d1d178-kube-api-access-xt8bg\") pod \"d8157ef8-481b-405d-a740-0e0797d1d178\" (UID: \"d8157ef8-481b-405d-a740-0e0797d1d178\") " Jan 31 10:16:51 crc kubenswrapper[4830]: I0131 10:16:51.138577 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8157ef8-481b-405d-a740-0e0797d1d178-utilities" (OuterVolumeSpecName: "utilities") pod "d8157ef8-481b-405d-a740-0e0797d1d178" (UID: "d8157ef8-481b-405d-a740-0e0797d1d178"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:16:51 crc kubenswrapper[4830]: I0131 10:16:51.157618 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8157ef8-481b-405d-a740-0e0797d1d178-kube-api-access-xt8bg" (OuterVolumeSpecName: "kube-api-access-xt8bg") pod "d8157ef8-481b-405d-a740-0e0797d1d178" (UID: "d8157ef8-481b-405d-a740-0e0797d1d178"). InnerVolumeSpecName "kube-api-access-xt8bg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 10:16:51 crc kubenswrapper[4830]: I0131 10:16:51.241681 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8157ef8-481b-405d-a740-0e0797d1d178-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 10:16:51 crc kubenswrapper[4830]: I0131 10:16:51.241755 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt8bg\" (UniqueName: \"kubernetes.io/projected/d8157ef8-481b-405d-a740-0e0797d1d178-kube-api-access-xt8bg\") on node \"crc\" DevicePath \"\"" Jan 31 10:16:51 crc kubenswrapper[4830]: I0131 10:16:51.261197 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8157ef8-481b-405d-a740-0e0797d1d178-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8157ef8-481b-405d-a740-0e0797d1d178" (UID: "d8157ef8-481b-405d-a740-0e0797d1d178"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:16:51 crc kubenswrapper[4830]: I0131 10:16:51.346783 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8157ef8-481b-405d-a740-0e0797d1d178-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 10:16:51 crc kubenswrapper[4830]: I0131 10:16:51.467830 4830 generic.go:334] "Generic (PLEG): container finished" podID="d8157ef8-481b-405d-a740-0e0797d1d178" containerID="ffaddb459143e869baaa98e409ebc1dd6d6676f5c442ef15fb7d36de11fa605f" exitCode=0 Jan 31 10:16:51 crc kubenswrapper[4830]: I0131 10:16:51.467892 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d6zmz" Jan 31 10:16:51 crc kubenswrapper[4830]: I0131 10:16:51.467909 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6zmz" event={"ID":"d8157ef8-481b-405d-a740-0e0797d1d178","Type":"ContainerDied","Data":"ffaddb459143e869baaa98e409ebc1dd6d6676f5c442ef15fb7d36de11fa605f"} Jan 31 10:16:51 crc kubenswrapper[4830]: I0131 10:16:51.468507 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6zmz" event={"ID":"d8157ef8-481b-405d-a740-0e0797d1d178","Type":"ContainerDied","Data":"3b5d6930d86049fac83a57444bb495be2074d3fd3147ab2d701bbacce1cf02ac"} Jan 31 10:16:51 crc kubenswrapper[4830]: I0131 10:16:51.468540 4830 scope.go:117] "RemoveContainer" containerID="ffaddb459143e869baaa98e409ebc1dd6d6676f5c442ef15fb7d36de11fa605f" Jan 31 10:16:51 crc kubenswrapper[4830]: I0131 10:16:51.519038 4830 scope.go:117] "RemoveContainer" containerID="c4cc2b8ba78f36c810266fc418700ecfebf0453450d991389c8d78fa407070d4" Jan 31 10:16:51 crc kubenswrapper[4830]: I0131 10:16:51.524198 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d6zmz"] Jan 31 10:16:51 crc kubenswrapper[4830]: I0131 10:16:51.543525 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d6zmz"] Jan 31 10:16:51 crc kubenswrapper[4830]: I0131 10:16:51.581665 4830 scope.go:117] "RemoveContainer" containerID="f5ea1d9c43708c08fb2a8a11a57b004599742990fb1fa897ee189110d84b96f0" Jan 31 10:16:51 crc kubenswrapper[4830]: I0131 10:16:51.631850 4830 scope.go:117] "RemoveContainer" containerID="ffaddb459143e869baaa98e409ebc1dd6d6676f5c442ef15fb7d36de11fa605f" Jan 31 10:16:51 crc kubenswrapper[4830]: E0131 10:16:51.632256 4830 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffaddb459143e869baaa98e409ebc1dd6d6676f5c442ef15fb7d36de11fa605f\": container with ID starting with ffaddb459143e869baaa98e409ebc1dd6d6676f5c442ef15fb7d36de11fa605f not found: ID does not exist" containerID="ffaddb459143e869baaa98e409ebc1dd6d6676f5c442ef15fb7d36de11fa605f" Jan 31 10:16:51 crc kubenswrapper[4830]: I0131 10:16:51.632303 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffaddb459143e869baaa98e409ebc1dd6d6676f5c442ef15fb7d36de11fa605f"} err="failed to get container status \"ffaddb459143e869baaa98e409ebc1dd6d6676f5c442ef15fb7d36de11fa605f\": rpc error: code = NotFound desc = could not find container \"ffaddb459143e869baaa98e409ebc1dd6d6676f5c442ef15fb7d36de11fa605f\": container with ID starting with ffaddb459143e869baaa98e409ebc1dd6d6676f5c442ef15fb7d36de11fa605f not found: ID does not exist" Jan 31 10:16:51 crc kubenswrapper[4830]: I0131 10:16:51.632333 4830 scope.go:117] "RemoveContainer" containerID="c4cc2b8ba78f36c810266fc418700ecfebf0453450d991389c8d78fa407070d4" Jan 31 10:16:51 crc kubenswrapper[4830]: E0131 10:16:51.632588 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4cc2b8ba78f36c810266fc418700ecfebf0453450d991389c8d78fa407070d4\": container with ID starting with c4cc2b8ba78f36c810266fc418700ecfebf0453450d991389c8d78fa407070d4 not found: ID does not exist" containerID="c4cc2b8ba78f36c810266fc418700ecfebf0453450d991389c8d78fa407070d4" Jan 31 10:16:51 crc kubenswrapper[4830]: I0131 10:16:51.632611 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4cc2b8ba78f36c810266fc418700ecfebf0453450d991389c8d78fa407070d4"} err="failed to get container status \"c4cc2b8ba78f36c810266fc418700ecfebf0453450d991389c8d78fa407070d4\": rpc error: code = NotFound desc = could not find container \"c4cc2b8ba78f36c810266fc418700ecfebf0453450d991389c8d78fa407070d4\": container with ID starting with c4cc2b8ba78f36c810266fc418700ecfebf0453450d991389c8d78fa407070d4 not found: ID does not exist" Jan 31 10:16:51 crc kubenswrapper[4830]: I0131 10:16:51.632623 4830 scope.go:117] "RemoveContainer" containerID="f5ea1d9c43708c08fb2a8a11a57b004599742990fb1fa897ee189110d84b96f0" Jan 31 10:16:51 crc kubenswrapper[4830]: E0131 10:16:51.632837 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5ea1d9c43708c08fb2a8a11a57b004599742990fb1fa897ee189110d84b96f0\": container with ID starting with f5ea1d9c43708c08fb2a8a11a57b004599742990fb1fa897ee189110d84b96f0 not found: ID does not exist" containerID="f5ea1d9c43708c08fb2a8a11a57b004599742990fb1fa897ee189110d84b96f0" Jan 31 10:16:51 crc kubenswrapper[4830]: I0131 10:16:51.632857 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5ea1d9c43708c08fb2a8a11a57b004599742990fb1fa897ee189110d84b96f0"} err="failed to get container status \"f5ea1d9c43708c08fb2a8a11a57b004599742990fb1fa897ee189110d84b96f0\": rpc error: code = NotFound desc = could not find container \"f5ea1d9c43708c08fb2a8a11a57b004599742990fb1fa897ee189110d84b96f0\": container with ID starting with f5ea1d9c43708c08fb2a8a11a57b004599742990fb1fa897ee189110d84b96f0 not found: ID does not exist" Jan 31 10:16:52 crc kubenswrapper[4830]: I0131 10:16:52.262494 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod 
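The RemoveContainer → NotFound → "DeleteContainer returned error" triplets above are benign: the containers were already removed along with their sandbox, so the follow-up status/remove calls get a gRPC NotFound from CRI-O, and kubelet just logs it and moves on. Cleanup written against that pattern treats NotFound as success; a generic sketch of the idea (not kubelet's actual code):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeIgnoringNotFound wraps a CRI-style remove call. NotFound means the
// container is already gone, so the cleanup is idempotent: the desired end
// state ("container absent") is reached either way.
func removeIgnoringNotFound(remove func(id string) error, id string) error {
	if err := remove(id); err != nil && status.Code(err) != codes.NotFound {
		return fmt.Errorf("remove container %s: %w", id, err)
	}
	return nil
}

func main() {
	alreadyGone := func(id string) error {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	// Prints <nil>: the NotFound is swallowed, mirroring the log behavior.
	fmt.Println(removeIgnoringNotFound(alreadyGone, "ffaddb45"))
}
```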
volumes dir" podUID="d8157ef8-481b-405d-a740-0e0797d1d178" path="/var/lib/kubelet/pods/d8157ef8-481b-405d-a740-0e0797d1d178/volumes" Jan 31 10:17:31 crc kubenswrapper[4830]: I0131 10:17:31.575791 4830 trace.go:236] Trace[1583391586]: "Calculate volume metrics of ovndbcluster-nb-etc-ovn for pod openstack/ovsdbserver-nb-0" (31-Jan-2026 10:17:30.518) (total time: 1055ms): Jan 31 10:17:31 crc kubenswrapper[4830]: Trace[1583391586]: [1.055738085s] [1.055738085s] END Jan 31 10:19:14 crc kubenswrapper[4830]: I0131 10:19:14.353114 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 10:19:14 crc kubenswrapper[4830]: I0131 10:19:14.353784 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 10:19:44 crc kubenswrapper[4830]: I0131 10:19:44.353071 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 10:19:44 crc kubenswrapper[4830]: I0131 10:19:44.354500 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 10:20:14 crc kubenswrapper[4830]: I0131 10:20:14.353710 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 10:20:14 crc kubenswrapper[4830]: I0131 10:20:14.354443 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 10:20:14 crc kubenswrapper[4830]: I0131 10:20:14.354498 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 10:20:14 crc kubenswrapper[4830]: I0131 10:20:14.355778 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"09d1151cbdf8a81aba2b88f7e7cacb0624012aa898ec498a1d6bc161d6e6a9d4"} pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 10:20:14 crc kubenswrapper[4830]: I0131 10:20:14.355855 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" 
podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" containerID="cri-o://09d1151cbdf8a81aba2b88f7e7cacb0624012aa898ec498a1d6bc161d6e6a9d4" gracePeriod=600 Jan 31 10:20:15 crc kubenswrapper[4830]: I0131 10:20:15.021681 4830 generic.go:334] "Generic (PLEG): container finished" podID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerID="09d1151cbdf8a81aba2b88f7e7cacb0624012aa898ec498a1d6bc161d6e6a9d4" exitCode=0 Jan 31 10:20:15 crc kubenswrapper[4830]: I0131 10:20:15.021769 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerDied","Data":"09d1151cbdf8a81aba2b88f7e7cacb0624012aa898ec498a1d6bc161d6e6a9d4"} Jan 31 10:20:15 crc kubenswrapper[4830]: I0131 10:20:15.022440 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerStarted","Data":"336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14"} Jan 31 10:20:15 crc kubenswrapper[4830]: I0131 10:20:15.022472 4830 scope.go:117] "RemoveContainer" containerID="7ca4e27c0e74098ff8b4f356a070085cc4684687a37c370e31d282c5c11adfc3" Jan 31 10:21:05 crc kubenswrapper[4830]: I0131 10:21:05.534405 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-llmrf"] Jan 31 10:21:05 crc kubenswrapper[4830]: E0131 10:21:05.535903 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8157ef8-481b-405d-a740-0e0797d1d178" containerName="registry-server" Jan 31 10:21:05 crc kubenswrapper[4830]: I0131 10:21:05.535916 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8157ef8-481b-405d-a740-0e0797d1d178" containerName="registry-server" Jan 31 10:21:05 crc kubenswrapper[4830]: E0131 10:21:05.535939 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8157ef8-481b-405d-a740-0e0797d1d178" containerName="extract-content" Jan 31 10:21:05 crc kubenswrapper[4830]: I0131 10:21:05.535946 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8157ef8-481b-405d-a740-0e0797d1d178" containerName="extract-content" Jan 31 10:21:05 crc kubenswrapper[4830]: E0131 10:21:05.535970 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8157ef8-481b-405d-a740-0e0797d1d178" containerName="extract-utilities" Jan 31 10:21:05 crc kubenswrapper[4830]: I0131 10:21:05.535977 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8157ef8-481b-405d-a740-0e0797d1d178" containerName="extract-utilities" Jan 31 10:21:05 crc kubenswrapper[4830]: I0131 10:21:05.536270 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8157ef8-481b-405d-a740-0e0797d1d178" containerName="registry-server" Jan 31 10:21:05 crc kubenswrapper[4830]: I0131 10:21:05.538267 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-llmrf" Jan 31 10:21:05 crc kubenswrapper[4830]: I0131 10:21:05.552893 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-llmrf"] Jan 31 10:21:05 crc kubenswrapper[4830]: I0131 10:21:05.577971 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d-utilities\") pod \"community-operators-llmrf\" (UID: \"e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d\") " pod="openshift-marketplace/community-operators-llmrf" Jan 31 10:21:05 crc kubenswrapper[4830]: I0131 10:21:05.578135 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d-catalog-content\") pod \"community-operators-llmrf\" (UID: \"e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d\") " pod="openshift-marketplace/community-operators-llmrf" Jan 31 10:21:05 crc kubenswrapper[4830]: I0131 10:21:05.578192 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krsz4\" (UniqueName: \"kubernetes.io/projected/e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d-kube-api-access-krsz4\") pod \"community-operators-llmrf\" (UID: \"e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d\") " pod="openshift-marketplace/community-operators-llmrf" Jan 31 10:21:05 crc kubenswrapper[4830]: I0131 10:21:05.679972 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d-utilities\") pod \"community-operators-llmrf\" (UID: \"e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d\") " pod="openshift-marketplace/community-operators-llmrf" Jan 31 10:21:05 crc kubenswrapper[4830]: I0131 10:21:05.680102 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d-catalog-content\") pod \"community-operators-llmrf\" (UID: \"e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d\") " pod="openshift-marketplace/community-operators-llmrf" Jan 31 10:21:05 crc kubenswrapper[4830]: I0131 10:21:05.680150 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krsz4\" (UniqueName: \"kubernetes.io/projected/e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d-kube-api-access-krsz4\") pod \"community-operators-llmrf\" (UID: \"e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d\") " pod="openshift-marketplace/community-operators-llmrf" Jan 31 10:21:05 crc kubenswrapper[4830]: I0131 10:21:05.680782 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d-utilities\") pod \"community-operators-llmrf\" (UID: \"e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d\") " pod="openshift-marketplace/community-operators-llmrf" Jan 31 10:21:05 crc kubenswrapper[4830]: I0131 10:21:05.680846 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d-catalog-content\") pod \"community-operators-llmrf\" (UID: \"e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d\") " pod="openshift-marketplace/community-operators-llmrf" Jan 31 10:21:05 crc kubenswrapper[4830]: I0131 10:21:05.710096 4830 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-krsz4\" (UniqueName: \"kubernetes.io/projected/e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d-kube-api-access-krsz4\") pod \"community-operators-llmrf\" (UID: \"e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d\") " pod="openshift-marketplace/community-operators-llmrf" Jan 31 10:21:05 crc kubenswrapper[4830]: I0131 10:21:05.888039 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-llmrf" Jan 31 10:21:06 crc kubenswrapper[4830]: I0131 10:21:06.662181 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-llmrf"] Jan 31 10:21:07 crc kubenswrapper[4830]: I0131 10:21:07.658900 4830 generic.go:334] "Generic (PLEG): container finished" podID="e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d" containerID="861fa63e4ae2bc8b67086d8f8b28962684859fb30a8f457043038a7d10f0437d" exitCode=0 Jan 31 10:21:07 crc kubenswrapper[4830]: I0131 10:21:07.658962 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llmrf" event={"ID":"e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d","Type":"ContainerDied","Data":"861fa63e4ae2bc8b67086d8f8b28962684859fb30a8f457043038a7d10f0437d"} Jan 31 10:21:07 crc kubenswrapper[4830]: I0131 10:21:07.659214 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llmrf" event={"ID":"e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d","Type":"ContainerStarted","Data":"4d913fef55b7a55db37eb590787fb6feb32e404eb632f0b90b6d12cb71b48d8a"} Jan 31 10:21:08 crc kubenswrapper[4830]: I0131 10:21:08.671953 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llmrf" event={"ID":"e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d","Type":"ContainerStarted","Data":"71d765b6b9b8f59010ff0d3b9f19ec9127335c2fd48764bfb94aadaeb7ae3509"} Jan 31 10:21:10 crc kubenswrapper[4830]: I0131 10:21:10.694373 4830 generic.go:334] "Generic (PLEG): container finished" podID="e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d" containerID="71d765b6b9b8f59010ff0d3b9f19ec9127335c2fd48764bfb94aadaeb7ae3509" exitCode=0 Jan 31 10:21:10 crc kubenswrapper[4830]: I0131 10:21:10.694610 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llmrf" event={"ID":"e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d","Type":"ContainerDied","Data":"71d765b6b9b8f59010ff0d3b9f19ec9127335c2fd48764bfb94aadaeb7ae3509"} Jan 31 10:21:12 crc kubenswrapper[4830]: I0131 10:21:12.717584 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llmrf" event={"ID":"e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d","Type":"ContainerStarted","Data":"d96d08fde123f1b34e118624d38eee88a226d1f645e4cac796f8dd765852cd67"} Jan 31 10:21:12 crc kubenswrapper[4830]: I0131 10:21:12.741924 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-llmrf" podStartSLOduration=4.283704302 podStartE2EDuration="7.741902344s" podCreationTimestamp="2026-01-31 10:21:05 +0000 UTC" firstStartedPulling="2026-01-31 10:21:07.661872703 +0000 UTC m=+4812.155235165" lastFinishedPulling="2026-01-31 10:21:11.120070765 +0000 UTC m=+4815.613433207" observedRunningTime="2026-01-31 10:21:12.740878395 +0000 UTC m=+4817.234240837" watchObservedRunningTime="2026-01-31 10:21:12.741902344 +0000 UTC m=+4817.235264796" Jan 31 10:21:15 crc kubenswrapper[4830]: I0131 10:21:15.890221 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-llmrf" Jan 31 10:21:15 crc kubenswrapper[4830]: I0131 10:21:15.890450 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-llmrf" Jan 31 10:21:15 crc kubenswrapper[4830]: I0131 10:21:15.955801 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-llmrf" Jan 31 10:21:25 crc kubenswrapper[4830]: I0131 10:21:25.949677 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-llmrf" Jan 31 10:21:26 crc kubenswrapper[4830]: I0131 10:21:26.009848 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-llmrf"] Jan 31 10:21:26 crc kubenswrapper[4830]: I0131 10:21:26.878034 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-llmrf" podUID="e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d" containerName="registry-server" containerID="cri-o://d96d08fde123f1b34e118624d38eee88a226d1f645e4cac796f8dd765852cd67" gracePeriod=2 Jan 31 10:21:27 crc kubenswrapper[4830]: I0131 10:21:27.452090 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-llmrf" Jan 31 10:21:27 crc kubenswrapper[4830]: I0131 10:21:27.536438 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d-catalog-content\") pod \"e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d\" (UID: \"e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d\") " Jan 31 10:21:27 crc kubenswrapper[4830]: I0131 10:21:27.536545 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d-utilities\") pod \"e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d\" (UID: \"e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d\") " Jan 31 10:21:27 crc kubenswrapper[4830]: I0131 10:21:27.536573 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krsz4\" (UniqueName: \"kubernetes.io/projected/e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d-kube-api-access-krsz4\") pod \"e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d\" (UID: \"e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d\") " Jan 31 10:21:27 crc kubenswrapper[4830]: I0131 10:21:27.537500 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d-utilities" (OuterVolumeSpecName: "utilities") pod "e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d" (UID: "e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:21:27 crc kubenswrapper[4830]: I0131 10:21:27.542793 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d-kube-api-access-krsz4" (OuterVolumeSpecName: "kube-api-access-krsz4") pod "e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d" (UID: "e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d"). InnerVolumeSpecName "kube-api-access-krsz4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 10:21:27 crc kubenswrapper[4830]: I0131 10:21:27.586616 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d" (UID: "e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:21:27 crc kubenswrapper[4830]: I0131 10:21:27.639391 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 10:21:27 crc kubenswrapper[4830]: I0131 10:21:27.639428 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 10:21:27 crc kubenswrapper[4830]: I0131 10:21:27.639438 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krsz4\" (UniqueName: \"kubernetes.io/projected/e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d-kube-api-access-krsz4\") on node \"crc\" DevicePath \"\"" Jan 31 10:21:27 crc kubenswrapper[4830]: I0131 10:21:27.891793 4830 generic.go:334] "Generic (PLEG): container finished" podID="e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d" containerID="d96d08fde123f1b34e118624d38eee88a226d1f645e4cac796f8dd765852cd67" exitCode=0 Jan 31 10:21:27 crc kubenswrapper[4830]: I0131 10:21:27.891841 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llmrf" event={"ID":"e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d","Type":"ContainerDied","Data":"d96d08fde123f1b34e118624d38eee88a226d1f645e4cac796f8dd765852cd67"} Jan 31 10:21:27 crc kubenswrapper[4830]: I0131 10:21:27.891877 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llmrf" event={"ID":"e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d","Type":"ContainerDied","Data":"4d913fef55b7a55db37eb590787fb6feb32e404eb632f0b90b6d12cb71b48d8a"} Jan 31 10:21:27 crc kubenswrapper[4830]: I0131 10:21:27.891902 4830 scope.go:117] "RemoveContainer" containerID="d96d08fde123f1b34e118624d38eee88a226d1f645e4cac796f8dd765852cd67" Jan 31 10:21:27 crc kubenswrapper[4830]: I0131 10:21:27.892635 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-llmrf" Jan 31 10:21:27 crc kubenswrapper[4830]: I0131 10:21:27.918218 4830 scope.go:117] "RemoveContainer" containerID="71d765b6b9b8f59010ff0d3b9f19ec9127335c2fd48764bfb94aadaeb7ae3509" Jan 31 10:21:27 crc kubenswrapper[4830]: I0131 10:21:27.945020 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-llmrf"] Jan 31 10:21:27 crc kubenswrapper[4830]: I0131 10:21:27.954869 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-llmrf"] Jan 31 10:21:27 crc kubenswrapper[4830]: I0131 10:21:27.973928 4830 scope.go:117] "RemoveContainer" containerID="861fa63e4ae2bc8b67086d8f8b28962684859fb30a8f457043038a7d10f0437d" Jan 31 10:21:27 crc kubenswrapper[4830]: I0131 10:21:27.999321 4830 scope.go:117] "RemoveContainer" containerID="d96d08fde123f1b34e118624d38eee88a226d1f645e4cac796f8dd765852cd67" Jan 31 10:21:27 crc kubenswrapper[4830]: E0131 10:21:27.999684 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d96d08fde123f1b34e118624d38eee88a226d1f645e4cac796f8dd765852cd67\": container with ID starting with d96d08fde123f1b34e118624d38eee88a226d1f645e4cac796f8dd765852cd67 not found: ID does not exist" containerID="d96d08fde123f1b34e118624d38eee88a226d1f645e4cac796f8dd765852cd67" Jan 31 10:21:27 crc kubenswrapper[4830]: I0131 10:21:27.999811 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d96d08fde123f1b34e118624d38eee88a226d1f645e4cac796f8dd765852cd67"} err="failed to get container status \"d96d08fde123f1b34e118624d38eee88a226d1f645e4cac796f8dd765852cd67\": rpc error: code = NotFound desc = could not find container \"d96d08fde123f1b34e118624d38eee88a226d1f645e4cac796f8dd765852cd67\": container with ID starting with d96d08fde123f1b34e118624d38eee88a226d1f645e4cac796f8dd765852cd67 not found: ID does not exist" Jan 31 10:21:27 crc kubenswrapper[4830]: I0131 10:21:27.999898 4830 scope.go:117] "RemoveContainer" containerID="71d765b6b9b8f59010ff0d3b9f19ec9127335c2fd48764bfb94aadaeb7ae3509" Jan 31 10:21:28 crc kubenswrapper[4830]: E0131 10:21:28.000167 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71d765b6b9b8f59010ff0d3b9f19ec9127335c2fd48764bfb94aadaeb7ae3509\": container with ID starting with 71d765b6b9b8f59010ff0d3b9f19ec9127335c2fd48764bfb94aadaeb7ae3509 not found: ID does not exist" containerID="71d765b6b9b8f59010ff0d3b9f19ec9127335c2fd48764bfb94aadaeb7ae3509" Jan 31 10:21:28 crc kubenswrapper[4830]: I0131 10:21:28.000199 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71d765b6b9b8f59010ff0d3b9f19ec9127335c2fd48764bfb94aadaeb7ae3509"} err="failed to get container status \"71d765b6b9b8f59010ff0d3b9f19ec9127335c2fd48764bfb94aadaeb7ae3509\": rpc error: code = NotFound desc = could not find container \"71d765b6b9b8f59010ff0d3b9f19ec9127335c2fd48764bfb94aadaeb7ae3509\": container with ID starting with 71d765b6b9b8f59010ff0d3b9f19ec9127335c2fd48764bfb94aadaeb7ae3509 not found: ID does not exist" Jan 31 10:21:28 crc kubenswrapper[4830]: I0131 10:21:28.000216 4830 scope.go:117] "RemoveContainer" containerID="861fa63e4ae2bc8b67086d8f8b28962684859fb30a8f457043038a7d10f0437d" Jan 31 10:21:28 crc kubenswrapper[4830]: E0131 10:21:28.000630 4830 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"861fa63e4ae2bc8b67086d8f8b28962684859fb30a8f457043038a7d10f0437d\": container with ID starting with 861fa63e4ae2bc8b67086d8f8b28962684859fb30a8f457043038a7d10f0437d not found: ID does not exist" containerID="861fa63e4ae2bc8b67086d8f8b28962684859fb30a8f457043038a7d10f0437d" Jan 31 10:21:28 crc kubenswrapper[4830]: I0131 10:21:28.000656 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"861fa63e4ae2bc8b67086d8f8b28962684859fb30a8f457043038a7d10f0437d"} err="failed to get container status \"861fa63e4ae2bc8b67086d8f8b28962684859fb30a8f457043038a7d10f0437d\": rpc error: code = NotFound desc = could not find container \"861fa63e4ae2bc8b67086d8f8b28962684859fb30a8f457043038a7d10f0437d\": container with ID starting with 861fa63e4ae2bc8b67086d8f8b28962684859fb30a8f457043038a7d10f0437d not found: ID does not exist" Jan 31 10:21:28 crc kubenswrapper[4830]: I0131 10:21:28.264581 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d" path="/var/lib/kubelet/pods/e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d/volumes" Jan 31 10:21:31 crc kubenswrapper[4830]: E0131 10:21:31.176143 4830 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.53:42768->38.102.83.53:38781: write tcp 38.102.83.53:42768->38.102.83.53:38781: write: broken pipe Jan 31 10:21:54 crc kubenswrapper[4830]: I0131 10:21:54.653552 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cvxjk"] Jan 31 10:21:54 crc kubenswrapper[4830]: E0131 10:21:54.654992 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d" containerName="registry-server" Jan 31 10:21:54 crc kubenswrapper[4830]: I0131 10:21:54.655010 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d" containerName="registry-server" Jan 31 10:21:54 crc kubenswrapper[4830]: E0131 10:21:54.655034 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d" containerName="extract-utilities" Jan 31 10:21:54 crc kubenswrapper[4830]: I0131 10:21:54.655042 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d" containerName="extract-utilities" Jan 31 10:21:54 crc kubenswrapper[4830]: E0131 10:21:54.655106 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d" containerName="extract-content" Jan 31 10:21:54 crc kubenswrapper[4830]: I0131 10:21:54.655117 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d" containerName="extract-content" Jan 31 10:21:54 crc kubenswrapper[4830]: I0131 10:21:54.655396 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="e11bca15-7fcb-42e2-8c1b-7d46ebeeea1d" containerName="registry-server" Jan 31 10:21:54 crc kubenswrapper[4830]: I0131 10:21:54.658196 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cvxjk" Jan 31 10:21:54 crc kubenswrapper[4830]: I0131 10:21:54.684760 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cvxjk"] Jan 31 10:21:54 crc kubenswrapper[4830]: I0131 10:21:54.741070 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ac11619-5b30-45f2-b640-fe8cfca8a8b2-utilities\") pod \"certified-operators-cvxjk\" (UID: \"1ac11619-5b30-45f2-b640-fe8cfca8a8b2\") " pod="openshift-marketplace/certified-operators-cvxjk" Jan 31 10:21:54 crc kubenswrapper[4830]: I0131 10:21:54.741560 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ac11619-5b30-45f2-b640-fe8cfca8a8b2-catalog-content\") pod \"certified-operators-cvxjk\" (UID: \"1ac11619-5b30-45f2-b640-fe8cfca8a8b2\") " pod="openshift-marketplace/certified-operators-cvxjk" Jan 31 10:21:54 crc kubenswrapper[4830]: I0131 10:21:54.741614 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxjnl\" (UniqueName: \"kubernetes.io/projected/1ac11619-5b30-45f2-b640-fe8cfca8a8b2-kube-api-access-dxjnl\") pod \"certified-operators-cvxjk\" (UID: \"1ac11619-5b30-45f2-b640-fe8cfca8a8b2\") " pod="openshift-marketplace/certified-operators-cvxjk" Jan 31 10:21:54 crc kubenswrapper[4830]: I0131 10:21:54.843713 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ac11619-5b30-45f2-b640-fe8cfca8a8b2-utilities\") pod \"certified-operators-cvxjk\" (UID: \"1ac11619-5b30-45f2-b640-fe8cfca8a8b2\") " pod="openshift-marketplace/certified-operators-cvxjk" Jan 31 10:21:54 crc kubenswrapper[4830]: I0131 10:21:54.843881 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ac11619-5b30-45f2-b640-fe8cfca8a8b2-catalog-content\") pod \"certified-operators-cvxjk\" (UID: \"1ac11619-5b30-45f2-b640-fe8cfca8a8b2\") " pod="openshift-marketplace/certified-operators-cvxjk" Jan 31 10:21:54 crc kubenswrapper[4830]: I0131 10:21:54.843904 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxjnl\" (UniqueName: \"kubernetes.io/projected/1ac11619-5b30-45f2-b640-fe8cfca8a8b2-kube-api-access-dxjnl\") pod \"certified-operators-cvxjk\" (UID: \"1ac11619-5b30-45f2-b640-fe8cfca8a8b2\") " pod="openshift-marketplace/certified-operators-cvxjk" Jan 31 10:21:54 crc kubenswrapper[4830]: I0131 10:21:54.844863 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ac11619-5b30-45f2-b640-fe8cfca8a8b2-utilities\") pod \"certified-operators-cvxjk\" (UID: \"1ac11619-5b30-45f2-b640-fe8cfca8a8b2\") " pod="openshift-marketplace/certified-operators-cvxjk" Jan 31 10:21:54 crc kubenswrapper[4830]: I0131 10:21:54.844974 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ac11619-5b30-45f2-b640-fe8cfca8a8b2-catalog-content\") pod \"certified-operators-cvxjk\" (UID: \"1ac11619-5b30-45f2-b640-fe8cfca8a8b2\") " pod="openshift-marketplace/certified-operators-cvxjk" Jan 31 10:21:54 crc kubenswrapper[4830]: I0131 10:21:54.866029 4830 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dxjnl\" (UniqueName: \"kubernetes.io/projected/1ac11619-5b30-45f2-b640-fe8cfca8a8b2-kube-api-access-dxjnl\") pod \"certified-operators-cvxjk\" (UID: \"1ac11619-5b30-45f2-b640-fe8cfca8a8b2\") " pod="openshift-marketplace/certified-operators-cvxjk" Jan 31 10:21:54 crc kubenswrapper[4830]: I0131 10:21:54.992063 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cvxjk" Jan 31 10:21:55 crc kubenswrapper[4830]: I0131 10:21:55.522475 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cvxjk"] Jan 31 10:21:56 crc kubenswrapper[4830]: I0131 10:21:56.242050 4830 generic.go:334] "Generic (PLEG): container finished" podID="1ac11619-5b30-45f2-b640-fe8cfca8a8b2" containerID="9ad635b8e894b0f2030a24b4b6ecf660e5366bd4bf862505ba852a33ef2cc2b9" exitCode=0 Jan 31 10:21:56 crc kubenswrapper[4830]: I0131 10:21:56.242176 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvxjk" event={"ID":"1ac11619-5b30-45f2-b640-fe8cfca8a8b2","Type":"ContainerDied","Data":"9ad635b8e894b0f2030a24b4b6ecf660e5366bd4bf862505ba852a33ef2cc2b9"} Jan 31 10:21:56 crc kubenswrapper[4830]: I0131 10:21:56.242489 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvxjk" event={"ID":"1ac11619-5b30-45f2-b640-fe8cfca8a8b2","Type":"ContainerStarted","Data":"3234fb6c69729af669d773c70b4b066e97105d0057580129f991d4329a34b153"} Jan 31 10:21:56 crc kubenswrapper[4830]: I0131 10:21:56.247311 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 10:21:57 crc kubenswrapper[4830]: I0131 10:21:57.264650 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvxjk" event={"ID":"1ac11619-5b30-45f2-b640-fe8cfca8a8b2","Type":"ContainerStarted","Data":"75add4e9f6bcbc16ef34b2a12c27dbdb78ed79d535cc89f6b4623a2bdfaadbe0"} Jan 31 10:21:59 crc kubenswrapper[4830]: I0131 10:21:59.285414 4830 generic.go:334] "Generic (PLEG): container finished" podID="1ac11619-5b30-45f2-b640-fe8cfca8a8b2" containerID="75add4e9f6bcbc16ef34b2a12c27dbdb78ed79d535cc89f6b4623a2bdfaadbe0" exitCode=0 Jan 31 10:21:59 crc kubenswrapper[4830]: I0131 10:21:59.285495 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvxjk" event={"ID":"1ac11619-5b30-45f2-b640-fe8cfca8a8b2","Type":"ContainerDied","Data":"75add4e9f6bcbc16ef34b2a12c27dbdb78ed79d535cc89f6b4623a2bdfaadbe0"} Jan 31 10:22:00 crc kubenswrapper[4830]: I0131 10:22:00.299982 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvxjk" event={"ID":"1ac11619-5b30-45f2-b640-fe8cfca8a8b2","Type":"ContainerStarted","Data":"61673599c8f83c9f33ed35ba502a15a75c0fdd9dcf861541acddf3c3b997d36d"} Jan 31 10:22:00 crc kubenswrapper[4830]: I0131 10:22:00.341078 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cvxjk" podStartSLOduration=2.89509309 podStartE2EDuration="6.341023542s" podCreationTimestamp="2026-01-31 10:21:54 +0000 UTC" firstStartedPulling="2026-01-31 10:21:56.246897075 +0000 UTC m=+4860.740259527" lastFinishedPulling="2026-01-31 10:21:59.692827497 +0000 UTC m=+4864.186189979" observedRunningTime="2026-01-31 10:22:00.320184258 +0000 UTC m=+4864.813546700" watchObservedRunningTime="2026-01-31 
10:22:00.341023542 +0000 UTC m=+4864.834385994" Jan 31 10:22:04 crc kubenswrapper[4830]: I0131 10:22:04.993427 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cvxjk" Jan 31 10:22:04 crc kubenswrapper[4830]: I0131 10:22:04.994044 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cvxjk" Jan 31 10:22:05 crc kubenswrapper[4830]: I0131 10:22:05.045181 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cvxjk" Jan 31 10:22:05 crc kubenswrapper[4830]: I0131 10:22:05.412877 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cvxjk" Jan 31 10:22:05 crc kubenswrapper[4830]: I0131 10:22:05.469136 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cvxjk"] Jan 31 10:22:07 crc kubenswrapper[4830]: I0131 10:22:07.370162 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cvxjk" podUID="1ac11619-5b30-45f2-b640-fe8cfca8a8b2" containerName="registry-server" containerID="cri-o://61673599c8f83c9f33ed35ba502a15a75c0fdd9dcf861541acddf3c3b997d36d" gracePeriod=2 Jan 31 10:22:07 crc kubenswrapper[4830]: I0131 10:22:07.884904 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cvxjk" Jan 31 10:22:07 crc kubenswrapper[4830]: I0131 10:22:07.988833 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ac11619-5b30-45f2-b640-fe8cfca8a8b2-utilities\") pod \"1ac11619-5b30-45f2-b640-fe8cfca8a8b2\" (UID: \"1ac11619-5b30-45f2-b640-fe8cfca8a8b2\") " Jan 31 10:22:07 crc kubenswrapper[4830]: I0131 10:22:07.988910 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ac11619-5b30-45f2-b640-fe8cfca8a8b2-catalog-content\") pod \"1ac11619-5b30-45f2-b640-fe8cfca8a8b2\" (UID: \"1ac11619-5b30-45f2-b640-fe8cfca8a8b2\") " Jan 31 10:22:07 crc kubenswrapper[4830]: I0131 10:22:07.988971 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxjnl\" (UniqueName: \"kubernetes.io/projected/1ac11619-5b30-45f2-b640-fe8cfca8a8b2-kube-api-access-dxjnl\") pod \"1ac11619-5b30-45f2-b640-fe8cfca8a8b2\" (UID: \"1ac11619-5b30-45f2-b640-fe8cfca8a8b2\") " Jan 31 10:22:07 crc kubenswrapper[4830]: I0131 10:22:07.989629 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ac11619-5b30-45f2-b640-fe8cfca8a8b2-utilities" (OuterVolumeSpecName: "utilities") pod "1ac11619-5b30-45f2-b640-fe8cfca8a8b2" (UID: "1ac11619-5b30-45f2-b640-fe8cfca8a8b2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:22:07 crc kubenswrapper[4830]: I0131 10:22:07.989880 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ac11619-5b30-45f2-b640-fe8cfca8a8b2-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 10:22:07 crc kubenswrapper[4830]: I0131 10:22:07.996521 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ac11619-5b30-45f2-b640-fe8cfca8a8b2-kube-api-access-dxjnl" (OuterVolumeSpecName: "kube-api-access-dxjnl") pod "1ac11619-5b30-45f2-b640-fe8cfca8a8b2" (UID: "1ac11619-5b30-45f2-b640-fe8cfca8a8b2"). InnerVolumeSpecName "kube-api-access-dxjnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 10:22:08 crc kubenswrapper[4830]: I0131 10:22:08.037292 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ac11619-5b30-45f2-b640-fe8cfca8a8b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ac11619-5b30-45f2-b640-fe8cfca8a8b2" (UID: "1ac11619-5b30-45f2-b640-fe8cfca8a8b2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:22:08 crc kubenswrapper[4830]: I0131 10:22:08.092132 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ac11619-5b30-45f2-b640-fe8cfca8a8b2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 10:22:08 crc kubenswrapper[4830]: I0131 10:22:08.092177 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxjnl\" (UniqueName: \"kubernetes.io/projected/1ac11619-5b30-45f2-b640-fe8cfca8a8b2-kube-api-access-dxjnl\") on node \"crc\" DevicePath \"\"" Jan 31 10:22:08 crc kubenswrapper[4830]: I0131 10:22:08.383428 4830 generic.go:334] "Generic (PLEG): container finished" podID="1ac11619-5b30-45f2-b640-fe8cfca8a8b2" containerID="61673599c8f83c9f33ed35ba502a15a75c0fdd9dcf861541acddf3c3b997d36d" exitCode=0 Jan 31 10:22:08 crc kubenswrapper[4830]: I0131 10:22:08.383504 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cvxjk" Jan 31 10:22:08 crc kubenswrapper[4830]: I0131 10:22:08.383502 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvxjk" event={"ID":"1ac11619-5b30-45f2-b640-fe8cfca8a8b2","Type":"ContainerDied","Data":"61673599c8f83c9f33ed35ba502a15a75c0fdd9dcf861541acddf3c3b997d36d"} Jan 31 10:22:08 crc kubenswrapper[4830]: I0131 10:22:08.383895 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvxjk" event={"ID":"1ac11619-5b30-45f2-b640-fe8cfca8a8b2","Type":"ContainerDied","Data":"3234fb6c69729af669d773c70b4b066e97105d0057580129f991d4329a34b153"} Jan 31 10:22:08 crc kubenswrapper[4830]: I0131 10:22:08.383915 4830 scope.go:117] "RemoveContainer" containerID="61673599c8f83c9f33ed35ba502a15a75c0fdd9dcf861541acddf3c3b997d36d" Jan 31 10:22:08 crc kubenswrapper[4830]: I0131 10:22:08.413667 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cvxjk"] Jan 31 10:22:08 crc kubenswrapper[4830]: I0131 10:22:08.415902 4830 scope.go:117] "RemoveContainer" containerID="75add4e9f6bcbc16ef34b2a12c27dbdb78ed79d535cc89f6b4623a2bdfaadbe0" Jan 31 10:22:08 crc kubenswrapper[4830]: I0131 10:22:08.426099 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cvxjk"] Jan 31 10:22:08 crc kubenswrapper[4830]: I0131 10:22:08.436287 4830 scope.go:117] "RemoveContainer" containerID="9ad635b8e894b0f2030a24b4b6ecf660e5366bd4bf862505ba852a33ef2cc2b9" Jan 31 10:22:08 crc kubenswrapper[4830]: I0131 10:22:08.494913 4830 scope.go:117] "RemoveContainer" containerID="61673599c8f83c9f33ed35ba502a15a75c0fdd9dcf861541acddf3c3b997d36d" Jan 31 10:22:08 crc kubenswrapper[4830]: E0131 10:22:08.495541 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61673599c8f83c9f33ed35ba502a15a75c0fdd9dcf861541acddf3c3b997d36d\": container with ID starting with 61673599c8f83c9f33ed35ba502a15a75c0fdd9dcf861541acddf3c3b997d36d not found: ID does not exist" containerID="61673599c8f83c9f33ed35ba502a15a75c0fdd9dcf861541acddf3c3b997d36d" Jan 31 10:22:08 crc kubenswrapper[4830]: I0131 10:22:08.495586 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61673599c8f83c9f33ed35ba502a15a75c0fdd9dcf861541acddf3c3b997d36d"} err="failed to get container status \"61673599c8f83c9f33ed35ba502a15a75c0fdd9dcf861541acddf3c3b997d36d\": rpc error: code = NotFound desc = could not find container \"61673599c8f83c9f33ed35ba502a15a75c0fdd9dcf861541acddf3c3b997d36d\": container with ID starting with 61673599c8f83c9f33ed35ba502a15a75c0fdd9dcf861541acddf3c3b997d36d not found: ID does not exist" Jan 31 10:22:08 crc kubenswrapper[4830]: I0131 10:22:08.495614 4830 scope.go:117] "RemoveContainer" containerID="75add4e9f6bcbc16ef34b2a12c27dbdb78ed79d535cc89f6b4623a2bdfaadbe0" Jan 31 10:22:08 crc kubenswrapper[4830]: E0131 10:22:08.496047 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75add4e9f6bcbc16ef34b2a12c27dbdb78ed79d535cc89f6b4623a2bdfaadbe0\": container with ID starting with 75add4e9f6bcbc16ef34b2a12c27dbdb78ed79d535cc89f6b4623a2bdfaadbe0 not found: ID does not exist" containerID="75add4e9f6bcbc16ef34b2a12c27dbdb78ed79d535cc89f6b4623a2bdfaadbe0" Jan 31 10:22:08 crc kubenswrapper[4830]: I0131 10:22:08.496078 4830 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75add4e9f6bcbc16ef34b2a12c27dbdb78ed79d535cc89f6b4623a2bdfaadbe0"} err="failed to get container status \"75add4e9f6bcbc16ef34b2a12c27dbdb78ed79d535cc89f6b4623a2bdfaadbe0\": rpc error: code = NotFound desc = could not find container \"75add4e9f6bcbc16ef34b2a12c27dbdb78ed79d535cc89f6b4623a2bdfaadbe0\": container with ID starting with 75add4e9f6bcbc16ef34b2a12c27dbdb78ed79d535cc89f6b4623a2bdfaadbe0 not found: ID does not exist" Jan 31 10:22:08 crc kubenswrapper[4830]: I0131 10:22:08.496102 4830 scope.go:117] "RemoveContainer" containerID="9ad635b8e894b0f2030a24b4b6ecf660e5366bd4bf862505ba852a33ef2cc2b9" Jan 31 10:22:08 crc kubenswrapper[4830]: E0131 10:22:08.496368 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ad635b8e894b0f2030a24b4b6ecf660e5366bd4bf862505ba852a33ef2cc2b9\": container with ID starting with 9ad635b8e894b0f2030a24b4b6ecf660e5366bd4bf862505ba852a33ef2cc2b9 not found: ID does not exist" containerID="9ad635b8e894b0f2030a24b4b6ecf660e5366bd4bf862505ba852a33ef2cc2b9" Jan 31 10:22:08 crc kubenswrapper[4830]: I0131 10:22:08.496405 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ad635b8e894b0f2030a24b4b6ecf660e5366bd4bf862505ba852a33ef2cc2b9"} err="failed to get container status \"9ad635b8e894b0f2030a24b4b6ecf660e5366bd4bf862505ba852a33ef2cc2b9\": rpc error: code = NotFound desc = could not find container \"9ad635b8e894b0f2030a24b4b6ecf660e5366bd4bf862505ba852a33ef2cc2b9\": container with ID starting with 9ad635b8e894b0f2030a24b4b6ecf660e5366bd4bf862505ba852a33ef2cc2b9 not found: ID does not exist" Jan 31 10:22:10 crc kubenswrapper[4830]: I0131 10:22:10.264598 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ac11619-5b30-45f2-b640-fe8cfca8a8b2" path="/var/lib/kubelet/pods/1ac11619-5b30-45f2-b640-fe8cfca8a8b2/volumes" Jan 31 10:22:14 crc kubenswrapper[4830]: I0131 10:22:14.353259 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 10:22:14 crc kubenswrapper[4830]: I0131 10:22:14.353879 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 10:22:44 crc kubenswrapper[4830]: I0131 10:22:44.353606 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 10:22:44 crc kubenswrapper[4830]: I0131 10:22:44.354115 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 10:23:14 crc kubenswrapper[4830]: I0131 
10:23:14.353146 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 10:23:14 crc kubenswrapper[4830]: I0131 10:23:14.353673 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 10:23:14 crc kubenswrapper[4830]: I0131 10:23:14.353722 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 10:23:14 crc kubenswrapper[4830]: I0131 10:23:14.354556 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14"} pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 10:23:14 crc kubenswrapper[4830]: I0131 10:23:14.354597 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" containerID="cri-o://336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" gracePeriod=600 Jan 31 10:23:14 crc kubenswrapper[4830]: E0131 10:23:14.477045 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:23:15 crc kubenswrapper[4830]: I0131 10:23:15.134484 4830 generic.go:334] "Generic (PLEG): container finished" podID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" exitCode=0 Jan 31 10:23:15 crc kubenswrapper[4830]: I0131 10:23:15.134832 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerDied","Data":"336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14"} Jan 31 10:23:15 crc kubenswrapper[4830]: I0131 10:23:15.134865 4830 scope.go:117] "RemoveContainer" containerID="09d1151cbdf8a81aba2b88f7e7cacb0624012aa898ec498a1d6bc161d6e6a9d4" Jan 31 10:23:15 crc kubenswrapper[4830]: I0131 10:23:15.135777 4830 scope.go:117] "RemoveContainer" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" Jan 31 10:23:15 crc kubenswrapper[4830]: E0131 10:23:15.136139 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:23:27 crc kubenswrapper[4830]: I0131 10:23:27.251818 4830 scope.go:117] "RemoveContainer" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" Jan 31 10:23:27 crc kubenswrapper[4830]: E0131 10:23:27.252830 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:23:36 crc kubenswrapper[4830]: I0131 10:23:36.701858 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4h74l"] Jan 31 10:23:36 crc kubenswrapper[4830]: E0131 10:23:36.703444 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ac11619-5b30-45f2-b640-fe8cfca8a8b2" containerName="extract-content" Jan 31 10:23:36 crc kubenswrapper[4830]: I0131 10:23:36.703467 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ac11619-5b30-45f2-b640-fe8cfca8a8b2" containerName="extract-content" Jan 31 10:23:36 crc kubenswrapper[4830]: E0131 10:23:36.703509 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ac11619-5b30-45f2-b640-fe8cfca8a8b2" containerName="extract-utilities" Jan 31 10:23:36 crc kubenswrapper[4830]: I0131 10:23:36.703522 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ac11619-5b30-45f2-b640-fe8cfca8a8b2" containerName="extract-utilities" Jan 31 10:23:36 crc kubenswrapper[4830]: E0131 10:23:36.703553 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ac11619-5b30-45f2-b640-fe8cfca8a8b2" containerName="registry-server" Jan 31 10:23:36 crc kubenswrapper[4830]: I0131 10:23:36.703565 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ac11619-5b30-45f2-b640-fe8cfca8a8b2" containerName="registry-server" Jan 31 10:23:36 crc kubenswrapper[4830]: I0131 10:23:36.704030 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ac11619-5b30-45f2-b640-fe8cfca8a8b2" containerName="registry-server" Jan 31 10:23:36 crc kubenswrapper[4830]: I0131 10:23:36.706958 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4h74l" Jan 31 10:23:36 crc kubenswrapper[4830]: I0131 10:23:36.723304 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4h74l"] Jan 31 10:23:36 crc kubenswrapper[4830]: I0131 10:23:36.771651 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/813e7284-9d36-4a7a-9939-2971921d7548-utilities\") pod \"redhat-marketplace-4h74l\" (UID: \"813e7284-9d36-4a7a-9939-2971921d7548\") " pod="openshift-marketplace/redhat-marketplace-4h74l" Jan 31 10:23:36 crc kubenswrapper[4830]: I0131 10:23:36.771774 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wr4p\" (UniqueName: \"kubernetes.io/projected/813e7284-9d36-4a7a-9939-2971921d7548-kube-api-access-7wr4p\") pod \"redhat-marketplace-4h74l\" (UID: \"813e7284-9d36-4a7a-9939-2971921d7548\") " pod="openshift-marketplace/redhat-marketplace-4h74l" Jan 31 10:23:36 crc kubenswrapper[4830]: I0131 10:23:36.771838 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/813e7284-9d36-4a7a-9939-2971921d7548-catalog-content\") pod \"redhat-marketplace-4h74l\" (UID: \"813e7284-9d36-4a7a-9939-2971921d7548\") " pod="openshift-marketplace/redhat-marketplace-4h74l" Jan 31 10:23:36 crc kubenswrapper[4830]: I0131 10:23:36.874486 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/813e7284-9d36-4a7a-9939-2971921d7548-utilities\") pod \"redhat-marketplace-4h74l\" (UID: \"813e7284-9d36-4a7a-9939-2971921d7548\") " pod="openshift-marketplace/redhat-marketplace-4h74l" Jan 31 10:23:36 crc kubenswrapper[4830]: I0131 10:23:36.874602 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wr4p\" (UniqueName: \"kubernetes.io/projected/813e7284-9d36-4a7a-9939-2971921d7548-kube-api-access-7wr4p\") pod \"redhat-marketplace-4h74l\" (UID: \"813e7284-9d36-4a7a-9939-2971921d7548\") " pod="openshift-marketplace/redhat-marketplace-4h74l" Jan 31 10:23:36 crc kubenswrapper[4830]: I0131 10:23:36.874663 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/813e7284-9d36-4a7a-9939-2971921d7548-catalog-content\") pod \"redhat-marketplace-4h74l\" (UID: \"813e7284-9d36-4a7a-9939-2971921d7548\") " pod="openshift-marketplace/redhat-marketplace-4h74l" Jan 31 10:23:36 crc kubenswrapper[4830]: I0131 10:23:36.875047 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/813e7284-9d36-4a7a-9939-2971921d7548-utilities\") pod \"redhat-marketplace-4h74l\" (UID: \"813e7284-9d36-4a7a-9939-2971921d7548\") " pod="openshift-marketplace/redhat-marketplace-4h74l" Jan 31 10:23:36 crc kubenswrapper[4830]: I0131 10:23:36.875146 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/813e7284-9d36-4a7a-9939-2971921d7548-catalog-content\") pod \"redhat-marketplace-4h74l\" (UID: \"813e7284-9d36-4a7a-9939-2971921d7548\") " pod="openshift-marketplace/redhat-marketplace-4h74l" Jan 31 10:23:36 crc kubenswrapper[4830]: I0131 10:23:36.911122 4830 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7wr4p\" (UniqueName: \"kubernetes.io/projected/813e7284-9d36-4a7a-9939-2971921d7548-kube-api-access-7wr4p\") pod \"redhat-marketplace-4h74l\" (UID: \"813e7284-9d36-4a7a-9939-2971921d7548\") " pod="openshift-marketplace/redhat-marketplace-4h74l" Jan 31 10:23:37 crc kubenswrapper[4830]: I0131 10:23:37.036410 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4h74l" Jan 31 10:23:37 crc kubenswrapper[4830]: I0131 10:23:37.556717 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4h74l"] Jan 31 10:23:38 crc kubenswrapper[4830]: I0131 10:23:38.434599 4830 generic.go:334] "Generic (PLEG): container finished" podID="813e7284-9d36-4a7a-9939-2971921d7548" containerID="d9d97c8899c4c218b87a9e5dc66e42e4525c7e9226323487967af2e2aba7450f" exitCode=0 Jan 31 10:23:38 crc kubenswrapper[4830]: I0131 10:23:38.434654 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4h74l" event={"ID":"813e7284-9d36-4a7a-9939-2971921d7548","Type":"ContainerDied","Data":"d9d97c8899c4c218b87a9e5dc66e42e4525c7e9226323487967af2e2aba7450f"} Jan 31 10:23:38 crc kubenswrapper[4830]: I0131 10:23:38.434688 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4h74l" event={"ID":"813e7284-9d36-4a7a-9939-2971921d7548","Type":"ContainerStarted","Data":"ada3eff13966af61e90b55dfef47aa4a56f899994e03a911e3f2681c2f33c7b4"} Jan 31 10:23:40 crc kubenswrapper[4830]: I0131 10:23:40.467931 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4h74l" event={"ID":"813e7284-9d36-4a7a-9939-2971921d7548","Type":"ContainerStarted","Data":"1361537b0702b633b4f9d3b11a35e3e7cff37c9f87ccd90e536d6e81329c86f8"} Jan 31 10:23:41 crc kubenswrapper[4830]: I0131 10:23:41.252350 4830 scope.go:117] "RemoveContainer" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" Jan 31 10:23:41 crc kubenswrapper[4830]: E0131 10:23:41.253237 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:23:41 crc kubenswrapper[4830]: I0131 10:23:41.482482 4830 generic.go:334] "Generic (PLEG): container finished" podID="813e7284-9d36-4a7a-9939-2971921d7548" containerID="1361537b0702b633b4f9d3b11a35e3e7cff37c9f87ccd90e536d6e81329c86f8" exitCode=0 Jan 31 10:23:41 crc kubenswrapper[4830]: I0131 10:23:41.482523 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4h74l" event={"ID":"813e7284-9d36-4a7a-9939-2971921d7548","Type":"ContainerDied","Data":"1361537b0702b633b4f9d3b11a35e3e7cff37c9f87ccd90e536d6e81329c86f8"} Jan 31 10:23:42 crc kubenswrapper[4830]: I0131 10:23:42.498166 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4h74l" event={"ID":"813e7284-9d36-4a7a-9939-2971921d7548","Type":"ContainerStarted","Data":"5e08484c9df0e3f5ff39f7c5ed0e089e9196af716a9127689b0e77ac8aef9de7"} Jan 31 10:23:42 crc kubenswrapper[4830]: I0131 10:23:42.530013 4830 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-marketplace-4h74l" podStartSLOduration=3.065557099 podStartE2EDuration="6.529989938s" podCreationTimestamp="2026-01-31 10:23:36 +0000 UTC" firstStartedPulling="2026-01-31 10:23:38.436989464 +0000 UTC m=+4962.930351906" lastFinishedPulling="2026-01-31 10:23:41.901422303 +0000 UTC m=+4966.394784745" observedRunningTime="2026-01-31 10:23:42.518080649 +0000 UTC m=+4967.011443101" watchObservedRunningTime="2026-01-31 10:23:42.529989938 +0000 UTC m=+4967.023352390" Jan 31 10:23:47 crc kubenswrapper[4830]: I0131 10:23:47.036968 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4h74l" Jan 31 10:23:47 crc kubenswrapper[4830]: I0131 10:23:47.037533 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4h74l" Jan 31 10:23:47 crc kubenswrapper[4830]: I0131 10:23:47.095051 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4h74l" Jan 31 10:23:47 crc kubenswrapper[4830]: I0131 10:23:47.627027 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4h74l" Jan 31 10:23:47 crc kubenswrapper[4830]: I0131 10:23:47.675963 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4h74l"] Jan 31 10:23:49 crc kubenswrapper[4830]: I0131 10:23:49.598017 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4h74l" podUID="813e7284-9d36-4a7a-9939-2971921d7548" containerName="registry-server" containerID="cri-o://5e08484c9df0e3f5ff39f7c5ed0e089e9196af716a9127689b0e77ac8aef9de7" gracePeriod=2 Jan 31 10:23:50 crc kubenswrapper[4830]: I0131 10:23:50.102714 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4h74l" Jan 31 10:23:50 crc kubenswrapper[4830]: I0131 10:23:50.121147 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/813e7284-9d36-4a7a-9939-2971921d7548-utilities\") pod \"813e7284-9d36-4a7a-9939-2971921d7548\" (UID: \"813e7284-9d36-4a7a-9939-2971921d7548\") " Jan 31 10:23:50 crc kubenswrapper[4830]: I0131 10:23:50.121192 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/813e7284-9d36-4a7a-9939-2971921d7548-catalog-content\") pod \"813e7284-9d36-4a7a-9939-2971921d7548\" (UID: \"813e7284-9d36-4a7a-9939-2971921d7548\") " Jan 31 10:23:50 crc kubenswrapper[4830]: I0131 10:23:50.121534 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wr4p\" (UniqueName: \"kubernetes.io/projected/813e7284-9d36-4a7a-9939-2971921d7548-kube-api-access-7wr4p\") pod \"813e7284-9d36-4a7a-9939-2971921d7548\" (UID: \"813e7284-9d36-4a7a-9939-2971921d7548\") " Jan 31 10:23:50 crc kubenswrapper[4830]: I0131 10:23:50.122488 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/813e7284-9d36-4a7a-9939-2971921d7548-utilities" (OuterVolumeSpecName: "utilities") pod "813e7284-9d36-4a7a-9939-2971921d7548" (UID: "813e7284-9d36-4a7a-9939-2971921d7548"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:23:50 crc kubenswrapper[4830]: I0131 10:23:50.129026 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/813e7284-9d36-4a7a-9939-2971921d7548-kube-api-access-7wr4p" (OuterVolumeSpecName: "kube-api-access-7wr4p") pod "813e7284-9d36-4a7a-9939-2971921d7548" (UID: "813e7284-9d36-4a7a-9939-2971921d7548"). InnerVolumeSpecName "kube-api-access-7wr4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 10:23:50 crc kubenswrapper[4830]: I0131 10:23:50.144018 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/813e7284-9d36-4a7a-9939-2971921d7548-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "813e7284-9d36-4a7a-9939-2971921d7548" (UID: "813e7284-9d36-4a7a-9939-2971921d7548"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:23:50 crc kubenswrapper[4830]: I0131 10:23:50.224555 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wr4p\" (UniqueName: \"kubernetes.io/projected/813e7284-9d36-4a7a-9939-2971921d7548-kube-api-access-7wr4p\") on node \"crc\" DevicePath \"\"" Jan 31 10:23:50 crc kubenswrapper[4830]: I0131 10:23:50.224594 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/813e7284-9d36-4a7a-9939-2971921d7548-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 10:23:50 crc kubenswrapper[4830]: I0131 10:23:50.224606 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/813e7284-9d36-4a7a-9939-2971921d7548-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 10:23:50 crc kubenswrapper[4830]: I0131 10:23:50.610585 4830 generic.go:334] "Generic (PLEG): container finished" podID="813e7284-9d36-4a7a-9939-2971921d7548" containerID="5e08484c9df0e3f5ff39f7c5ed0e089e9196af716a9127689b0e77ac8aef9de7" exitCode=0 Jan 31 10:23:50 crc kubenswrapper[4830]: I0131 10:23:50.610630 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4h74l" event={"ID":"813e7284-9d36-4a7a-9939-2971921d7548","Type":"ContainerDied","Data":"5e08484c9df0e3f5ff39f7c5ed0e089e9196af716a9127689b0e77ac8aef9de7"} Jan 31 10:23:50 crc kubenswrapper[4830]: I0131 10:23:50.610641 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4h74l" Jan 31 10:23:50 crc kubenswrapper[4830]: I0131 10:23:50.610659 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4h74l" event={"ID":"813e7284-9d36-4a7a-9939-2971921d7548","Type":"ContainerDied","Data":"ada3eff13966af61e90b55dfef47aa4a56f899994e03a911e3f2681c2f33c7b4"} Jan 31 10:23:50 crc kubenswrapper[4830]: I0131 10:23:50.610675 4830 scope.go:117] "RemoveContainer" containerID="5e08484c9df0e3f5ff39f7c5ed0e089e9196af716a9127689b0e77ac8aef9de7" Jan 31 10:23:50 crc kubenswrapper[4830]: I0131 10:23:50.644122 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4h74l"] Jan 31 10:23:50 crc kubenswrapper[4830]: I0131 10:23:50.650363 4830 scope.go:117] "RemoveContainer" containerID="1361537b0702b633b4f9d3b11a35e3e7cff37c9f87ccd90e536d6e81329c86f8" Jan 31 10:23:50 crc kubenswrapper[4830]: I0131 10:23:50.654659 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4h74l"] Jan 31 10:23:50 crc kubenswrapper[4830]: I0131 10:23:50.986630 4830 scope.go:117] "RemoveContainer" containerID="d9d97c8899c4c218b87a9e5dc66e42e4525c7e9226323487967af2e2aba7450f" Jan 31 10:23:51 crc kubenswrapper[4830]: I0131 10:23:51.025487 4830 scope.go:117] "RemoveContainer" containerID="5e08484c9df0e3f5ff39f7c5ed0e089e9196af716a9127689b0e77ac8aef9de7" Jan 31 10:23:51 crc kubenswrapper[4830]: E0131 10:23:51.026142 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e08484c9df0e3f5ff39f7c5ed0e089e9196af716a9127689b0e77ac8aef9de7\": container with ID starting with 5e08484c9df0e3f5ff39f7c5ed0e089e9196af716a9127689b0e77ac8aef9de7 not found: ID does not exist" containerID="5e08484c9df0e3f5ff39f7c5ed0e089e9196af716a9127689b0e77ac8aef9de7" Jan 31 10:23:51 crc kubenswrapper[4830]: I0131 10:23:51.026206 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e08484c9df0e3f5ff39f7c5ed0e089e9196af716a9127689b0e77ac8aef9de7"} err="failed to get container status \"5e08484c9df0e3f5ff39f7c5ed0e089e9196af716a9127689b0e77ac8aef9de7\": rpc error: code = NotFound desc = could not find container \"5e08484c9df0e3f5ff39f7c5ed0e089e9196af716a9127689b0e77ac8aef9de7\": container with ID starting with 5e08484c9df0e3f5ff39f7c5ed0e089e9196af716a9127689b0e77ac8aef9de7 not found: ID does not exist" Jan 31 10:23:51 crc kubenswrapper[4830]: I0131 10:23:51.026245 4830 scope.go:117] "RemoveContainer" containerID="1361537b0702b633b4f9d3b11a35e3e7cff37c9f87ccd90e536d6e81329c86f8" Jan 31 10:23:51 crc kubenswrapper[4830]: E0131 10:23:51.026595 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1361537b0702b633b4f9d3b11a35e3e7cff37c9f87ccd90e536d6e81329c86f8\": container with ID starting with 1361537b0702b633b4f9d3b11a35e3e7cff37c9f87ccd90e536d6e81329c86f8 not found: ID does not exist" containerID="1361537b0702b633b4f9d3b11a35e3e7cff37c9f87ccd90e536d6e81329c86f8" Jan 31 10:23:51 crc kubenswrapper[4830]: I0131 10:23:51.026626 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1361537b0702b633b4f9d3b11a35e3e7cff37c9f87ccd90e536d6e81329c86f8"} err="failed to get container status \"1361537b0702b633b4f9d3b11a35e3e7cff37c9f87ccd90e536d6e81329c86f8\": rpc error: code = NotFound desc = could not find 
container \"1361537b0702b633b4f9d3b11a35e3e7cff37c9f87ccd90e536d6e81329c86f8\": container with ID starting with 1361537b0702b633b4f9d3b11a35e3e7cff37c9f87ccd90e536d6e81329c86f8 not found: ID does not exist" Jan 31 10:23:51 crc kubenswrapper[4830]: I0131 10:23:51.026649 4830 scope.go:117] "RemoveContainer" containerID="d9d97c8899c4c218b87a9e5dc66e42e4525c7e9226323487967af2e2aba7450f" Jan 31 10:23:51 crc kubenswrapper[4830]: E0131 10:23:51.026956 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9d97c8899c4c218b87a9e5dc66e42e4525c7e9226323487967af2e2aba7450f\": container with ID starting with d9d97c8899c4c218b87a9e5dc66e42e4525c7e9226323487967af2e2aba7450f not found: ID does not exist" containerID="d9d97c8899c4c218b87a9e5dc66e42e4525c7e9226323487967af2e2aba7450f" Jan 31 10:23:51 crc kubenswrapper[4830]: I0131 10:23:51.026999 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9d97c8899c4c218b87a9e5dc66e42e4525c7e9226323487967af2e2aba7450f"} err="failed to get container status \"d9d97c8899c4c218b87a9e5dc66e42e4525c7e9226323487967af2e2aba7450f\": rpc error: code = NotFound desc = could not find container \"d9d97c8899c4c218b87a9e5dc66e42e4525c7e9226323487967af2e2aba7450f\": container with ID starting with d9d97c8899c4c218b87a9e5dc66e42e4525c7e9226323487967af2e2aba7450f not found: ID does not exist" Jan 31 10:23:52 crc kubenswrapper[4830]: I0131 10:23:52.264405 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="813e7284-9d36-4a7a-9939-2971921d7548" path="/var/lib/kubelet/pods/813e7284-9d36-4a7a-9939-2971921d7548/volumes" Jan 31 10:23:56 crc kubenswrapper[4830]: I0131 10:23:56.299456 4830 scope.go:117] "RemoveContainer" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" Jan 31 10:23:56 crc kubenswrapper[4830]: E0131 10:23:56.304130 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:24:09 crc kubenswrapper[4830]: I0131 10:24:09.251604 4830 scope.go:117] "RemoveContainer" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" Jan 31 10:24:09 crc kubenswrapper[4830]: E0131 10:24:09.252446 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:24:20 crc kubenswrapper[4830]: I0131 10:24:20.252126 4830 scope.go:117] "RemoveContainer" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" Jan 31 10:24:20 crc kubenswrapper[4830]: E0131 10:24:20.253054 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:24:31 crc kubenswrapper[4830]: I0131 10:24:31.251678 4830 scope.go:117] "RemoveContainer" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" Jan 31 10:24:31 crc kubenswrapper[4830]: E0131 10:24:31.252923 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:24:43 crc kubenswrapper[4830]: I0131 10:24:43.252611 4830 scope.go:117] "RemoveContainer" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" Jan 31 10:24:43 crc kubenswrapper[4830]: E0131 10:24:43.253519 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:24:55 crc kubenswrapper[4830]: I0131 10:24:55.252593 4830 scope.go:117] "RemoveContainer" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" Jan 31 10:24:55 crc kubenswrapper[4830]: E0131 10:24:55.253363 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:25:09 crc kubenswrapper[4830]: I0131 10:25:09.251528 4830 scope.go:117] "RemoveContainer" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" Jan 31 10:25:09 crc kubenswrapper[4830]: E0131 10:25:09.252451 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:25:23 crc kubenswrapper[4830]: I0131 10:25:23.251938 4830 scope.go:117] "RemoveContainer" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" Jan 31 10:25:23 crc kubenswrapper[4830]: E0131 10:25:23.252763 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" 
podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:25:35 crc kubenswrapper[4830]: I0131 10:25:35.252518 4830 scope.go:117] "RemoveContainer" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" Jan 31 10:25:35 crc kubenswrapper[4830]: E0131 10:25:35.253402 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:25:47 crc kubenswrapper[4830]: I0131 10:25:47.252421 4830 scope.go:117] "RemoveContainer" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" Jan 31 10:25:47 crc kubenswrapper[4830]: E0131 10:25:47.253786 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:26:02 crc kubenswrapper[4830]: I0131 10:26:02.251781 4830 scope.go:117] "RemoveContainer" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" Jan 31 10:26:02 crc kubenswrapper[4830]: E0131 10:26:02.252481 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:26:14 crc kubenswrapper[4830]: I0131 10:26:14.251824 4830 scope.go:117] "RemoveContainer" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" Jan 31 10:26:14 crc kubenswrapper[4830]: E0131 10:26:14.252904 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:26:17 crc kubenswrapper[4830]: I0131 10:26:17.972649 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 31 10:26:17 crc kubenswrapper[4830]: E0131 10:26:17.974547 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="813e7284-9d36-4a7a-9939-2971921d7548" containerName="extract-utilities" Jan 31 10:26:17 crc kubenswrapper[4830]: I0131 10:26:17.974564 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="813e7284-9d36-4a7a-9939-2971921d7548" containerName="extract-utilities" Jan 31 10:26:17 crc kubenswrapper[4830]: E0131 10:26:17.974584 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="813e7284-9d36-4a7a-9939-2971921d7548" containerName="registry-server" Jan 31 10:26:17 crc kubenswrapper[4830]: I0131 
10:26:17.974590 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="813e7284-9d36-4a7a-9939-2971921d7548" containerName="registry-server" Jan 31 10:26:17 crc kubenswrapper[4830]: E0131 10:26:17.974599 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="813e7284-9d36-4a7a-9939-2971921d7548" containerName="extract-content" Jan 31 10:26:17 crc kubenswrapper[4830]: I0131 10:26:17.974605 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="813e7284-9d36-4a7a-9939-2971921d7548" containerName="extract-content" Jan 31 10:26:17 crc kubenswrapper[4830]: I0131 10:26:17.974853 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="813e7284-9d36-4a7a-9939-2971921d7548" containerName="registry-server" Jan 31 10:26:17 crc kubenswrapper[4830]: I0131 10:26:17.975689 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 31 10:26:17 crc kubenswrapper[4830]: I0131 10:26:17.983045 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 31 10:26:17 crc kubenswrapper[4830]: I0131 10:26:17.983115 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 31 10:26:17 crc kubenswrapper[4830]: I0131 10:26:17.983160 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 31 10:26:17 crc kubenswrapper[4830]: I0131 10:26:17.984101 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-cbc2n" Jan 31 10:26:17 crc kubenswrapper[4830]: I0131 10:26:17.987942 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.030328 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fa42e50-1a05-499f-9396-a1e5dc1161f6-config-data\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.030473 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.030571 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1fa42e50-1a05-499f-9396-a1e5dc1161f6-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.030650 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nchrt\" (UniqueName: \"kubernetes.io/projected/1fa42e50-1a05-499f-9396-a1e5dc1161f6-kube-api-access-nchrt\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.030693 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/1fa42e50-1a05-499f-9396-a1e5dc1161f6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.030741 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1fa42e50-1a05-499f-9396-a1e5dc1161f6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.030795 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1fa42e50-1a05-499f-9396-a1e5dc1161f6-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.030856 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1fa42e50-1a05-499f-9396-a1e5dc1161f6-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.030887 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1fa42e50-1a05-499f-9396-a1e5dc1161f6-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.133773 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.133899 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1fa42e50-1a05-499f-9396-a1e5dc1161f6-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.134562 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nchrt\" (UniqueName: \"kubernetes.io/projected/1fa42e50-1a05-499f-9396-a1e5dc1161f6-kube-api-access-nchrt\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.134608 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1fa42e50-1a05-499f-9396-a1e5dc1161f6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.134637 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/1fa42e50-1a05-499f-9396-a1e5dc1161f6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.134689 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1fa42e50-1a05-499f-9396-a1e5dc1161f6-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.134749 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1fa42e50-1a05-499f-9396-a1e5dc1161f6-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.134771 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1fa42e50-1a05-499f-9396-a1e5dc1161f6-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.134844 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fa42e50-1a05-499f-9396-a1e5dc1161f6-config-data\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.135357 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1fa42e50-1a05-499f-9396-a1e5dc1161f6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.135584 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1fa42e50-1a05-499f-9396-a1e5dc1161f6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.136027 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1fa42e50-1a05-499f-9396-a1e5dc1161f6-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.137018 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fa42e50-1a05-499f-9396-a1e5dc1161f6-config-data\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.137143 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: 
\"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.142701 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1fa42e50-1a05-499f-9396-a1e5dc1161f6-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.144874 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1fa42e50-1a05-499f-9396-a1e5dc1161f6-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.146447 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1fa42e50-1a05-499f-9396-a1e5dc1161f6-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.159462 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nchrt\" (UniqueName: \"kubernetes.io/projected/1fa42e50-1a05-499f-9396-a1e5dc1161f6-kube-api-access-nchrt\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.177075 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.301945 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 31 10:26:18 crc kubenswrapper[4830]: W0131 10:26:18.834762 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fa42e50_1a05_499f_9396_a1e5dc1161f6.slice/crio-8356097a8dc598d2a344aae96266d60c2703096af258027cf9e5ef2e2ef12d93 WatchSource:0}: Error finding container 8356097a8dc598d2a344aae96266d60c2703096af258027cf9e5ef2e2ef12d93: Status 404 returned error can't find the container with id 8356097a8dc598d2a344aae96266d60c2703096af258027cf9e5ef2e2ef12d93 Jan 31 10:26:18 crc kubenswrapper[4830]: I0131 10:26:18.839377 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 31 10:26:19 crc kubenswrapper[4830]: I0131 10:26:19.335511 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"1fa42e50-1a05-499f-9396-a1e5dc1161f6","Type":"ContainerStarted","Data":"8356097a8dc598d2a344aae96266d60c2703096af258027cf9e5ef2e2ef12d93"} Jan 31 10:26:26 crc kubenswrapper[4830]: I0131 10:26:26.261597 4830 scope.go:117] "RemoveContainer" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" Jan 31 10:26:26 crc kubenswrapper[4830]: E0131 10:26:26.264473 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:26:38 crc kubenswrapper[4830]: I0131 10:26:38.252548 4830 scope.go:117] "RemoveContainer" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" Jan 31 10:26:38 crc kubenswrapper[4830]: E0131 10:26:38.253507 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:26:52 crc kubenswrapper[4830]: I0131 10:26:52.356348 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lz4t4"] Jan 31 10:26:52 crc kubenswrapper[4830]: I0131 10:26:52.361123 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lz4t4" Jan 31 10:26:52 crc kubenswrapper[4830]: I0131 10:26:52.483218 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46a71ddb-bffa-4bf2-8f45-4eba31e50fa7-catalog-content\") pod \"redhat-operators-lz4t4\" (UID: \"46a71ddb-bffa-4bf2-8f45-4eba31e50fa7\") " pod="openshift-marketplace/redhat-operators-lz4t4" Jan 31 10:26:52 crc kubenswrapper[4830]: I0131 10:26:52.483308 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46a71ddb-bffa-4bf2-8f45-4eba31e50fa7-utilities\") pod \"redhat-operators-lz4t4\" (UID: \"46a71ddb-bffa-4bf2-8f45-4eba31e50fa7\") " pod="openshift-marketplace/redhat-operators-lz4t4" Jan 31 10:26:52 crc kubenswrapper[4830]: I0131 10:26:52.483461 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pjz2\" (UniqueName: \"kubernetes.io/projected/46a71ddb-bffa-4bf2-8f45-4eba31e50fa7-kube-api-access-8pjz2\") pod \"redhat-operators-lz4t4\" (UID: \"46a71ddb-bffa-4bf2-8f45-4eba31e50fa7\") " pod="openshift-marketplace/redhat-operators-lz4t4" Jan 31 10:26:52 crc kubenswrapper[4830]: I0131 10:26:52.585561 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pjz2\" (UniqueName: \"kubernetes.io/projected/46a71ddb-bffa-4bf2-8f45-4eba31e50fa7-kube-api-access-8pjz2\") pod \"redhat-operators-lz4t4\" (UID: \"46a71ddb-bffa-4bf2-8f45-4eba31e50fa7\") " pod="openshift-marketplace/redhat-operators-lz4t4" Jan 31 10:26:52 crc kubenswrapper[4830]: I0131 10:26:52.585741 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46a71ddb-bffa-4bf2-8f45-4eba31e50fa7-catalog-content\") pod \"redhat-operators-lz4t4\" (UID: \"46a71ddb-bffa-4bf2-8f45-4eba31e50fa7\") " pod="openshift-marketplace/redhat-operators-lz4t4" Jan 31 10:26:52 crc kubenswrapper[4830]: I0131 10:26:52.585782 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46a71ddb-bffa-4bf2-8f45-4eba31e50fa7-utilities\") pod \"redhat-operators-lz4t4\" (UID: \"46a71ddb-bffa-4bf2-8f45-4eba31e50fa7\") " pod="openshift-marketplace/redhat-operators-lz4t4" Jan 31 10:26:52 crc kubenswrapper[4830]: I0131 10:26:52.586355 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46a71ddb-bffa-4bf2-8f45-4eba31e50fa7-catalog-content\") pod \"redhat-operators-lz4t4\" (UID: \"46a71ddb-bffa-4bf2-8f45-4eba31e50fa7\") " pod="openshift-marketplace/redhat-operators-lz4t4" Jan 31 10:26:52 crc kubenswrapper[4830]: I0131 10:26:52.586373 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46a71ddb-bffa-4bf2-8f45-4eba31e50fa7-utilities\") pod \"redhat-operators-lz4t4\" (UID: \"46a71ddb-bffa-4bf2-8f45-4eba31e50fa7\") " pod="openshift-marketplace/redhat-operators-lz4t4" Jan 31 10:26:52 crc kubenswrapper[4830]: I0131 10:26:52.729115 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lz4t4"] Jan 31 10:26:52 crc kubenswrapper[4830]: I0131 10:26:52.861358 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8pjz2\" (UniqueName: \"kubernetes.io/projected/46a71ddb-bffa-4bf2-8f45-4eba31e50fa7-kube-api-access-8pjz2\") pod \"redhat-operators-lz4t4\" (UID: \"46a71ddb-bffa-4bf2-8f45-4eba31e50fa7\") " pod="openshift-marketplace/redhat-operators-lz4t4" Jan 31 10:26:53 crc kubenswrapper[4830]: I0131 10:26:53.046746 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lz4t4" Jan 31 10:26:53 crc kubenswrapper[4830]: I0131 10:26:53.252099 4830 scope.go:117] "RemoveContainer" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" Jan 31 10:26:53 crc kubenswrapper[4830]: E0131 10:26:53.252396 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:27:05 crc kubenswrapper[4830]: E0131 10:27:05.416918 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 31 10:27:05 crc kubenswrapper[4830]: E0131 10:27:05.431865 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nchrt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessa
gePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(1fa42e50-1a05-499f-9396-a1e5dc1161f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 10:27:05 crc kubenswrapper[4830]: E0131 10:27:05.433114 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="1fa42e50-1a05-499f-9396-a1e5dc1161f6" Jan 31 10:27:05 crc kubenswrapper[4830]: E0131 10:27:05.875983 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="1fa42e50-1a05-499f-9396-a1e5dc1161f6" Jan 31 10:27:06 crc kubenswrapper[4830]: I0131 10:27:06.346511 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lz4t4"] Jan 31 10:27:06 crc kubenswrapper[4830]: I0131 10:27:06.886094 4830 generic.go:334] "Generic (PLEG): container finished" podID="46a71ddb-bffa-4bf2-8f45-4eba31e50fa7" containerID="8bceb12af9febada80ab835c715de3ed794492bbcd051062ade47e00f19d63c0" exitCode=0 Jan 31 10:27:06 crc kubenswrapper[4830]: I0131 10:27:06.886152 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lz4t4" event={"ID":"46a71ddb-bffa-4bf2-8f45-4eba31e50fa7","Type":"ContainerDied","Data":"8bceb12af9febada80ab835c715de3ed794492bbcd051062ade47e00f19d63c0"} Jan 31 10:27:06 crc kubenswrapper[4830]: I0131 10:27:06.886848 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lz4t4" event={"ID":"46a71ddb-bffa-4bf2-8f45-4eba31e50fa7","Type":"ContainerStarted","Data":"0c4c0e98bd3c2a69ea2547886f6bc020864fa74efa33c817b65cf4b77305ab97"} Jan 31 10:27:06 crc kubenswrapper[4830]: I0131 10:27:06.888490 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 10:27:07 crc kubenswrapper[4830]: I0131 10:27:07.902714 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lz4t4" event={"ID":"46a71ddb-bffa-4bf2-8f45-4eba31e50fa7","Type":"ContainerStarted","Data":"e4ffc309f61011d1bbb1dbe0fe22f7c82717ee384f3ac8052210580ef79f1a9c"} Jan 31 10:27:08 crc kubenswrapper[4830]: I0131 10:27:08.253065 4830 
scope.go:117] "RemoveContainer" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" Jan 31 10:27:08 crc kubenswrapper[4830]: E0131 10:27:08.253490 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:27:17 crc kubenswrapper[4830]: I0131 10:27:17.681621 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 31 10:27:18 crc kubenswrapper[4830]: I0131 10:27:18.015836 4830 generic.go:334] "Generic (PLEG): container finished" podID="46a71ddb-bffa-4bf2-8f45-4eba31e50fa7" containerID="e4ffc309f61011d1bbb1dbe0fe22f7c82717ee384f3ac8052210580ef79f1a9c" exitCode=0 Jan 31 10:27:18 crc kubenswrapper[4830]: I0131 10:27:18.015899 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lz4t4" event={"ID":"46a71ddb-bffa-4bf2-8f45-4eba31e50fa7","Type":"ContainerDied","Data":"e4ffc309f61011d1bbb1dbe0fe22f7c82717ee384f3ac8052210580ef79f1a9c"} Jan 31 10:27:19 crc kubenswrapper[4830]: I0131 10:27:19.033649 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lz4t4" event={"ID":"46a71ddb-bffa-4bf2-8f45-4eba31e50fa7","Type":"ContainerStarted","Data":"63542ec685aed86818edace9246d8f01d9b7192be28f0a3ff86ffa8f4460d4d5"} Jan 31 10:27:19 crc kubenswrapper[4830]: I0131 10:27:19.061123 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lz4t4" podStartSLOduration=15.511435091 podStartE2EDuration="27.061097694s" podCreationTimestamp="2026-01-31 10:26:52 +0000 UTC" firstStartedPulling="2026-01-31 10:27:06.888293642 +0000 UTC m=+5171.381656084" lastFinishedPulling="2026-01-31 10:27:18.437956225 +0000 UTC m=+5182.931318687" observedRunningTime="2026-01-31 10:27:19.055153485 +0000 UTC m=+5183.548515927" watchObservedRunningTime="2026-01-31 10:27:19.061097694 +0000 UTC m=+5183.554460136" Jan 31 10:27:19 crc kubenswrapper[4830]: I0131 10:27:19.251613 4830 scope.go:117] "RemoveContainer" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" Jan 31 10:27:19 crc kubenswrapper[4830]: E0131 10:27:19.251966 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:27:20 crc kubenswrapper[4830]: I0131 10:27:20.045984 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"1fa42e50-1a05-499f-9396-a1e5dc1161f6","Type":"ContainerStarted","Data":"3cdbae831121a91472164c34cdf5b3766cb2b6765f577f7243d7c239f5a135a1"} Jan 31 10:27:20 crc kubenswrapper[4830]: I0131 10:27:20.076118 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.237039642 podStartE2EDuration="1m4.076100389s" 
podCreationTimestamp="2026-01-31 10:26:16 +0000 UTC" firstStartedPulling="2026-01-31 10:26:18.839646852 +0000 UTC m=+5123.333009294" lastFinishedPulling="2026-01-31 10:27:17.678707599 +0000 UTC m=+5182.172070041" observedRunningTime="2026-01-31 10:27:20.069106621 +0000 UTC m=+5184.562469103" watchObservedRunningTime="2026-01-31 10:27:20.076100389 +0000 UTC m=+5184.569462831" Jan 31 10:27:23 crc kubenswrapper[4830]: I0131 10:27:23.048449 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lz4t4" Jan 31 10:27:23 crc kubenswrapper[4830]: I0131 10:27:23.048838 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lz4t4" Jan 31 10:27:24 crc kubenswrapper[4830]: I0131 10:27:24.099741 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lz4t4" podUID="46a71ddb-bffa-4bf2-8f45-4eba31e50fa7" containerName="registry-server" probeResult="failure" output=< Jan 31 10:27:24 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:27:24 crc kubenswrapper[4830]: > Jan 31 10:27:32 crc kubenswrapper[4830]: I0131 10:27:32.252939 4830 scope.go:117] "RemoveContainer" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" Jan 31 10:27:32 crc kubenswrapper[4830]: E0131 10:27:32.253989 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:27:34 crc kubenswrapper[4830]: I0131 10:27:34.094624 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lz4t4" podUID="46a71ddb-bffa-4bf2-8f45-4eba31e50fa7" containerName="registry-server" probeResult="failure" output=< Jan 31 10:27:34 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:27:34 crc kubenswrapper[4830]: > Jan 31 10:27:43 crc kubenswrapper[4830]: I0131 10:27:43.099262 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lz4t4" Jan 31 10:27:43 crc kubenswrapper[4830]: I0131 10:27:43.176586 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lz4t4" Jan 31 10:27:44 crc kubenswrapper[4830]: I0131 10:27:44.759666 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lz4t4"] Jan 31 10:27:44 crc kubenswrapper[4830]: I0131 10:27:44.761046 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lz4t4" podUID="46a71ddb-bffa-4bf2-8f45-4eba31e50fa7" containerName="registry-server" containerID="cri-o://63542ec685aed86818edace9246d8f01d9b7192be28f0a3ff86ffa8f4460d4d5" gracePeriod=2 Jan 31 10:27:45 crc kubenswrapper[4830]: I0131 10:27:45.252615 4830 scope.go:117] "RemoveContainer" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" Jan 31 10:27:45 crc kubenswrapper[4830]: E0131 10:27:45.253142 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:27:45 crc kubenswrapper[4830]: I0131 10:27:45.347221 4830 generic.go:334] "Generic (PLEG): container finished" podID="46a71ddb-bffa-4bf2-8f45-4eba31e50fa7" containerID="63542ec685aed86818edace9246d8f01d9b7192be28f0a3ff86ffa8f4460d4d5" exitCode=0 Jan 31 10:27:45 crc kubenswrapper[4830]: I0131 10:27:45.347270 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lz4t4" event={"ID":"46a71ddb-bffa-4bf2-8f45-4eba31e50fa7","Type":"ContainerDied","Data":"63542ec685aed86818edace9246d8f01d9b7192be28f0a3ff86ffa8f4460d4d5"} Jan 31 10:27:45 crc kubenswrapper[4830]: I0131 10:27:45.347301 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lz4t4" event={"ID":"46a71ddb-bffa-4bf2-8f45-4eba31e50fa7","Type":"ContainerDied","Data":"0c4c0e98bd3c2a69ea2547886f6bc020864fa74efa33c817b65cf4b77305ab97"} Jan 31 10:27:45 crc kubenswrapper[4830]: I0131 10:27:45.347315 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c4c0e98bd3c2a69ea2547886f6bc020864fa74efa33c817b65cf4b77305ab97" Jan 31 10:27:45 crc kubenswrapper[4830]: I0131 10:27:45.399635 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lz4t4" Jan 31 10:27:45 crc kubenswrapper[4830]: I0131 10:27:45.503659 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pjz2\" (UniqueName: \"kubernetes.io/projected/46a71ddb-bffa-4bf2-8f45-4eba31e50fa7-kube-api-access-8pjz2\") pod \"46a71ddb-bffa-4bf2-8f45-4eba31e50fa7\" (UID: \"46a71ddb-bffa-4bf2-8f45-4eba31e50fa7\") " Jan 31 10:27:45 crc kubenswrapper[4830]: I0131 10:27:45.503867 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46a71ddb-bffa-4bf2-8f45-4eba31e50fa7-catalog-content\") pod \"46a71ddb-bffa-4bf2-8f45-4eba31e50fa7\" (UID: \"46a71ddb-bffa-4bf2-8f45-4eba31e50fa7\") " Jan 31 10:27:45 crc kubenswrapper[4830]: I0131 10:27:45.503892 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46a71ddb-bffa-4bf2-8f45-4eba31e50fa7-utilities\") pod \"46a71ddb-bffa-4bf2-8f45-4eba31e50fa7\" (UID: \"46a71ddb-bffa-4bf2-8f45-4eba31e50fa7\") " Jan 31 10:27:45 crc kubenswrapper[4830]: I0131 10:27:45.504641 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46a71ddb-bffa-4bf2-8f45-4eba31e50fa7-utilities" (OuterVolumeSpecName: "utilities") pod "46a71ddb-bffa-4bf2-8f45-4eba31e50fa7" (UID: "46a71ddb-bffa-4bf2-8f45-4eba31e50fa7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:27:45 crc kubenswrapper[4830]: I0131 10:27:45.505222 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46a71ddb-bffa-4bf2-8f45-4eba31e50fa7-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 10:27:45 crc kubenswrapper[4830]: I0131 10:27:45.511364 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46a71ddb-bffa-4bf2-8f45-4eba31e50fa7-kube-api-access-8pjz2" (OuterVolumeSpecName: "kube-api-access-8pjz2") pod "46a71ddb-bffa-4bf2-8f45-4eba31e50fa7" (UID: "46a71ddb-bffa-4bf2-8f45-4eba31e50fa7"). InnerVolumeSpecName "kube-api-access-8pjz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 10:27:45 crc kubenswrapper[4830]: I0131 10:27:45.607188 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pjz2\" (UniqueName: \"kubernetes.io/projected/46a71ddb-bffa-4bf2-8f45-4eba31e50fa7-kube-api-access-8pjz2\") on node \"crc\" DevicePath \"\"" Jan 31 10:27:45 crc kubenswrapper[4830]: I0131 10:27:45.641096 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46a71ddb-bffa-4bf2-8f45-4eba31e50fa7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46a71ddb-bffa-4bf2-8f45-4eba31e50fa7" (UID: "46a71ddb-bffa-4bf2-8f45-4eba31e50fa7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:27:45 crc kubenswrapper[4830]: I0131 10:27:45.710072 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46a71ddb-bffa-4bf2-8f45-4eba31e50fa7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 10:27:46 crc kubenswrapper[4830]: I0131 10:27:46.359545 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lz4t4" Jan 31 10:27:46 crc kubenswrapper[4830]: I0131 10:27:46.406176 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lz4t4"] Jan 31 10:27:46 crc kubenswrapper[4830]: I0131 10:27:46.422704 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lz4t4"] Jan 31 10:27:48 crc kubenswrapper[4830]: I0131 10:27:48.271914 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46a71ddb-bffa-4bf2-8f45-4eba31e50fa7" path="/var/lib/kubelet/pods/46a71ddb-bffa-4bf2-8f45-4eba31e50fa7/volumes" Jan 31 10:27:58 crc kubenswrapper[4830]: I0131 10:27:58.251679 4830 scope.go:117] "RemoveContainer" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" Jan 31 10:27:58 crc kubenswrapper[4830]: E0131 10:27:58.252633 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:28:13 crc kubenswrapper[4830]: I0131 10:28:13.252258 4830 scope.go:117] "RemoveContainer" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" Jan 31 10:28:13 crc kubenswrapper[4830]: E0131 10:28:13.253137 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:28:28 crc kubenswrapper[4830]: I0131 10:28:28.251531 4830 scope.go:117] "RemoveContainer" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" Jan 31 10:28:28 crc kubenswrapper[4830]: I0131 10:28:28.843332 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerStarted","Data":"7966079b95ee8b0c6a0eeec05fdab8c0893a01751591c9e2a9fe770dbf810c5f"} Jan 31 10:29:25 crc kubenswrapper[4830]: I0131 10:29:25.277473 4830 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-dbkt8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:25 crc kubenswrapper[4830]: I0131 10:29:25.277497 4830 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-dbkt8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:25 crc kubenswrapper[4830]: I0131 10:29:25.297501 4830 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8" podUID="48688d73-57bb-4105-8116-4853be571b01" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:25 crc kubenswrapper[4830]: I0131 10:29:25.297509 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8" podUID="48688d73-57bb-4105-8116-4853be571b01" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:25 crc kubenswrapper[4830]: I0131 10:29:25.311657 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-nc25d" podUID="b0b831b3-e535-4264-b46c-c93f7edd51d2" containerName="registry-server" probeResult="failure" output=< Jan 31 10:29:25 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:29:25 crc kubenswrapper[4830]: > Jan 31 10:29:25 crc kubenswrapper[4830]: I0131 10:29:25.311657 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-nc25d" podUID="b0b831b3-e535-4264-b46c-c93f7edd51d2" containerName="registry-server" probeResult="failure" output=< Jan 31 10:29:25 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:29:25 crc kubenswrapper[4830]: > Jan 31 10:29:25 crc kubenswrapper[4830]: I0131 10:29:25.768210 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-4v2n6" podUID="d0107b00-a78b-432b-afc6-a9ccc1b3bf5b" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:25 crc kubenswrapper[4830]: I0131 10:29:25.768238 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92" podUID="3951c2f7-8a23-4d78-9a26-1b89399bdb4e" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:25 crc kubenswrapper[4830]: I0131 10:29:25.768251 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-4v2n6" podUID="d0107b00-a78b-432b-afc6-a9ccc1b3bf5b" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:25 crc kubenswrapper[4830]: I0131 10:29:25.768463 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-4v2n6" podUID="d0107b00-a78b-432b-afc6-a9ccc1b3bf5b" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:26 crc kubenswrapper[4830]: I0131 10:29:26.556223 4830 trace.go:236] Trace[1895214436]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-index-gateway-0" (31-Jan-2026 10:29:24.736) (total time: 1780ms): Jan 31 10:29:26 crc kubenswrapper[4830]: Trace[1895214436]: [1.78018528s] [1.78018528s] END Jan 31 
10:29:26 crc kubenswrapper[4830]: I0131 10:29:26.669465 4830 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-vm6jc container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.47:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:26 crc kubenswrapper[4830]: I0131 10:29:26.669574 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc" podUID="e5b91203-480c-424e-877a-5f2f437d1ada" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.47:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:26 crc kubenswrapper[4830]: I0131 10:29:26.805830 4830 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-f89hf container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.48:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:26 crc kubenswrapper[4830]: I0131 10:29:26.805893 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-f89hf" podUID="8aa52b7a-444c-4f07-9c3a-c2223e966e34" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.48:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:27 crc kubenswrapper[4830]: I0131 10:29:27.391376 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-x7g8x" podUID="1d713893-e8db-40ba-872c-e9d1650a56d0" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:27 crc kubenswrapper[4830]: I0131 10:29:27.391408 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-x7g8x" podUID="1d713893-e8db-40ba-872c-e9d1650a56d0" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:27 crc kubenswrapper[4830]: I0131 10:29:27.847125 4830 patch_prober.go:28] interesting pod/logging-loki-gateway-74c87577db-hwvhd container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:27 crc kubenswrapper[4830]: I0131 10:29:27.847206 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" podUID="fd432483-7467-4c9d-a13e-8ee908a8ed2b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.50:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:27 crc kubenswrapper[4830]: I0131 10:29:27.861118 4830 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-hkd74 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:27 crc kubenswrapper[4830]: I0131 10:29:27.861232 4830 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" podUID="00ab4f1c-2cc4-46b0-9e22-df58e5327352" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:27 crc kubenswrapper[4830]: I0131 10:29:27.910418 4830 patch_prober.go:28] interesting pod/logging-loki-gateway-74c87577db-fjtpt container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:27 crc kubenswrapper[4830]: I0131 10:29:27.910499 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" podUID="867e058e-8774-4ff8-af99-a8f35ac530ce" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.51:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:28 crc kubenswrapper[4830]: I0131 10:29:28.109790 4830 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-58x6p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:28 crc kubenswrapper[4830]: I0131 10:29:28.109881 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" podUID="b6c3d452-2742-4f91-9857-5f5e0b50f348" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:28 crc kubenswrapper[4830]: I0131 10:29:28.109818 4830 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-58x6p container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:28 crc kubenswrapper[4830]: I0131 10:29:28.110053 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" podUID="b6c3d452-2742-4f91-9857-5f5e0b50f348" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:28 crc kubenswrapper[4830]: I0131 10:29:28.115368 4830 patch_prober.go:28] interesting pod/metrics-server-6cdc866fc6-9thf6 container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.84:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:28 crc kubenswrapper[4830]: I0131 10:29:28.116673 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6" podUID="45903f73-e8ae-4e54-b650-f0090e9436b3" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.84:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:28 crc 
kubenswrapper[4830]: I0131 10:29:28.155895 4830 patch_prober.go:28] interesting pod/metrics-server-6cdc866fc6-9thf6 container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.84:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:28 crc kubenswrapper[4830]: I0131 10:29:28.155949 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6" podUID="45903f73-e8ae-4e54-b650-f0090e9436b3" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.84:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:28 crc kubenswrapper[4830]: I0131 10:29:28.447214 4830 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:28 crc kubenswrapper[4830]: I0131 10:29:28.447291 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:28 crc kubenswrapper[4830]: I0131 10:29:28.492485 4830 patch_prober.go:28] interesting pod/monitoring-plugin-546c959798-jmj57 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:28 crc kubenswrapper[4830]: I0131 10:29:28.492839 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-546c959798-jmj57" podUID="fadaea73-e4ec-47a5-b6df-c93b1ce5645f" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:28 crc kubenswrapper[4830]: I0131 10:29:28.762282 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="2ca5d2f1-673e-4173-848a-8d32d33b8bcc" containerName="galera" probeResult="failure" output="command timed out" Jan 31 10:29:28 crc kubenswrapper[4830]: I0131 10:29:28.764207 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="2ca5d2f1-673e-4173-848a-8d32d33b8bcc" containerName="galera" probeResult="failure" output="command timed out" Jan 31 10:29:28 crc kubenswrapper[4830]: I0131 10:29:28.764295 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="f2ea7efa-c50b-4208-a9df-2c3fc454762b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 31 10:29:28 crc kubenswrapper[4830]: I0131 10:29:28.869869 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 
10:29:28 crc kubenswrapper[4830]: I0131 10:29:28.869930 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:28 crc kubenswrapper[4830]: I0131 10:29:28.869977 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:28 crc kubenswrapper[4830]: I0131 10:29:28.870024 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:29 crc kubenswrapper[4830]: I0131 10:29:29.447807 4830 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n4rml container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:29 crc kubenswrapper[4830]: I0131 10:29:29.447844 4830 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n4rml container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:29 crc kubenswrapper[4830]: I0131 10:29:29.447873 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" podUID="cf057c5a-deef-4c01-bd58-f761ec86e2f4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:29 crc kubenswrapper[4830]: I0131 10:29:29.447897 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" podUID="cf057c5a-deef-4c01-bd58-f761ec86e2f4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:29 crc kubenswrapper[4830]: I0131 10:29:29.509167 4830 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lp7ks container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:29 crc kubenswrapper[4830]: I0131 10:29:29.509227 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" podUID="e80e8b17-711d-46d8-a240-4fa52e093545" containerName="packageserver" 
probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:29 crc kubenswrapper[4830]: I0131 10:29:29.509293 4830 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lp7ks container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:29 crc kubenswrapper[4830]: I0131 10:29:29.509366 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" podUID="e80e8b17-711d-46d8-a240-4fa52e093545" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:29 crc kubenswrapper[4830]: I0131 10:29:29.649031 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24" podUID="0b519925-01de-4cf0-8ff8-0f97137dd3d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:29 crc kubenswrapper[4830]: I0131 10:29:29.767405 4830 patch_prober.go:28] interesting pod/oauth-openshift-6768bc9c9c-5t4z8 container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.63:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:29 crc kubenswrapper[4830]: I0131 10:29:29.767468 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" podUID="3549201c-94c2-4a29-9e62-b498b4a97ece" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.63:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:29 crc kubenswrapper[4830]: I0131 10:29:29.767605 4830 patch_prober.go:28] interesting pod/oauth-openshift-6768bc9c9c-5t4z8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.63:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:29 crc kubenswrapper[4830]: I0131 10:29:29.767666 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" podUID="3549201c-94c2-4a29-9e62-b498b4a97ece" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.63:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:29 crc kubenswrapper[4830]: I0131 10:29:29.778590 4830 patch_prober.go:28] interesting pod/thanos-querier-57c5b4b8d5-lsvdc container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:29 crc kubenswrapper[4830]: I0131 
10:29:29.778641 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc" podUID="4158e29b-a0d9-40f2-904d-ffb63ba734f6" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:30 crc kubenswrapper[4830]: I0131 10:29:30.760183 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="f37f41b4-3b56-45f9-a368-0f772bcf3002" containerName="galera" probeResult="failure" output="command timed out" Jan 31 10:29:30 crc kubenswrapper[4830]: I0131 10:29:30.761144 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="f37f41b4-3b56-45f9-a368-0f772bcf3002" containerName="galera" probeResult="failure" output="command timed out" Jan 31 10:29:30 crc kubenswrapper[4830]: I0131 10:29:30.802884 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" podUID="250c9f1b-d78c-488e-b28e-6c2b783edd9b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:31 crc kubenswrapper[4830]: I0131 10:29:31.063909 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" podUID="ce245704-5b88-4544-ae21-bcb30ff5d0d0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:31 crc kubenswrapper[4830]: I0131 10:29:31.436603 4830 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ttnrg container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:31 crc kubenswrapper[4830]: I0131 10:29:31.436993 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" podUID="d1346d7f-25da-4035-9c88-1f96c034d795" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:31 crc kubenswrapper[4830]: I0131 10:29:31.436109 4830 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ttnrg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:31 crc kubenswrapper[4830]: I0131 10:29:31.437140 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" podUID="d1346d7f-25da-4035-9c88-1f96c034d795" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:31 crc kubenswrapper[4830]: I0131 10:29:31.770003 4830 patch_prober.go:28] interesting 
pod/loki-operator-controller-manager-688c9bff97-t8jpp container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.45:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:31 crc kubenswrapper[4830]: I0131 10:29:31.770061 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" podUID="ce3329e2-9eca-4a04-bf1d-0578e12beaa5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.45:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:31 crc kubenswrapper[4830]: I0131 10:29:31.770318 4830 patch_prober.go:28] interesting pod/loki-operator-controller-manager-688c9bff97-t8jpp container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.45:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:31 crc kubenswrapper[4830]: I0131 10:29:31.770377 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" podUID="ce3329e2-9eca-4a04-bf1d-0578e12beaa5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.45:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:32 crc kubenswrapper[4830]: I0131 10:29:32.621838 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-g5pvp" podUID="35d308f6-fcf3-4b01-b26e-5c1848d6ee7d" containerName="registry-server" probeResult="failure" output=< Jan 31 10:29:32 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:29:32 crc kubenswrapper[4830]: > Jan 31 10:29:32 crc kubenswrapper[4830]: I0131 10:29:32.846503 4830 patch_prober.go:28] interesting pod/logging-loki-gateway-74c87577db-hwvhd container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:32 crc kubenswrapper[4830]: I0131 10:29:32.846576 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" podUID="fd432483-7467-4c9d-a13e-8ee908a8ed2b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.50:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:32 crc kubenswrapper[4830]: I0131 10:29:32.849697 4830 patch_prober.go:28] interesting pod/logging-loki-gateway-74c87577db-hwvhd container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:32 crc kubenswrapper[4830]: I0131 10:29:32.849775 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" podUID="fd432483-7467-4c9d-a13e-8ee908a8ed2b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.50:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:32 crc kubenswrapper[4830]: I0131 10:29:32.910280 4830 
patch_prober.go:28] interesting pod/logging-loki-gateway-74c87577db-fjtpt container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:32 crc kubenswrapper[4830]: I0131 10:29:32.910359 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" podUID="867e058e-8774-4ff8-af99-a8f35ac530ce" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.51:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:32 crc kubenswrapper[4830]: I0131 10:29:32.910477 4830 patch_prober.go:28] interesting pod/logging-loki-gateway-74c87577db-fjtpt container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:32 crc kubenswrapper[4830]: I0131 10:29:32.910861 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" podUID="867e058e-8774-4ff8-af99-a8f35ac530ce" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.51:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:33 crc kubenswrapper[4830]: I0131 10:29:33.031324 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-56876" podUID="2626e876-9148-4165-a735-a5a1733c014d" containerName="registry-server" probeResult="failure" output=< Jan 31 10:29:33 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:29:33 crc kubenswrapper[4830]: > Jan 31 10:29:33 crc kubenswrapper[4830]: I0131 10:29:33.031376 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-56876" podUID="2626e876-9148-4165-a735-a5a1733c014d" containerName="registry-server" probeResult="failure" output=< Jan 31 10:29:33 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:29:33 crc kubenswrapper[4830]: > Jan 31 10:29:33 crc kubenswrapper[4830]: I0131 10:29:33.032180 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-g5pvp" podUID="35d308f6-fcf3-4b01-b26e-5c1848d6ee7d" containerName="registry-server" probeResult="failure" output=< Jan 31 10:29:33 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:29:33 crc kubenswrapper[4830]: > Jan 31 10:29:33 crc kubenswrapper[4830]: I0131 10:29:33.049990 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-54dc59fd95-sv8r9" podUID="2a183ae3-dc4b-4f75-a9ca-4832bd5faf06" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.100:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:33 crc kubenswrapper[4830]: I0131 10:29:33.226898 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-22grv" podUID="eb0ab04d-4e0a-4a84-965a-2c0513d6d79a" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.5:6080/healthz\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" Jan 31 10:29:33 crc kubenswrapper[4830]: I0131 10:29:33.312814 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-jwvm4" podUID="14550547-ce63-48cc-800e-b74235d0daa1" containerName="registry-server" probeResult="failure" output=< Jan 31 10:29:33 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:29:33 crc kubenswrapper[4830]: > Jan 31 10:29:33 crc kubenswrapper[4830]: I0131 10:29:33.313295 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-fcmv2" podUID="c361702a-d6db-4925-809d-f08c6dd88a7d" containerName="registry-server" probeResult="failure" output=< Jan 31 10:29:33 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:29:33 crc kubenswrapper[4830]: > Jan 31 10:29:33 crc kubenswrapper[4830]: I0131 10:29:33.314747 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-fcmv2" podUID="c361702a-d6db-4925-809d-f08c6dd88a7d" containerName="registry-server" probeResult="failure" output=< Jan 31 10:29:33 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:29:33 crc kubenswrapper[4830]: > Jan 31 10:29:33 crc kubenswrapper[4830]: I0131 10:29:33.321028 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-jwvm4" podUID="14550547-ce63-48cc-800e-b74235d0daa1" containerName="registry-server" probeResult="failure" output=< Jan 31 10:29:33 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:29:33 crc kubenswrapper[4830]: > Jan 31 10:29:33 crc kubenswrapper[4830]: I0131 10:29:33.398028 4830 patch_prober.go:28] interesting pod/route-controller-manager-bcf89fb66-fxq4w container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:33 crc kubenswrapper[4830]: I0131 10:29:33.398104 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" podUID="9e3fd47c-6860-47d0-98ce-3654da25fdce" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:33 crc kubenswrapper[4830]: I0131 10:29:33.398019 4830 patch_prober.go:28] interesting pod/route-controller-manager-bcf89fb66-fxq4w container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:33 crc kubenswrapper[4830]: I0131 10:29:33.398264 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" podUID="9e3fd47c-6860-47d0-98ce-3654da25fdce" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:33 crc kubenswrapper[4830]: I0131 10:29:33.502976 4830 patch_prober.go:28] interesting pod/console-bbcf59d54-qmgsn container/console namespace/openshift-console: 
Readiness probe status=failure output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:33 crc kubenswrapper[4830]: I0131 10:29:33.503049 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-bbcf59d54-qmgsn" podUID="afe486bd-6c62-42d6-ac04-9c2bb21204d7" containerName="console" probeResult="failure" output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:33 crc kubenswrapper[4830]: I0131 10:29:33.515092 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="7b3b4d1e-8963-469f-abe7-204392275c48" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:33 crc kubenswrapper[4830]: I0131 10:29:33.515209 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="7b3b4d1e-8963-469f-abe7-204392275c48" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:33 crc kubenswrapper[4830]: I0131 10:29:33.661918 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw" podUID="3f5623d3-168a-4bca-9154-ecb4c81b5b3b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:33 crc kubenswrapper[4830]: I0131 10:29:33.725080 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hcpk8" podUID="17f5c61d-5997-482b-961a-0339cfe6c15c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:33 crc kubenswrapper[4830]: I0131 10:29:33.813973 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-8wnqw" podUID="dafe4db4-4a74-4cb2-8e7f-496cfa1a1c5e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:33 crc kubenswrapper[4830]: I0131 10:29:33.854890 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cpwlp" podUID="47718a89-dc4c-4f5d-bb58-aec265aa68bf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:33 crc kubenswrapper[4830]: I0131 10:29:33.978940 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-d9xtg" podUID="4d28fd37-b97c-447a-9165-d90d11fd4698" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.151932 4830 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kgrns" podUID="758269b2-16c6-4f5a-8f9f-875659eede84" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.151987 4830 patch_prober.go:28] interesting pod/controller-manager-7896c76d86-c5cgs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.152041 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" podUID="d85aeaa6-c7da-420f-b8d9-2d0983e2ab36" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.152057 4830 patch_prober.go:28] interesting pod/controller-manager-7896c76d86-c5cgs container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.152091 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" podUID="d85aeaa6-c7da-420f-b8d9-2d0983e2ab36" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.152155 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-slc6p" podUID="bd972fba-0692-45af-b28c-db4929fe150a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.193002 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4tqzd" podUID="1891b74f-fe71-4020-98a3-5796e2a67ea2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.352529 4830 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ttnrg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.352581 4830 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ttnrg container/openshift-config-operator namespace/openshift-config-operator: Liveness probe 
status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.352606 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" podUID="d1346d7f-25da-4035-9c88-1f96c034d795" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.352661 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" podUID="d1346d7f-25da-4035-9c88-1f96c034d795" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.618998 4830 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-l59nt container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.93:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.619078 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-l59nt" podUID="1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.93:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.619012 4830 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-l59nt container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.93:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.619162 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-l59nt" podUID="1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.93:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.763020 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rkvx7" podUID="e681f66d-3695-4b59-9ef1-6f9bbf007ed2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.763109 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k" podUID="1145e85a-d436-40c8-baef-ceb53625e06b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.766561 4830 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="f2ea7efa-c50b-4208-a9df-2c3fc454762b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.779735 4830 patch_prober.go:28] interesting pod/thanos-querier-57c5b4b8d5-lsvdc container/kube-rbac-proxy-web namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.82:9091/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.779824 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc" podUID="4158e29b-a0d9-40f2-904d-ffb63ba734f6" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.82:9091/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.780029 4830 patch_prober.go:28] interesting pod/thanos-querier-57c5b4b8d5-lsvdc container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.780180 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc" podUID="4158e29b-a0d9-40f2-904d-ffb63ba734f6" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.845967 4830 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-wtdqw container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.94:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.846025 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-wtdqw" podUID="0af185f3-0cfa-4299-8eee-0e523d87504c" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.94:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.846115 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-ld2fb" podUID="f101dda8-ba4c-42c2-a8e3-9a5e53c2ec8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.846337 4830 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-wtdqw container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.94:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.846404 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-wtdqw" podUID="0af185f3-0cfa-4299-8eee-0e523d87504c" 
containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.94:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:34 crc kubenswrapper[4830]: I0131 10:29:34.886986 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gbjts" podUID="7ff06918-8b3c-48cb-bd11-1254b9bbc276" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:35 crc kubenswrapper[4830]: I0131 10:29:35.034122 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gktql" podUID="21448bf1-0318-4469-baff-d35cf905337b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:35 crc kubenswrapper[4830]: I0131 10:29:35.116898 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2l42c" podUID="388d9bc4-698e-4dea-8029-aa32433cf734" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:35 crc kubenswrapper[4830]: I0131 10:29:35.116901 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-57fbdcd888-cp9fj" podUID="2365408f-7d7a-482c-87c0-0452fa330e4e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:35 crc kubenswrapper[4830]: I0131 10:29:35.258310 4830 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-dbkt8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:35 crc kubenswrapper[4830]: I0131 10:29:35.258358 4830 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-dbkt8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:35 crc kubenswrapper[4830]: I0131 10:29:35.258380 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8" podUID="48688d73-57bb-4105-8116-4853be571b01" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:35 crc kubenswrapper[4830]: I0131 10:29:35.258397 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8" podUID="48688d73-57bb-4105-8116-4853be571b01" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get 
\"https://10.217.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:35 crc kubenswrapper[4830]: I0131 10:29:35.433942 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-czm79" podUID="68f255f0-5951-47f2-979e-af80607453e8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:35 crc kubenswrapper[4830]: I0131 10:29:35.517946 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-55459579-xtkmd" podUID="328e9260-46e9-41a9-a42c-891fe870a5d1" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.88:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:35 crc kubenswrapper[4830]: I0131 10:29:35.559959 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-564965969-62c8t" podUID="d4a8ef63-6ba0-4bb4-93b5-dc9fc1134bb5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:35 crc kubenswrapper[4830]: I0131 10:29:35.559965 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-55459579-xtkmd" podUID="328e9260-46e9-41a9-a42c-891fe870a5d1" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.88:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:35 crc kubenswrapper[4830]: I0131 10:29:35.760259 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-9wzdf" podUID="09ac1675-c6eb-453a-83a5-94f0a04c9665" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 31 10:29:35 crc kubenswrapper[4830]: I0131 10:29:35.876997 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92" podUID="3951c2f7-8a23-4d78-9a26-1b89399bdb4e" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:35 crc kubenswrapper[4830]: I0131 10:29:35.877069 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-4v2n6" podUID="d0107b00-a78b-432b-afc6-a9ccc1b3bf5b" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:35 crc kubenswrapper[4830]: I0131 10:29:35.877101 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-4v2n6" podUID="d0107b00-a78b-432b-afc6-a9ccc1b3bf5b" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:35 crc kubenswrapper[4830]: I0131 10:29:35.877123 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-4v2n6" podUID="d0107b00-a78b-432b-afc6-a9ccc1b3bf5b" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline 
exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:35 crc kubenswrapper[4830]: I0131 10:29:35.877229 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92" podUID="3951c2f7-8a23-4d78-9a26-1b89399bdb4e" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:35 crc kubenswrapper[4830]: I0131 10:29:35.962055 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-lhbbn" podUID="2683cf74-2506-4496-b132-4c274291727b" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:35 crc kubenswrapper[4830]: I0131 10:29:35.962276 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-lhbbn" podUID="2683cf74-2506-4496-b132-4c274291727b" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:36 crc kubenswrapper[4830]: I0131 10:29:36.669689 4830 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-vm6jc container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.47:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:36 crc kubenswrapper[4830]: I0131 10:29:36.669783 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc" podUID="e5b91203-480c-424e-877a-5f2f437d1ada" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.47:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:36 crc kubenswrapper[4830]: I0131 10:29:36.762359 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88" containerName="prometheus" probeResult="failure" output="command timed out" Jan 31 10:29:36 crc kubenswrapper[4830]: I0131 10:29:36.763563 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88" containerName="prometheus" probeResult="failure" output="command timed out" Jan 31 10:29:36 crc kubenswrapper[4830]: I0131 10:29:36.806549 4830 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-f89hf container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.48:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:36 crc kubenswrapper[4830]: I0131 10:29:36.806690 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-f89hf" podUID="8aa52b7a-444c-4f07-9c3a-c2223e966e34" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.48:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:36 crc kubenswrapper[4830]: I0131 10:29:36.933697 
4830 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-8k7rn container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.49:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:36 crc kubenswrapper[4830]: I0131 10:29:36.933790 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn" podUID="6a2f00bb-9954-46d0-901b-3d9a82939850" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.49:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:37 crc kubenswrapper[4830]: I0131 10:29:37.348012 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-x7g8x" podUID="1d713893-e8db-40ba-872c-e9d1650a56d0" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:37 crc kubenswrapper[4830]: I0131 10:29:37.389999 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-x7g8x" podUID="1d713893-e8db-40ba-872c-e9d1650a56d0" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:37 crc kubenswrapper[4830]: I0131 10:29:37.821505 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="adf0d571-b5dc-4d7c-9e8d-8813354a5128" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.8:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:37 crc kubenswrapper[4830]: I0131 10:29:37.821587 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="adf0d571-b5dc-4d7c-9e8d-8813354a5128" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.8:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:37 crc kubenswrapper[4830]: I0131 10:29:37.847358 4830 patch_prober.go:28] interesting pod/logging-loki-gateway-74c87577db-hwvhd container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:37 crc kubenswrapper[4830]: I0131 10:29:37.847422 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" podUID="fd432483-7467-4c9d-a13e-8ee908a8ed2b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.50:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:37 crc kubenswrapper[4830]: I0131 10:29:37.847421 4830 patch_prober.go:28] interesting pod/logging-loki-gateway-74c87577db-hwvhd container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 
31 10:29:37 crc kubenswrapper[4830]: I0131 10:29:37.847481 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" podUID="fd432483-7467-4c9d-a13e-8ee908a8ed2b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.50:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:37 crc kubenswrapper[4830]: I0131 10:29:37.861009 4830 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-hkd74 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:37 crc kubenswrapper[4830]: I0131 10:29:37.861062 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" podUID="00ab4f1c-2cc4-46b0-9e22-df58e5327352" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:37 crc kubenswrapper[4830]: I0131 10:29:37.910152 4830 patch_prober.go:28] interesting pod/logging-loki-gateway-74c87577db-fjtpt container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:37 crc kubenswrapper[4830]: I0131 10:29:37.910228 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" podUID="867e058e-8774-4ff8-af99-a8f35ac530ce" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.51:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:37 crc kubenswrapper[4830]: I0131 10:29:37.910416 4830 patch_prober.go:28] interesting pod/logging-loki-gateway-74c87577db-fjtpt container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:37 crc kubenswrapper[4830]: I0131 10:29:37.910535 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" podUID="867e058e-8774-4ff8-af99-a8f35ac530ce" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.51:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:37 crc kubenswrapper[4830]: I0131 10:29:37.969137 4830 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:37 crc kubenswrapper[4830]: I0131 10:29:37.969207 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="70d5f51c-1a87-45fb-8822-7aa0997fceb1" containerName="loki-compactor" probeResult="failure" output="Get 
\"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.123917 4830 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.123980 4830 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-58x6p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.124033 4830 patch_prober.go:28] interesting pod/metrics-server-6cdc866fc6-9thf6 container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.84:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.124050 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6" podUID="45903f73-e8ae-4e54-b650-f0090e9436b3" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.84:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.124045 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" podUID="b6c3d452-2742-4f91-9857-5f5e0b50f348" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.123980 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="efadb8be-37d4-4e2b-9df2-3d1301ae81a8" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.123929 4830 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-58x6p container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.124197 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" podUID="b6c3d452-2742-4f91-9857-5f5e0b50f348" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.453928 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: 
Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.454887 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.453949 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.455001 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.453997 4830 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.455088 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.484341 4830 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-gkw8v container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.70:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.484361 4830 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-gkw8v container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.70:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.484410 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" podUID="4889a479-52c6-494e-a902-c7653ffef4a7" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.70:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.484414 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" podUID="4889a479-52c6-494e-a902-c7653ffef4a7" containerName="registry" probeResult="failure" output="Get 
\"https://10.217.0.70:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.492677 4830 patch_prober.go:28] interesting pod/monitoring-plugin-546c959798-jmj57 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.492743 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-546c959798-jmj57" podUID="fadaea73-e4ec-47a5-b6df-c93b1ce5645f" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.760710 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="2ca5d2f1-673e-4173-848a-8d32d33b8bcc" containerName="galera" probeResult="failure" output="command timed out" Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.761472 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="2ca5d2f1-673e-4173-848a-8d32d33b8bcc" containerName="galera" probeResult="failure" output="command timed out" Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.764427 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-nc25d" podUID="b0b831b3-e535-4264-b46c-c93f7edd51d2" containerName="registry-server" probeResult="failure" output="command timed out" Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.765336 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-nc25d" podUID="b0b831b3-e535-4264-b46c-c93f7edd51d2" containerName="registry-server" probeResult="failure" output="command timed out" Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.872076 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.872851 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.872112 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.872968 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.991904 4830 patch_prober.go:28] interesting pod/console-operator-58897d9998-pkx9p container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.991904 4830 patch_prober.go:28] interesting pod/console-operator-58897d9998-pkx9p container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.992001 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-pkx9p" podUID="691a8aff-6fcd-400a-ace9-fb3fa8778206" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:38 crc kubenswrapper[4830]: I0131 10:29:38.992062 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-pkx9p" podUID="691a8aff-6fcd-400a-ace9-fb3fa8778206" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.300353 4830 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.300433 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="07a77a4a-344b-45bb-8488-a536a94185b1" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.447837 4830 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n4rml container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.447882 4830 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n4rml container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.447913 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" 
podUID="cf057c5a-deef-4c01-bd58-f761ec86e2f4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.447945 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" podUID="cf057c5a-deef-4c01-bd58-f761ec86e2f4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.464854 4830 trace.go:236] Trace[1871714418]: "Calculate volume metrics of prometheus-metric-storage-db for pod openstack/prometheus-metric-storage-0" (31-Jan-2026 10:29:34.553) (total time: 4885ms): Jan 31 10:29:39 crc kubenswrapper[4830]: Trace[1871714418]: [4.885254782s] [4.885254782s] END Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.464873 4830 trace.go:236] Trace[1376297429]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-compactor-0" (31-Jan-2026 10:29:31.801) (total time: 7637ms): Jan 31 10:29:39 crc kubenswrapper[4830]: Trace[1376297429]: [7.637456914s] [7.637456914s] END Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.464857 4830 trace.go:236] Trace[297975928]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-2" (31-Jan-2026 10:29:32.826) (total time: 6612ms): Jan 31 10:29:39 crc kubenswrapper[4830]: Trace[297975928]: [6.61228868s] [6.61228868s] END Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.575918 4830 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-ckvgq container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.575982 4830 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lb8hp container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.576000 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" podUID="007a4117-0dfe-485e-85df-6bc68e0cee5e" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.576014 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" podUID="13f1c33b-cede-4fb1-9651-15d0dcd36173" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.575918 4830 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lp7ks container/packageserver 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.576061 4830 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-ckvgq container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.576096 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" podUID="e80e8b17-711d-46d8-a240-4fa52e093545" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.575945 4830 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lp7ks container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.576145 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" podUID="e80e8b17-711d-46d8-a240-4fa52e093545" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.575965 4830 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lb8hp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.576137 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" podUID="007a4117-0dfe-485e-85df-6bc68e0cee5e" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.576191 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" podUID="13f1c33b-cede-4fb1-9651-15d0dcd36173" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.692005 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24" podUID="0b519925-01de-4cf0-8ff8-0f97137dd3d9" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.692143 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24" podUID="0b519925-01de-4cf0-8ff8-0f97137dd3d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.767879 4830 patch_prober.go:28] interesting pod/oauth-openshift-6768bc9c9c-5t4z8 container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.63:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.767913 4830 patch_prober.go:28] interesting pod/oauth-openshift-6768bc9c9c-5t4z8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.63:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.767950 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" podUID="3549201c-94c2-4a29-9e62-b498b4a97ece" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.63:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.767980 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" podUID="3549201c-94c2-4a29-9e62-b498b4a97ece" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.63:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.779955 4830 patch_prober.go:28] interesting pod/thanos-querier-57c5b4b8d5-lsvdc container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:39 crc kubenswrapper[4830]: I0131 10:29:39.780017 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc" podUID="4158e29b-a0d9-40f2-904d-ffb63ba734f6" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:40 crc kubenswrapper[4830]: I0131 10:29:40.285087 4830 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:40 crc kubenswrapper[4830]: I0131 10:29:40.285460 4830 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:40 crc kubenswrapper[4830]: I0131 10:29:40.553358 4830 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-hw8mv container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.65:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:40 crc kubenswrapper[4830]: I0131 10:29:40.553434 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw8mv" podUID="a580c5e1-30c2-40b1-993d-c375cc99e2f2" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.65:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:40 crc kubenswrapper[4830]: I0131 10:29:40.760851 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="f37f41b4-3b56-45f9-a368-0f772bcf3002" containerName="galera" probeResult="failure" output="command timed out" Jan 31 10:29:40 crc kubenswrapper[4830]: I0131 10:29:40.760851 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-9wzdf" podUID="09ac1675-c6eb-453a-83a5-94f0a04c9665" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 31 10:29:40 crc kubenswrapper[4830]: I0131 10:29:40.761664 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="f37f41b4-3b56-45f9-a368-0f772bcf3002" containerName="galera" probeResult="failure" output="command timed out" Jan 31 10:29:40 crc kubenswrapper[4830]: I0131 10:29:40.765169 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="f2ea7efa-c50b-4208-a9df-2c3fc454762b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 31 10:29:40 crc kubenswrapper[4830]: I0131 10:29:40.768741 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 31 10:29:40 crc kubenswrapper[4830]: I0131 10:29:40.773901 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"7438470b0ded09ffc16921538313fba4d8d5737ade46eb0d1751c36880d19f27"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Jan 31 10:29:40 crc kubenswrapper[4830]: I0131 10:29:40.776743 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2ea7efa-c50b-4208-a9df-2c3fc454762b" containerName="ceilometer-central-agent" containerID="cri-o://7438470b0ded09ffc16921538313fba4d8d5737ade46eb0d1751c36880d19f27" gracePeriod=30 Jan 31 10:29:40 crc kubenswrapper[4830]: I0131 10:29:40.842938 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" podUID="250c9f1b-d78c-488e-b28e-6c2b783edd9b" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:40 crc kubenswrapper[4830]: I0131 10:29:40.842938 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" podUID="250c9f1b-d78c-488e-b28e-6c2b783edd9b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:41 crc kubenswrapper[4830]: I0131 10:29:41.105028 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" podUID="ce245704-5b88-4544-ae21-bcb30ff5d0d0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:41 crc kubenswrapper[4830]: I0131 10:29:41.105035 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" podUID="ce245704-5b88-4544-ae21-bcb30ff5d0d0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:41 crc kubenswrapper[4830]: I0131 10:29:41.469759 4830 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:41 crc kubenswrapper[4830]: I0131 10:29:41.470301 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:41 crc kubenswrapper[4830]: I0131 10:29:41.728945 4830 patch_prober.go:28] interesting pod/loki-operator-controller-manager-688c9bff97-t8jpp container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.45:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:41 crc kubenswrapper[4830]: I0131 10:29:41.729023 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" podUID="ce3329e2-9eca-4a04-bf1d-0578e12beaa5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.45:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:41 crc kubenswrapper[4830]: I0131 10:29:41.763407 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88" containerName="prometheus" probeResult="failure" output="command timed out" Jan 31 10:29:41 crc kubenswrapper[4830]: I0131 10:29:41.763411 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" podUID="e87ff23b-1ce8-4556-8998-7fc4dd84775c" containerName="nbdb" probeResult="failure" 
output="command timed out" Jan 31 10:29:41 crc kubenswrapper[4830]: I0131 10:29:41.763407 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88" containerName="prometheus" probeResult="failure" output="command timed out" Jan 31 10:29:41 crc kubenswrapper[4830]: I0131 10:29:41.763508 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" podUID="e87ff23b-1ce8-4556-8998-7fc4dd84775c" containerName="sbdb" probeResult="failure" output="command timed out" Jan 31 10:29:42 crc kubenswrapper[4830]: I0131 10:29:42.508747 4830 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-pwk76 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:42 crc kubenswrapper[4830]: I0131 10:29:42.508822 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pwk76" podUID="c61fa19c-7742-4ab1-b3ca-9607723fe94d" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:42 crc kubenswrapper[4830]: I0131 10:29:42.848001 4830 patch_prober.go:28] interesting pod/logging-loki-gateway-74c87577db-hwvhd container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:42 crc kubenswrapper[4830]: I0131 10:29:42.848078 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" podUID="fd432483-7467-4c9d-a13e-8ee908a8ed2b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.50:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:42 crc kubenswrapper[4830]: I0131 10:29:42.848112 4830 patch_prober.go:28] interesting pod/logging-loki-gateway-74c87577db-hwvhd container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8083/ready\": context deadline exceeded" start-of-body= Jan 31 10:29:42 crc kubenswrapper[4830]: I0131 10:29:42.848148 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" podUID="fd432483-7467-4c9d-a13e-8ee908a8ed2b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.50:8083/ready\": context deadline exceeded" Jan 31 10:29:42 crc kubenswrapper[4830]: I0131 10:29:42.911215 4830 patch_prober.go:28] interesting pod/logging-loki-gateway-74c87577db-fjtpt container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:42 crc kubenswrapper[4830]: I0131 10:29:42.911283 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" podUID="867e058e-8774-4ff8-af99-a8f35ac530ce" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.51:8083/ready\": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers)" Jan 31 10:29:42 crc kubenswrapper[4830]: I0131 10:29:42.911344 4830 patch_prober.go:28] interesting pod/logging-loki-gateway-74c87577db-fjtpt container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:8081/ready\": context deadline exceeded" start-of-body= Jan 31 10:29:42 crc kubenswrapper[4830]: I0131 10:29:42.911360 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" podUID="867e058e-8774-4ff8-af99-a8f35ac530ce" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.51:8081/ready\": context deadline exceeded" Jan 31 10:29:43 crc kubenswrapper[4830]: I0131 10:29:43.006446 4830 trace.go:236] Trace[1127189158]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-ingester-0" (31-Jan-2026 10:29:39.976) (total time: 3029ms): Jan 31 10:29:43 crc kubenswrapper[4830]: Trace[1127189158]: [3.029607587s] [3.029607587s] END Jan 31 10:29:43 crc kubenswrapper[4830]: I0131 10:29:43.090917 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-54dc59fd95-sv8r9" podUID="2a183ae3-dc4b-4f75-a9ca-4832bd5faf06" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.100:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:43 crc kubenswrapper[4830]: I0131 10:29:43.091097 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-54dc59fd95-sv8r9" podUID="2a183ae3-dc4b-4f75-a9ca-4832bd5faf06" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.100:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:43 crc kubenswrapper[4830]: I0131 10:29:43.227014 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-22grv" podUID="eb0ab04d-4e0a-4a84-965a-2c0513d6d79a" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.5:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:43 crc kubenswrapper[4830]: I0131 10:29:43.352472 4830 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ttnrg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:43 crc kubenswrapper[4830]: I0131 10:29:43.352540 4830 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ttnrg container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:43 crc kubenswrapper[4830]: I0131 10:29:43.352598 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" podUID="d1346d7f-25da-4035-9c88-1f96c034d795" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)" Jan 31 10:29:43 crc kubenswrapper[4830]: I0131 10:29:43.352539 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" podUID="d1346d7f-25da-4035-9c88-1f96c034d795" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:43 crc kubenswrapper[4830]: I0131 10:29:43.398236 4830 patch_prober.go:28] interesting pod/route-controller-manager-bcf89fb66-fxq4w container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:43 crc kubenswrapper[4830]: I0131 10:29:43.398301 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" podUID="9e3fd47c-6860-47d0-98ce-3654da25fdce" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:43 crc kubenswrapper[4830]: I0131 10:29:43.398292 4830 patch_prober.go:28] interesting pod/route-controller-manager-bcf89fb66-fxq4w container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:43 crc kubenswrapper[4830]: I0131 10:29:43.398350 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" podUID="9e3fd47c-6860-47d0-98ce-3654da25fdce" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:43 crc kubenswrapper[4830]: I0131 10:29:43.552884 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-kwwkw" podUID="1488b4ea-ba49-423e-a995-917dc9cbb9e2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:43 crc kubenswrapper[4830]: I0131 10:29:43.552950 4830 patch_prober.go:28] interesting pod/console-bbcf59d54-qmgsn container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:43 crc kubenswrapper[4830]: I0131 10:29:43.553021 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-bbcf59d54-qmgsn" podUID="afe486bd-6c62-42d6-ac04-9c2bb21204d7" containerName="console" probeResult="failure" output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:43 crc kubenswrapper[4830]: I0131 10:29:43.553038 4830 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-kwwkw" podUID="1488b4ea-ba49-423e-a995-917dc9cbb9e2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:43 crc kubenswrapper[4830]: I0131 10:29:43.553061 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="7b3b4d1e-8963-469f-abe7-204392275c48" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:43 crc kubenswrapper[4830]: I0131 10:29:43.553086 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="7b3b4d1e-8963-469f-abe7-204392275c48" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:43 crc kubenswrapper[4830]: I0131 10:29:43.702039 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw" podUID="3f5623d3-168a-4bca-9154-ecb4c81b5b3b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:43 crc kubenswrapper[4830]: I0131 10:29:43.702048 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw" podUID="3f5623d3-168a-4bca-9154-ecb4c81b5b3b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:43 crc kubenswrapper[4830]: I0131 10:29:43.784259 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hcpk8" podUID="17f5c61d-5997-482b-961a-0339cfe6c15c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:43 crc kubenswrapper[4830]: I0131 10:29:43.784661 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hcpk8" podUID="17f5c61d-5997-482b-961a-0339cfe6c15c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:43 crc kubenswrapper[4830]: I0131 10:29:43.866893 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-8wnqw" podUID="dafe4db4-4a74-4cb2-8e7f-496cfa1a1c5e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:43 crc kubenswrapper[4830]: I0131 10:29:43.948928 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cpwlp" podUID="47718a89-dc4c-4f5d-bb58-aec265aa68bf" containerName="manager" probeResult="failure" 
output="Get \"http://10.217.0.102:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:43 crc kubenswrapper[4830]: I0131 10:29:43.948928 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-8wnqw" podUID="dafe4db4-4a74-4cb2-8e7f-496cfa1a1c5e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:43 crc kubenswrapper[4830]: I0131 10:29:43.949216 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cpwlp" podUID="47718a89-dc4c-4f5d-bb58-aec265aa68bf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.030956 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-d9xtg" podUID="4d28fd37-b97c-447a-9165-d90d11fd4698" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.030957 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-d9xtg" podUID="4d28fd37-b97c-447a-9165-d90d11fd4698" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.194258 4830 patch_prober.go:28] interesting pod/controller-manager-7896c76d86-c5cgs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.194358 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" podUID="d85aeaa6-c7da-420f-b8d9-2d0983e2ab36" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.194419 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kgrns" podUID="758269b2-16c6-4f5a-8f9f-875659eede84" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.194466 4830 patch_prober.go:28] interesting pod/controller-manager-7896c76d86-c5cgs container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.194515 4830 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" podUID="d85aeaa6-c7da-420f-b8d9-2d0983e2ab36" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.194560 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-slc6p" podUID="bd972fba-0692-45af-b28c-db4929fe150a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.275928 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4tqzd" podUID="1891b74f-fe71-4020-98a3-5796e2a67ea2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.275952 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-slc6p" podUID="bd972fba-0692-45af-b28c-db4929fe150a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.275942 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kgrns" podUID="758269b2-16c6-4f5a-8f9f-875659eede84" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.276036 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4tqzd" podUID="1891b74f-fe71-4020-98a3-5796e2a67ea2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.387221 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sbhfn" podUID="0e056a0c-ee06-43aa-bf36-35f202f76b17" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.387273 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sbhfn" podUID="0e056a0c-ee06-43aa-bf36-35f202f76b17" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.512934 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sjf7r" podUID="617226b5-2b2c-4f6c-902d-9784c8a283de" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.512981 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sjf7r" podUID="617226b5-2b2c-4f6c-902d-9784c8a283de" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.618049 4830 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-l59nt container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.93:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.618120 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-l59nt" podUID="1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.93:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.618177 4830 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-l59nt container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.93:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.618195 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-l59nt" podUID="1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.93:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.768426 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-56876" podUID="2626e876-9148-4165-a735-a5a1733c014d" containerName="registry-server" probeResult="failure" output="command timed out" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.768486 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-g5pvp" podUID="35d308f6-fcf3-4b01-b26e-5c1848d6ee7d" containerName="registry-server" probeResult="failure" output="command timed out" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.768519 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-56876" podUID="2626e876-9148-4165-a735-a5a1733c014d" containerName="registry-server" probeResult="failure" output="command timed out" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.768740 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-g5pvp" podUID="35d308f6-fcf3-4b01-b26e-5c1848d6ee7d" containerName="registry-server" probeResult="failure" output="command timed out" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.778889 4830 patch_prober.go:28] interesting pod/thanos-querier-57c5b4b8d5-lsvdc container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.778921 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc" podUID="4158e29b-a0d9-40f2-904d-ffb63ba734f6" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.845002 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-ld2fb" podUID="f101dda8-ba4c-42c2-a8e3-9a5e53c2ec8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.845140 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k" podUID="1145e85a-d436-40c8-baef-ceb53625e06b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.885876 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-ld2fb" podUID="f101dda8-ba4c-42c2-a8e3-9a5e53c2ec8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.885906 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rkvx7" podUID="e681f66d-3695-4b59-9ef1-6f9bbf007ed2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.885920 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rkvx7" podUID="e681f66d-3695-4b59-9ef1-6f9bbf007ed2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.886048 4830 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-wtdqw container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.94:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.886075 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-wtdqw" podUID="0af185f3-0cfa-4299-8eee-0e523d87504c" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.94:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.967997 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gbjts" 
podUID="7ff06918-8b3c-48cb-bd11-1254b9bbc276" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:44 crc kubenswrapper[4830]: I0131 10:29:44.968155 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gbjts" podUID="7ff06918-8b3c-48cb-bd11-1254b9bbc276" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.074936 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gktql" podUID="21448bf1-0318-4469-baff-d35cf905337b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.127118 4830 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.127188 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.157918 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2l42c" podUID="388d9bc4-698e-4dea-8029-aa32433cf734" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.158006 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gktql" podUID="21448bf1-0318-4469-baff-d35cf905337b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.240924 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2l42c" podUID="388d9bc4-698e-4dea-8029-aa32433cf734" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.240939 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-57fbdcd888-cp9fj" podUID="2365408f-7d7a-482c-87c0-0452fa330e4e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:45 crc 
kubenswrapper[4830]: I0131 10:29:45.241058 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-57fbdcd888-cp9fj" podUID="2365408f-7d7a-482c-87c0-0452fa330e4e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.258264 4830 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-dbkt8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.258305 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8" podUID="48688d73-57bb-4105-8116-4853be571b01" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.258338 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.258406 4830 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-dbkt8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.258433 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8" podUID="48688d73-57bb-4105-8116-4853be571b01" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.259306 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.259302 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="prometheus-operator-admission-webhook" containerStatusID={"Type":"cri-o","ID":"54aa8ec469ea3a966faa7ccc7d68b904d98cfe2c3172796d1eb2782e8f440f84"} pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8" containerMessage="Container prometheus-operator-admission-webhook failed liveness probe, will be restarted" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.259657 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8" podUID="48688d73-57bb-4105-8116-4853be571b01" containerName="prometheus-operator-admission-webhook" containerID="cri-o://54aa8ec469ea3a966faa7ccc7d68b904d98cfe2c3172796d1eb2782e8f440f84" 
gracePeriod=30 Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.559144 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-czm79" podUID="68f255f0-5951-47f2-979e-af80607453e8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.559161 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-czm79" podUID="68f255f0-5951-47f2-979e-af80607453e8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.559287 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-55459579-xtkmd" podUID="328e9260-46e9-41a9-a42c-891fe870a5d1" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.88:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.559297 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-55459579-xtkmd" podUID="328e9260-46e9-41a9-a42c-891fe870a5d1" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.88:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.559331 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-564965969-62c8t" podUID="d4a8ef63-6ba0-4bb4-93b5-dc9fc1134bb5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.559333 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-564965969-62c8t" podUID="d4a8ef63-6ba0-4bb4-93b5-dc9fc1134bb5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.875977 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-4v2n6" podUID="d0107b00-a78b-432b-afc6-a9ccc1b3bf5b" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.876009 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-4v2n6" podUID="d0107b00-a78b-432b-afc6-a9ccc1b3bf5b" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.875970 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-4v2n6" podUID="d0107b00-a78b-432b-afc6-a9ccc1b3bf5b" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.876057 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-4v2n6" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.876020 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92" podUID="3951c2f7-8a23-4d78-9a26-1b89399bdb4e" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.876079 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-4v2n6" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.876087 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92" podUID="3951c2f7-8a23-4d78-9a26-1b89399bdb4e" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.876166 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-4v2n6" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.876176 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.878048 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"727ecac22e63391e070b11aadabf324369bf6f5aa72356556f5e4f8598e8f60c"} pod="metallb-system/frr-k8s-4v2n6" containerMessage="Container controller failed liveness probe, will be restarted" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.878083 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"48cc01afd187531d11a8e7950848cdfb1bbe3d5df848bd9f580f457ec1e94f6e"} pod="metallb-system/frr-k8s-4v2n6" containerMessage="Container frr failed liveness probe, will be restarted" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.878184 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-4v2n6" podUID="d0107b00-a78b-432b-afc6-a9ccc1b3bf5b" containerName="controller" containerID="cri-o://727ecac22e63391e070b11aadabf324369bf6f5aa72356556f5e4f8598e8f60c" gracePeriod=2 Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.879190 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr-k8s-webhook-server" containerStatusID={"Type":"cri-o","ID":"8142163e3ce80c3464ac0822fda30bf877ce271f5e4ceef098795181d0f6e7eb"} pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92" containerMessage="Container frr-k8s-webhook-server failed liveness probe, will be restarted" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.879227 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92" podUID="3951c2f7-8a23-4d78-9a26-1b89399bdb4e" containerName="frr-k8s-webhook-server" containerID="cri-o://8142163e3ce80c3464ac0822fda30bf877ce271f5e4ceef098795181d0f6e7eb" gracePeriod=10 Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 
10:29:45.961912 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-lhbbn" podUID="2683cf74-2506-4496-b132-4c274291727b" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:45 crc kubenswrapper[4830]: I0131 10:29:45.961942 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-lhbbn" podUID="2683cf74-2506-4496-b132-4c274291727b" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:46 crc kubenswrapper[4830]: I0131 10:29:46.352858 4830 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ttnrg container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:46 crc kubenswrapper[4830]: I0131 10:29:46.352915 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" podUID="d1346d7f-25da-4035-9c88-1f96c034d795" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:46 crc kubenswrapper[4830]: I0131 10:29:46.352923 4830 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ttnrg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:46 crc kubenswrapper[4830]: I0131 10:29:46.352955 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" podUID="d1346d7f-25da-4035-9c88-1f96c034d795" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:46 crc kubenswrapper[4830]: I0131 10:29:46.669287 4830 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-vm6jc container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.47:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:46 crc kubenswrapper[4830]: I0131 10:29:46.669399 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc" podUID="e5b91203-480c-424e-877a-5f2f437d1ada" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.47:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:46 crc kubenswrapper[4830]: I0131 10:29:46.669488 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc" Jan 31 10:29:46 
crc kubenswrapper[4830]: I0131 10:29:46.761491 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88" containerName="prometheus" probeResult="failure" output="command timed out" Jan 31 10:29:46 crc kubenswrapper[4830]: I0131 10:29:46.762052 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88" containerName="prometheus" probeResult="failure" output="command timed out" Jan 31 10:29:46 crc kubenswrapper[4830]: I0131 10:29:46.762309 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Jan 31 10:29:46 crc kubenswrapper[4830]: I0131 10:29:46.762337 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-9wzdf" podUID="09ac1675-c6eb-453a-83a5-94f0a04c9665" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 31 10:29:46 crc kubenswrapper[4830]: I0131 10:29:46.762438 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-9wzdf" Jan 31 10:29:46 crc kubenswrapper[4830]: I0131 10:29:46.765049 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-engine-88757d59b-r55jf" podUID="3d4efcc1-d98d-466c-a7ee-6a6aa3766681" containerName="heat-engine" probeResult="failure" output="command timed out" Jan 31 10:29:46 crc kubenswrapper[4830]: I0131 10:29:46.765219 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-jwvm4" podUID="14550547-ce63-48cc-800e-b74235d0daa1" containerName="registry-server" probeResult="failure" output="command timed out" Jan 31 10:29:46 crc kubenswrapper[4830]: I0131 10:29:46.765276 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-fcmv2" podUID="c361702a-d6db-4925-809d-f08c6dd88a7d" containerName="registry-server" probeResult="failure" output="command timed out" Jan 31 10:29:46 crc kubenswrapper[4830]: I0131 10:29:46.765288 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-engine-88757d59b-r55jf" podUID="3d4efcc1-d98d-466c-a7ee-6a6aa3766681" containerName="heat-engine" probeResult="failure" output="command timed out" Jan 31 10:29:46 crc kubenswrapper[4830]: I0131 10:29:46.765342 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-jwvm4" podUID="14550547-ce63-48cc-800e-b74235d0daa1" containerName="registry-server" probeResult="failure" output="command timed out" Jan 31 10:29:46 crc kubenswrapper[4830]: I0131 10:29:46.766762 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-fcmv2" podUID="c361702a-d6db-4925-809d-f08c6dd88a7d" containerName="registry-server" probeResult="failure" output="command timed out" Jan 31 10:29:46 crc kubenswrapper[4830]: I0131 10:29:46.805197 4830 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-f89hf container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.48:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:46 crc kubenswrapper[4830]: I0131 10:29:46.805246 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-f89hf" 
podUID="8aa52b7a-444c-4f07-9c3a-c2223e966e34" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.48:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:46 crc kubenswrapper[4830]: I0131 10:29:46.805315 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76788598db-f89hf" Jan 31 10:29:46 crc kubenswrapper[4830]: I0131 10:29:46.918019 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-4v2n6" podUID="d0107b00-a78b-432b-afc6-a9ccc1b3bf5b" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:46 crc kubenswrapper[4830]: I0131 10:29:46.933772 4830 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-8k7rn container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.49:3101/loki/api/v1/status/buildinfo\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:46 crc kubenswrapper[4830]: I0131 10:29:46.933850 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn" podUID="6a2f00bb-9954-46d0-901b-3d9a82939850" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.49:3101/loki/api/v1/status/buildinfo\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.387872 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-x7g8x" podUID="1d713893-e8db-40ba-872c-e9d1650a56d0" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.387951 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-x7g8x" Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.387959 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-x7g8x" podUID="1d713893-e8db-40ba-872c-e9d1650a56d0" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.388092 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-x7g8x" Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.389487 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"d8ae027c1b15e1df367ce806f666e3b8850ae99e4b45a28fc07df2f9232d9bff"} pod="metallb-system/speaker-x7g8x" containerMessage="Container speaker failed liveness probe, will be restarted" Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.389562 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-x7g8x" podUID="1d713893-e8db-40ba-872c-e9d1650a56d0" containerName="speaker" containerID="cri-o://d8ae027c1b15e1df367ce806f666e3b8850ae99e4b45a28fc07df2f9232d9bff" gracePeriod=2 Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.611940 4830 prober.go:107] "Probe 
failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-rhvlq" podUID="30c7c034-9492-4051-9cc9-235a6d87bd03" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.42:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.670652 4830 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-vm6jc container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.47:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.670741 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc" podUID="e5b91203-480c-424e-877a-5f2f437d1ada" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.47:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.808172 4830 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-f89hf container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.48:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.808770 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-f89hf" podUID="8aa52b7a-444c-4f07-9c3a-c2223e966e34" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.48:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.822042 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="adf0d571-b5dc-4d7c-9e8d-8813354a5128" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.8:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.822110 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="adf0d571-b5dc-4d7c-9e8d-8813354a5128" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.8:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.847032 4830 patch_prober.go:28] interesting pod/logging-loki-gateway-74c87577db-hwvhd container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.847084 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" podUID="fd432483-7467-4c9d-a13e-8ee908a8ed2b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.50:8081/ready\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.847032 4830 patch_prober.go:28] interesting pod/logging-loki-gateway-74c87577db-hwvhd container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.847147 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" podUID="fd432483-7467-4c9d-a13e-8ee908a8ed2b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.50:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.861026 4830 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-hkd74 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.861137 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" podUID="00ab4f1c-2cc4-46b0-9e22-df58e5327352" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.861183 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.880369 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"9bb0f1093a37424441fc8374c5fb71cb747c472d42f4f79a9b45c2da6c131ac0"} pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.880409 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" podUID="00ab4f1c-2cc4-46b0-9e22-df58e5327352" containerName="authentication-operator" containerID="cri-o://9bb0f1093a37424441fc8374c5fb71cb747c472d42f4f79a9b45c2da6c131ac0" gracePeriod=30 Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.910294 4830 patch_prober.go:28] interesting pod/logging-loki-gateway-74c87577db-fjtpt container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.910369 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" podUID="867e058e-8774-4ff8-af99-a8f35ac530ce" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.51:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" 
Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.910510 4830 patch_prober.go:28] interesting pod/logging-loki-gateway-74c87577db-fjtpt container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.910568 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" podUID="867e058e-8774-4ff8-af99-a8f35ac530ce" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.51:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.968873 4830 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:47 crc kubenswrapper[4830]: I0131 10:29:47.968938 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="70d5f51c-1a87-45fb-8822-7aa0997fceb1" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.026966 4830 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.027030 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.123982 4830 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.124059 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="efadb8be-37d4-4e2b-9df2-3d1301ae81a8" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.124119 4830 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-58x6p container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.124138 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" podUID="b6c3d452-2742-4f91-9857-5f5e0b50f348" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.124164 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.124186 4830 patch_prober.go:28] interesting pod/metrics-server-6cdc866fc6-9thf6 container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.84:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.124251 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6" podUID="45903f73-e8ae-4e54-b650-f0090e9436b3" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.84:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.124328 4830 patch_prober.go:28] interesting pod/metrics-server-6cdc866fc6-9thf6 container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.84:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.124351 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6" podUID="45903f73-e8ae-4e54-b650-f0090e9436b3" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.84:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.124392 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.124583 4830 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-58x6p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.124608 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" podUID="b6c3d452-2742-4f91-9857-5f5e0b50f348" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.124661 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.138941 4830 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="marketplace-operator" containerStatusID={"Type":"cri-o","ID":"d85017aaf93892f489ab9319825e71a9a965d45d582b884dfab7617b94a784eb"} pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" containerMessage="Container marketplace-operator failed liveness probe, will be restarted" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.139003 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" podUID="b6c3d452-2742-4f91-9857-5f5e0b50f348" containerName="marketplace-operator" containerID="cri-o://d85017aaf93892f489ab9319825e71a9a965d45d582b884dfab7617b94a784eb" gracePeriod=30 Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.139898 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="metrics-server" containerStatusID={"Type":"cri-o","ID":"c4da751ed7e78efc6b02a950d82b969bca3c58873a46feefa2b13814f5949365"} pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6" containerMessage="Container metrics-server failed liveness probe, will be restarted" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.139979 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6" podUID="45903f73-e8ae-4e54-b650-f0090e9436b3" containerName="metrics-server" containerID="cri-o://c4da751ed7e78efc6b02a950d82b969bca3c58873a46feefa2b13814f5949365" gracePeriod=170 Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.268864 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-22grv" podUID="eb0ab04d-4e0a-4a84-965a-2c0513d6d79a" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.5:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.268876 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-22grv" podUID="eb0ab04d-4e0a-4a84-965a-2c0513d6d79a" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.5:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.494829 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.495077 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.495125 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-x7g8x" podUID="1d713893-e8db-40ba-872c-e9d1650a56d0" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.495264 4830 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler 
namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.495282 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.495305 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.496502 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.496586 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.496668 4830 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-gkw8v container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.70:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.496694 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" podUID="4889a479-52c6-494e-a902-c7653ffef4a7" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.70:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.496803 4830 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-gkw8v container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.70:5000/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.496869 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-gkw8v" podUID="4889a479-52c6-494e-a902-c7653ffef4a7" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.70:5000/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.496899 4830 patch_prober.go:28] interesting pod/monitoring-plugin-546c959798-jmj57 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" start-of-body= Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.496937 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-546c959798-jmj57" podUID="fadaea73-e4ec-47a5-b6df-c93b1ce5645f" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.497018 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-546c959798-jmj57" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.515351 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="7b3b4d1e-8963-469f-abe7-204392275c48" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.515375 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="7b3b4d1e-8963-469f-abe7-204392275c48" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.518294 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-scheduler" containerStatusID={"Type":"cri-o","ID":"1dc96f3d1e085f925a6a1b73ef1312bd85072065059f20eb6c11f7d044635f8b"} pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" containerMessage="Container kube-scheduler failed liveness probe, will be restarted" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.518620 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" containerID="cri-o://1dc96f3d1e085f925a6a1b73ef1312bd85072065059f20eb6c11f7d044635f8b" gracePeriod=30 Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.743897 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-f44b7d679-6khcx" podUID="f99258ad-5714-491f-bdad-d7196ed9833a" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.218:8080/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.747882 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-f44b7d679-6khcx" podUID="f99258ad-5714-491f-bdad-d7196ed9833a" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.218:8080/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.747909 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-f44b7d679-6khcx" podUID="f99258ad-5714-491f-bdad-d7196ed9833a" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.218:8080/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.747959 4830 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-f44b7d679-6khcx" podUID="f99258ad-5714-491f-bdad-d7196ed9833a" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.218:8080/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.759288 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-9wzdf" podUID="09ac1675-c6eb-453a-83a5-94f0a04c9665" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.872855 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.872860 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.872907 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.873015 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.873083 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-5444994796-vbcgc" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.873170 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-vbcgc" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.874573 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"80f837c980bbb2106b85f0e8ae5ce486b89cde72328711691a5e7a58dca33a3f"} pod="openshift-ingress/router-default-5444994796-vbcgc" containerMessage="Container router failed liveness probe, will be restarted" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.874612 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" containerID="cri-o://80f837c980bbb2106b85f0e8ae5ce486b89cde72328711691a5e7a58dca33a3f" gracePeriod=10 Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.991800 4830 patch_prober.go:28] interesting pod/console-operator-58897d9998-pkx9p container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.991847 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-pkx9p" podUID="691a8aff-6fcd-400a-ace9-fb3fa8778206" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.992951 4830 patch_prober.go:28] interesting pod/console-operator-58897d9998-pkx9p container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:48 crc kubenswrapper[4830]: I0131 10:29:48.992990 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-pkx9p" podUID="691a8aff-6fcd-400a-ace9-fb3fa8778206" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.300324 4830 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.300655 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="07a77a4a-344b-45bb-8488-a536a94185b1" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.434918 4830 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ttnrg container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.434961 4830 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ttnrg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.435013 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" podUID="d1346d7f-25da-4035-9c88-1f96c034d795" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.435088 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.435131 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" podUID="d1346d7f-25da-4035-9c88-1f96c034d795" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.435240 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.436605 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"85d3d5001bb1210574c9fdb22694fa1d3ee858ab7e8b183782ae2dc18e10a849"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.436646 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" podUID="d1346d7f-25da-4035-9c88-1f96c034d795" containerName="openshift-config-operator" containerID="cri-o://85d3d5001bb1210574c9fdb22694fa1d3ee858ab7e8b183782ae2dc18e10a849" gracePeriod=30 Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.447888 4830 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n4rml container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.447946 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" podUID="cf057c5a-deef-4c01-bd58-f761ec86e2f4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.448036 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.447913 4830 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n4rml container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.448180 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" podUID="cf057c5a-deef-4c01-bd58-f761ec86e2f4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:49 crc kubenswrapper[4830]: 
I0131 10:29:49.448291 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.449542 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="catalog-operator" containerStatusID={"Type":"cri-o","ID":"4ee9412f00cc39ee85a53e00735952960dbf6826e8a88f21b12231d990adad8a"} pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" containerMessage="Container catalog-operator failed liveness probe, will be restarted" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.449593 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" podUID="cf057c5a-deef-4c01-bd58-f761ec86e2f4" containerName="catalog-operator" containerID="cri-o://4ee9412f00cc39ee85a53e00735952960dbf6826e8a88f21b12231d990adad8a" gracePeriod=30 Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.575934 4830 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-ckvgq container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.575969 4830 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lp7ks container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.575984 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" podUID="007a4117-0dfe-485e-85df-6bc68e0cee5e" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.575942 4830 patch_prober.go:28] interesting pod/monitoring-plugin-546c959798-jmj57 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.576017 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" podUID="e80e8b17-711d-46d8-a240-4fa52e093545" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.576050 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-546c959798-jmj57" podUID="fadaea73-e4ec-47a5-b6df-c93b1ce5645f" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:49 crc 
kubenswrapper[4830]: I0131 10:29:49.576067 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.576087 4830 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lp7ks container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.576103 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" podUID="e80e8b17-711d-46d8-a240-4fa52e093545" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.576132 4830 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lb8hp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.576143 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" podUID="13f1c33b-cede-4fb1-9651-15d0dcd36173" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.576162 4830 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lb8hp container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.576172 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" podUID="13f1c33b-cede-4fb1-9651-15d0dcd36173" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.576192 4830 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-ckvgq container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.576202 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" podUID="007a4117-0dfe-485e-85df-6bc68e0cee5e" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.576242 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.588046 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"4bb4b393d788389636a749f9855b6b5af59603d34816a47e960c64dbe48662c7"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" containerMessage="Container packageserver failed liveness probe, will be restarted" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.588353 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" podUID="e80e8b17-711d-46d8-a240-4fa52e093545" containerName="packageserver" containerID="cri-o://4bb4b393d788389636a749f9855b6b5af59603d34816a47e960c64dbe48662c7" gracePeriod=30 Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.647975 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24" podUID="0b519925-01de-4cf0-8ff8-0f97137dd3d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.648131 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.761407 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="2ca5d2f1-673e-4173-848a-8d32d33b8bcc" containerName="galera" probeResult="failure" output="command timed out" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.761542 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.762490 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="2ca5d2f1-673e-4173-848a-8d32d33b8bcc" containerName="galera" probeResult="failure" output="command timed out" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.762570 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.763118 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-gk8dv" podUID="e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1" containerName="ovsdb-server" probeResult="failure" output="command timed out" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.763824 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"e774409d73ea3f7c6d1de27e1c877dc73032596ee68ca15941563cc71678e875"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.764764 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-ovs-gk8dv" podUID="e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1" containerName="ovs-vswitchd" probeResult="failure" output="command timed out" Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 
10:29:49.764811 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="f2ea7efa-c50b-4208-a9df-2c3fc454762b" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out"
Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.764846 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-ps27t" podUID="dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73" containerName="ovn-controller" probeResult="failure" output="command timed out"
Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.766369 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-ovs-gk8dv" podUID="e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1" containerName="ovsdb-server" probeResult="failure" output="command timed out"
Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.766488 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-gk8dv" podUID="e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1" containerName="ovs-vswitchd" probeResult="failure" output="command timed out"
Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.767869 4830 patch_prober.go:28] interesting pod/oauth-openshift-6768bc9c9c-5t4z8 container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.63:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.767901 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" podUID="3549201c-94c2-4a29-9e62-b498b4a97ece" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.63:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.767934 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8"
Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.767926 4830 patch_prober.go:28] interesting pod/oauth-openshift-6768bc9c9c-5t4z8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.63:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.768007 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" podUID="3549201c-94c2-4a29-9e62-b498b4a97ece" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.63:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.768074 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8"
Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.771293 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ps27t" podUID="dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73" containerName="ovn-controller" probeResult="failure" output="command timed out"
Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.771364 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-nc25d" podUID="b0b831b3-e535-4264-b46c-c93f7edd51d2" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.771401 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-nc25d" podUID="b0b831b3-e535-4264-b46c-c93f7edd51d2" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.771443 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-nc25d"
Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.771467 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-index-nc25d"
Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.773282 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"384a0831544a2cb790ebf79501804b539d1b77cdf911870336931f1b831b232d"} pod="openstack-operators/openstack-operator-index-nc25d" containerMessage="Container registry-server failed liveness probe, will be restarted"
Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.773326 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-nc25d" podUID="b0b831b3-e535-4264-b46c-c93f7edd51d2" containerName="registry-server" containerID="cri-o://384a0831544a2cb790ebf79501804b539d1b77cdf911870336931f1b831b232d" gracePeriod=30
Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.778700 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="oauth-openshift" containerStatusID={"Type":"cri-o","ID":"3ea2639af37448a2eefa4b679484a5226ded1742fea84b95ff9c683ad7e4fd1e"} pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" containerMessage="Container oauth-openshift failed liveness probe, will be restarted"
Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.779131 4830 patch_prober.go:28] interesting pod/thanos-querier-57c5b4b8d5-lsvdc container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.82:9091/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.779259 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc" podUID="4158e29b-a0d9-40f2-904d-ffb63ba734f6" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.82:9091/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.825863 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="945c030b-2a43-431b-b898-d3a28b4e3821" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.209:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.825914 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="945c030b-2a43-431b-b898-d3a28b4e3821" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.209:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:49 crc kubenswrapper[4830]: E0131 10:29:49.887311 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="384a0831544a2cb790ebf79501804b539d1b77cdf911870336931f1b831b232d" cmd=["grpc_health_probe","-addr=:50051"]
Jan 31 10:29:49 crc kubenswrapper[4830]: E0131 10:29:49.893160 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="384a0831544a2cb790ebf79501804b539d1b77cdf911870336931f1b831b232d" cmd=["grpc_health_probe","-addr=:50051"]
Jan 31 10:29:49 crc kubenswrapper[4830]: E0131 10:29:49.894961 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="384a0831544a2cb790ebf79501804b539d1b77cdf911870336931f1b831b232d" cmd=["grpc_health_probe","-addr=:50051"]
Jan 31 10:29:49 crc kubenswrapper[4830]: E0131 10:29:49.895025 4830 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack-operators/openstack-operator-index-nc25d" podUID="b0b831b3-e535-4264-b46c-c93f7edd51d2" containerName="registry-server"
Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.914986 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 10:29:49 crc kubenswrapper[4830]: I0131 10:29:49.928279 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.159480 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4v2n6" event={"ID":"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b","Type":"ContainerDied","Data":"727ecac22e63391e070b11aadabf324369bf6f5aa72356556f5e4f8598e8f60c"}
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.162173 4830 generic.go:334] "Generic (PLEG): container finished" podID="d0107b00-a78b-432b-afc6-a9ccc1b3bf5b" containerID="727ecac22e63391e070b11aadabf324369bf6f5aa72356556f5e4f8598e8f60c" exitCode=137
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.167180 4830 generic.go:334] "Generic (PLEG): container finished" podID="ce3329e2-9eca-4a04-bf1d-0578e12beaa5" containerID="094038c5117902e3dfa535713a374ec621d40c8bc0b99cd163b60a4a2eeca820" exitCode=1
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.167232 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" event={"ID":"ce3329e2-9eca-4a04-bf1d-0578e12beaa5","Type":"ContainerDied","Data":"094038c5117902e3dfa535713a374ec621d40c8bc0b99cd163b60a4a2eeca820"}
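
The ExecSync errors above are the exec-probe path: for cmd=["grpc_health_probe","-addr=:50051"] the kubelet asks the CRI runtime to run the command inside the container, and because this container (384a0831…) is already being killed after its failed liveness probe, CRI-O refuses to register a new exec PID, hence "container is stopping". A sketch of the exec probe implied by the logged command; the command matches the log, the timeout is an assumption:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Exec probes run a command in the container via the CRI ExecSync
	// call; a non-zero exit (or an error like the one logged above)
	// counts as a probe failure. The timeout below is assumed; exec
	// probes that overrun it are logged as output="command timed out".
	probe := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{
				Command: []string{"grpc_health_probe", "-addr=:50051"},
			},
		},
		TimeoutSeconds: 5, // assumed
	}
	fmt.Printf("%+v\n", probe)
}
```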
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.192344 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-4v2n6" podUID="d0107b00-a78b-432b-afc6-a9ccc1b3bf5b" containerName="frr" containerID="cri-o://48cc01afd187531d11a8e7950848cdfb1bbe3d5df848bd9f580f457ec1e94f6e" gracePeriod=2
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.213316 4830 scope.go:117] "RemoveContainer" containerID="094038c5117902e3dfa535713a374ec621d40c8bc0b99cd163b60a4a2eeca820"
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.327909 4830 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.327981 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.439925 4830 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ttnrg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.440285 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" podUID="d1346d7f-25da-4035-9c88-1f96c034d795" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.448442 4830 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n4rml container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.448499 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" podUID="cf057c5a-deef-4c01-bd58-f761ec86e2f4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.453157 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-rhvlq" podUID="30c7c034-9492-4051-9cc9-235a6d87bd03" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.552710 4830 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-hw8mv container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.65:9443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.552800 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw8mv" podUID="a580c5e1-30c2-40b1-993d-c375cc99e2f2" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.65:9443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.576486 4830 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lp7ks container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.576549 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" podUID="e80e8b17-711d-46d8-a240-4fa52e093545" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.687297 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp"
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.687342 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp"
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.689937 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24" podUID="0b519925-01de-4cf0-8ff8-0f97137dd3d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.765059 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="f37f41b4-3b56-45f9-a368-0f772bcf3002" containerName="galera" probeResult="failure" output="command timed out"
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.765121 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-9wzdf" podUID="09ac1675-c6eb-453a-83a5-94f0a04c9665" containerName="nmstate-handler" probeResult="failure" output="command timed out"
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.765176 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.765132 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="f37f41b4-3b56-45f9-a368-0f772bcf3002" containerName="galera" probeResult="failure" output="command timed out"
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.765229 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.765343 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="2ca5d2f1-673e-4173-848a-8d32d33b8bcc" containerName="galera" probeResult="failure" output="command timed out"
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.766377 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88" containerName="prometheus" probeResult="failure" output="command timed out"
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.801924 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" podUID="250c9f1b-d78c-488e-b28e-6c2b783edd9b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.802039 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm"
Jan 31 10:29:50 crc kubenswrapper[4830]: I0131 10:29:50.816051 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"5e7b646f4ff6e1b24d55539a3bc21143cce21d3f36a569975a8acf1b82a40d40"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted"
Jan 31 10:29:51 crc kubenswrapper[4830]: I0131 10:29:51.265268 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-api-5677f68f94-9mmb8" podUID="99dbef57-35a0-4840-a293-fefe87379a4b" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.1.18:8004/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:51 crc kubenswrapper[4830]: I0131 10:29:51.266664 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-5677f68f94-9mmb8" podUID="99dbef57-35a0-4840-a293-fefe87379a4b" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.1.18:8004/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:51 crc kubenswrapper[4830]: I0131 10:29:51.287420 4830 generic.go:334] "Generic (PLEG): container finished" podID="1d713893-e8db-40ba-872c-e9d1650a56d0" containerID="d8ae027c1b15e1df367ce806f666e3b8850ae99e4b45a28fc07df2f9232d9bff" exitCode=137
Jan 31 10:29:51 crc kubenswrapper[4830]: I0131 10:29:51.287495 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-x7g8x" event={"ID":"1d713893-e8db-40ba-872c-e9d1650a56d0","Type":"ContainerDied","Data":"d8ae027c1b15e1df367ce806f666e3b8850ae99e4b45a28fc07df2f9232d9bff"}
Jan 31 10:29:51 crc kubenswrapper[4830]: I0131 10:29:51.387902 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-cfnapi-546fb56cb7-54z2g" podUID="bcd98bf8-a064-4c62-9847-37dd7939889b" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.1.19:8000/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:51 crc kubenswrapper[4830]: I0131 10:29:51.387904 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-546fb56cb7-54z2g" podUID="bcd98bf8-a064-4c62-9847-37dd7939889b" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.1.19:8000/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:51 crc kubenswrapper[4830]: I0131 10:29:51.470202 4830 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 10:29:51 crc kubenswrapper[4830]: I0131 10:29:51.470270 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:51 crc kubenswrapper[4830]: I0131 10:29:51.760111 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="f37f41b4-3b56-45f9-a368-0f772bcf3002" containerName="galera" probeResult="failure" output="command timed out"
Jan 31 10:29:51 crc kubenswrapper[4830]: I0131 10:29:51.761858 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" podUID="e87ff23b-1ce8-4556-8998-7fc4dd84775c" containerName="nbdb" probeResult="failure" output="command timed out"
Jan 31 10:29:51 crc kubenswrapper[4830]: I0131 10:29:51.763856 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88" containerName="prometheus" probeResult="failure" output="command timed out"
Jan 31 10:29:51 crc kubenswrapper[4830]: I0131 10:29:51.764372 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-sgnqp" podUID="e87ff23b-1ce8-4556-8998-7fc4dd84775c" containerName="sbdb" probeResult="failure" output="command timed out"
Jan 31 10:29:51 crc kubenswrapper[4830]: I0131 10:29:51.847042 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" podUID="250c9f1b-d78c-488e-b28e-6c2b783edd9b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:52 crc kubenswrapper[4830]: I0131 10:29:52.324004 4830 generic.go:334] "Generic (PLEG): container finished" podID="d0107b00-a78b-432b-afc6-a9ccc1b3bf5b" containerID="48cc01afd187531d11a8e7950848cdfb1bbe3d5df848bd9f580f457ec1e94f6e" exitCode=143
Jan 31 10:29:52 crc kubenswrapper[4830]: I0131 10:29:52.324108 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4v2n6" event={"ID":"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b","Type":"ContainerDied","Data":"48cc01afd187531d11a8e7950848cdfb1bbe3d5df848bd9f580f457ec1e94f6e"}
Jan 31 10:29:52 crc kubenswrapper[4830]: I0131 10:29:52.327065 4830 generic.go:334] "Generic (PLEG): container finished" podID="48688d73-57bb-4105-8116-4853be571b01" containerID="54aa8ec469ea3a966faa7ccc7d68b904d98cfe2c3172796d1eb2782e8f440f84" exitCode=0
Jan 31 10:29:52 crc kubenswrapper[4830]: I0131 10:29:52.327119 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8" event={"ID":"48688d73-57bb-4105-8116-4853be571b01","Type":"ContainerDied","Data":"54aa8ec469ea3a966faa7ccc7d68b904d98cfe2c3172796d1eb2782e8f440f84"}
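
The exitCode values in the PLEG "container finished" events follow the shell's 128+signal convention: 143 = 128+15 (SIGTERM, a stop that completed within the grace period), 137 = 128+9 (SIGKILL, e.g. the grace period expired or the process was OOM-killed), while 0 and 1 are ordinary process exits. A small decoder, as a sketch:

```go
package main

import (
	"fmt"
	"syscall"
)

// signalFromExitCode decodes the 128+N convention used by the
// exitCode values in the PLEG events above (143 -> SIGTERM,
// 137 -> SIGKILL). Codes at or below 128 are plain exits.
func signalFromExitCode(code int) (syscall.Signal, bool) {
	if code > 128 && code < 128+64 {
		return syscall.Signal(code - 128), true
	}
	return 0, false
}

func main() {
	for _, code := range []int{143, 137, 1, 0} {
		if sig, ok := signalFromExitCode(code); ok {
			fmt.Printf("exitCode=%d -> killed by signal %d (%s)\n", code, int(sig), sig)
		} else {
			fmt.Printf("exitCode=%d -> process exited on its own\n", code)
		}
	}
}
```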
Jan 31 10:29:52 crc kubenswrapper[4830]: I0131 10:29:52.393079 4830 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ttnrg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 10:29:52 crc kubenswrapper[4830]: I0131 10:29:52.393158 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" podUID="d1346d7f-25da-4035-9c88-1f96c034d795" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:52 crc kubenswrapper[4830]: I0131 10:29:52.502324 4830 patch_prober.go:28] interesting pod/console-bbcf59d54-qmgsn container/console namespace/openshift-console: Liveness probe status=failure output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 10:29:52 crc kubenswrapper[4830]: I0131 10:29:52.502379 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/console-bbcf59d54-qmgsn" podUID="afe486bd-6c62-42d6-ac04-9c2bb21204d7" containerName="console" probeResult="failure" output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:52 crc kubenswrapper[4830]: I0131 10:29:52.502436 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/console-bbcf59d54-qmgsn"
Jan 31 10:29:52 crc kubenswrapper[4830]: I0131 10:29:52.550878 4830 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 10:29:52 crc kubenswrapper[4830]: I0131 10:29:52.550964 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:52 crc kubenswrapper[4830]: I0131 10:29:52.561443 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console" containerStatusID={"Type":"cri-o","ID":"1a17af186cd49559857c4ee4b13ab37df2f7b3afdf6c5f13f5fe7127854f599d"} pod="openshift-console/console-bbcf59d54-qmgsn" containerMessage="Container console failed liveness probe, will be restarted"
Jan 31 10:29:52 crc kubenswrapper[4830]: I0131 10:29:52.847816 4830 patch_prober.go:28] interesting pod/logging-loki-gateway-74c87577db-hwvhd container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 10:29:52 crc kubenswrapper[4830]: I0131 10:29:52.848254 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" podUID="fd432483-7467-4c9d-a13e-8ee908a8ed2b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.50:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:52 crc kubenswrapper[4830]: I0131 10:29:52.847837 4830 patch_prober.go:28] interesting pod/logging-loki-gateway-74c87577db-hwvhd container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 10:29:52 crc kubenswrapper[4830]: I0131 10:29:52.849447 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" podUID="fd432483-7467-4c9d-a13e-8ee908a8ed2b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.50:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:52 crc kubenswrapper[4830]: I0131 10:29:52.910508 4830 patch_prober.go:28] interesting pod/logging-loki-gateway-74c87577db-fjtpt container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:52.910572 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" podUID="867e058e-8774-4ff8-af99-a8f35ac530ce" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.51:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:52.910626 4830 patch_prober.go:28] interesting pod/logging-loki-gateway-74c87577db-fjtpt container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:52.910639 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" podUID="867e058e-8774-4ff8-af99-a8f35ac530ce" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.51:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.049954 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-54dc59fd95-sv8r9" podUID="2a183ae3-dc4b-4f75-a9ca-4832bd5faf06" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.100:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.050159 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-54dc59fd95-sv8r9"
Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.397562 4830 patch_prober.go:28] interesting pod/route-controller-manager-bcf89fb66-fxq4w container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.398230 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" podUID="9e3fd47c-6860-47d0-98ce-3654da25fdce" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.398323 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w"
Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.398375 4830 patch_prober.go:28] interesting pod/route-controller-manager-bcf89fb66-fxq4w container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.398455 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" podUID="9e3fd47c-6860-47d0-98ce-3654da25fdce" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:53 crc kubenswrapper[4830]: E0131 10:29:53.418371 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="384a0831544a2cb790ebf79501804b539d1b77cdf911870336931f1b831b232d" cmd=["grpc_health_probe","-addr=:50051"]
Jan 31 10:29:53 crc kubenswrapper[4830]: E0131 10:29:53.421543 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="384a0831544a2cb790ebf79501804b539d1b77cdf911870336931f1b831b232d" cmd=["grpc_health_probe","-addr=:50051"]
Jan 31 10:29:53 crc kubenswrapper[4830]: E0131 10:29:53.424270 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="384a0831544a2cb790ebf79501804b539d1b77cdf911870336931f1b831b232d" cmd=["grpc_health_probe","-addr=:50051"]
Jan 31 10:29:53 crc kubenswrapper[4830]: E0131 10:29:53.424373 4830 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack-operators/openstack-operator-index-nc25d" podUID="b0b831b3-e535-4264-b46c-c93f7edd51d2" containerName="registry-server"
Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.442861 4830 generic.go:334] "Generic (PLEG): container finished" podID="cf057c5a-deef-4c01-bd58-f761ec86e2f4" containerID="4ee9412f00cc39ee85a53e00735952960dbf6826e8a88f21b12231d990adad8a" exitCode=0
Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.442964 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" event={"ID":"cf057c5a-deef-4c01-bd58-f761ec86e2f4","Type":"ContainerDied","Data":"4ee9412f00cc39ee85a53e00735952960dbf6826e8a88f21b12231d990adad8a"}
Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.445913 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" containerStatusID={"Type":"cri-o","ID":"788f52e16faad612c586019f97fd0e1c157ee62484db497fa5c83f31c107360d"} pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" containerMessage="Container route-controller-manager failed liveness probe, will be restarted"
Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.446031 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" podUID="9e3fd47c-6860-47d0-98ce-3654da25fdce" containerName="route-controller-manager" containerID="cri-o://788f52e16faad612c586019f97fd0e1c157ee62484db497fa5c83f31c107360d" gracePeriod=30
Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.512098 4830 patch_prober.go:28] interesting pod/console-bbcf59d54-qmgsn container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.512216 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-bbcf59d54-qmgsn" podUID="afe486bd-6c62-42d6-ac04-9c2bb21204d7" containerName="console" probeResult="failure" output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.512342 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-bbcf59d54-qmgsn"
Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.512373 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-kwwkw" podUID="1488b4ea-ba49-423e-a995-917dc9cbb9e2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.515280 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="7b3b4d1e-8963-469f-abe7-204392275c48" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.515313 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="7b3b4d1e-8963-469f-abe7-204392275c48" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
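
Each "failed liveness probe, will be restarted" message above is followed by a "Killing container with a grace period" line: the kubelet sends SIGTERM, waits up to the logged gracePeriod, then sends SIGKILL, which is the exitCode=143 versus exitCode=137 split seen in the PLEG events. The gracePeriod comes from the pod's terminationGracePeriodSeconds (30 matches the Kubernetes default; the frr-k8s pod used 2). A sketch of where that knob lives, with illustrative values:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int64Ptr(v int64) *int64 { return &v }

func main() {
	// The pod-level setting behind the logged gracePeriod values:
	// kubelet passes it to the runtime when stopping the container.
	// The values here are illustrative, not read from the cluster.
	spec := corev1.PodSpec{
		TerminationGracePeriodSeconds: int64Ptr(30), // matches "gracePeriod=30" above
		Containers: []corev1.Container{{Name: "registry-server"}},
	}
	fmt.Println(*spec.TerminationGracePeriodSeconds)
}
```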
probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.663444 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw" podUID="3f5623d3-168a-4bca-9154-ecb4c81b5b3b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.663614 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw" Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.722970 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hcpk8" podUID="17f5c61d-5997-482b-961a-0339cfe6c15c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.723118 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hcpk8" Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.817019 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-8wnqw" podUID="dafe4db4-4a74-4cb2-8e7f-496cfa1a1c5e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.817203 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-8wnqw" Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.858911 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cpwlp" podUID="47718a89-dc4c-4f5d-bb58-aec265aa68bf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.859092 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cpwlp" Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.978962 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-d9xtg" podUID="4d28fd37-b97c-447a-9165-d90d11fd4698" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:53 crc kubenswrapper[4830]: I0131 10:29:53.979991 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-d9xtg" Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.092052 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-54dc59fd95-sv8r9" podUID="2a183ae3-dc4b-4f75-a9ca-4832bd5faf06" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.100:8081/readyz\": context deadline 
exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.174035 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kgrns" podUID="758269b2-16c6-4f5a-8f9f-875659eede84" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.174201 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kgrns" Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.174183 4830 patch_prober.go:28] interesting pod/controller-manager-7896c76d86-c5cgs container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.174282 4830 patch_prober.go:28] interesting pod/controller-manager-7896c76d86-c5cgs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.174286 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" podUID="d85aeaa6-c7da-420f-b8d9-2d0983e2ab36" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.174357 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" podUID="d85aeaa6-c7da-420f-b8d9-2d0983e2ab36" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.174378 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.182814 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller-manager" containerStatusID={"Type":"cri-o","ID":"f32553c7b295719f56496bf853a26b7c14fef0d6e4969159c919977278f26085"} pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" containerMessage="Container controller-manager failed liveness probe, will be restarted" Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.182868 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" podUID="d85aeaa6-c7da-420f-b8d9-2d0983e2ab36" containerName="controller-manager" containerID="cri-o://f32553c7b295719f56496bf853a26b7c14fef0d6e4969159c919977278f26085" gracePeriod=30 Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.215061 4830 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-slc6p" podUID="bd972fba-0692-45af-b28c-db4929fe150a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.215228 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-slc6p" Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.215713 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4tqzd" podUID="1891b74f-fe71-4020-98a3-5796e2a67ea2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.215883 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4tqzd" Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.259168 4830 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-dbkt8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:8443/healthz\": dial tcp 10.217.0.77:8443: connect: connection refused" start-of-body= Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.259210 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8" podUID="48688d73-57bb-4105-8116-4853be571b01" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.77:8443/healthz\": dial tcp 10.217.0.77:8443: connect: connection refused" Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.346058 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sbhfn" podUID="0e056a0c-ee06-43aa-bf36-35f202f76b17" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.352558 4830 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ttnrg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.352628 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" podUID="d1346d7f-25da-4035-9c88-1f96c034d795" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.505368 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cpwlp" Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.620093 4830 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-l59nt container/operator 
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.620093 4830 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-l59nt container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.93:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.620178 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-l59nt" podUID="1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.93:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.620239 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operators/observability-operator-59bdc8b94-l59nt"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.620107 4830 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-l59nt container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.93:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.620950 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-l59nt" podUID="1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.93:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.621065 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-l59nt"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.639496 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="operator" containerStatusID={"Type":"cri-o","ID":"84b9f1eb86465dfd8f507bac40d6eba61e040eff8f9e2cec0a5f6e8db4aeffc3"} pod="openshift-operators/observability-operator-59bdc8b94-l59nt" containerMessage="Container operator failed liveness probe, will be restarted"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.639556 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators/observability-operator-59bdc8b94-l59nt" podUID="1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48" containerName="operator" containerID="cri-o://84b9f1eb86465dfd8f507bac40d6eba61e040eff8f9e2cec0a5f6e8db4aeffc3" gracePeriod=30
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.763338 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-g5pvp" podUID="35d308f6-fcf3-4b01-b26e-5c1848d6ee7d" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.763462 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g5pvp"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.766125 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-56876" podUID="2626e876-9148-4165-a735-a5a1733c014d" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.766139 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-g5pvp" podUID="35d308f6-fcf3-4b01-b26e-5c1848d6ee7d" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.766197 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-56876"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.766216 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g5pvp"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.767095 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-56876" podUID="2626e876-9148-4165-a735-a5a1733c014d" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.767164 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/redhat-operators-56876"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.776521 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"f11441cbba9561c6c57f871491bfd86946bb4556451df5f1b4cd312425394af7"} pod="openshift-marketplace/redhat-operators-56876" containerMessage="Container registry-server failed liveness probe, will be restarted"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.776588 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-56876" podUID="2626e876-9148-4165-a735-a5a1733c014d" containerName="registry-server" containerID="cri-o://f11441cbba9561c6c57f871491bfd86946bb4556451df5f1b4cd312425394af7" gracePeriod=30
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.782345 4830 patch_prober.go:28] interesting pod/thanos-querier-57c5b4b8d5-lsvdc container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.782523 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-57c5b4b8d5-lsvdc" podUID="4158e29b-a0d9-40f2-904d-ffb63ba734f6" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.784676 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="d2c8c9f6-9f27-4f47-8fb3-e22aaa0d3e88" containerName="prometheus" probeResult="failure" output="command timed out"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.804329 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-ld2fb" podUID="f101dda8-ba4c-42c2-a8e3-9a5e53c2ec8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.804395 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k" podUID="1145e85a-d436-40c8-baef-ceb53625e06b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.804448 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-ld2fb"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.804541 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.887879 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw" podUID="3f5623d3-168a-4bca-9154-ecb4c81b5b3b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.887949 4830 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-wtdqw container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.94:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.888050 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-wtdqw" podUID="0af185f3-0cfa-4299-8eee-0e523d87504c" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.94:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.929057 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hcpk8" podUID="17f5c61d-5997-482b-961a-0339cfe6c15c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.929144 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rkvx7" podUID="e681f66d-3695-4b59-9ef1-6f9bbf007ed2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.929223 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-4v2n6" podUID="d0107b00-a78b-432b-afc6-a9ccc1b3bf5b" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": dial tcp 127.0.0.1:7572: connect: connection refused"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.929295 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92" podUID="3951c2f7-8a23-4d78-9a26-1b89399bdb4e" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7572/metrics\": dial tcp 10.217.0.95:7572: connect: connection refused"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.929299 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rkvx7"
Jan 31 10:29:54 crc kubenswrapper[4830]: I0131 10:29:54.929360 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92"
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.011918 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gbjts" podUID="7ff06918-8b3c-48cb-bd11-1254b9bbc276" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.012066 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gbjts"
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.012241 4830 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-wtdqw container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.94:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.012300 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-wtdqw" podUID="0af185f3-0cfa-4299-8eee-0e523d87504c" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.94:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.012374 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-wtdqw"
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.013050 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-8wnqw" podUID="dafe4db4-4a74-4cb2-8e7f-496cfa1a1c5e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.103015 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gktql" podUID="21448bf1-0318-4469-baff-d35cf905337b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.103655 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gktql"
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.143928 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-9wzdf"
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.143946 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2l42c" podUID="388d9bc4-698e-4dea-8029-aa32433cf734" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.144105 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2l42c"
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.186037 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-57fbdcd888-cp9fj" podUID="2365408f-7d7a-482c-87c0-0452fa330e4e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.186178 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-57fbdcd888-cp9fj"
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.186037 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-d9xtg" podUID="4d28fd37-b97c-447a-9165-d90d11fd4698" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.227018 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kgrns" podUID="758269b2-16c6-4f5a-8f9f-875659eede84" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.267935 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-slc6p" podUID="bd972fba-0692-45af-b28c-db4929fe150a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.314941 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4tqzd" podUID="1891b74f-fe71-4020-98a3-5796e2a67ea2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.479888 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-czm79" podUID="68f255f0-5951-47f2-979e-af80607453e8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.479896 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-564965969-62c8t" podUID="d4a8ef63-6ba0-4bb4-93b5-dc9fc1134bb5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.479996 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-czm79"
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.480148 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-62c8t"
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.568700 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4v2n6" event={"ID":"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b","Type":"ContainerStarted","Data":"790ae5cbd87ef044e80b4dc5b87d885a199e25f48818f3ba1fdba0e646cbf61b"}
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.573679 4830 generic.go:334] "Generic (PLEG): container finished" podID="e80e8b17-711d-46d8-a240-4fa52e093545" containerID="4bb4b393d788389636a749f9855b6b5af59603d34816a47e960c64dbe48662c7" exitCode=0
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.575555 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" event={"ID":"e80e8b17-711d-46d8-a240-4fa52e093545","Type":"ContainerDied","Data":"4bb4b393d788389636a749f9855b6b5af59603d34816a47e960c64dbe48662c7"}
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.577372 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"f3902cab012b2fd7a05dad2e119debffa319f7939ade666477b0cf8bf2859a4a"} pod="openshift-marketplace/redhat-marketplace-g5pvp" containerMessage="Container registry-server failed liveness probe, will be restarted"
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.577444 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g5pvp" podUID="35d308f6-fcf3-4b01-b26e-5c1848d6ee7d" containerName="registry-server" containerID="cri-o://f3902cab012b2fd7a05dad2e119debffa319f7939ade666477b0cf8bf2859a4a" gracePeriod=30
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.888488 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-ld2fb" podUID="f101dda8-ba4c-42c2-a8e3-9a5e53c2ec8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.888642 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k" podUID="1145e85a-d436-40c8-baef-ceb53625e06b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:55 crc kubenswrapper[4830]: I0131 10:29:55.971002 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rkvx7" podUID="e681f66d-3695-4b59-9ef1-6f9bbf007ed2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.054905 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gbjts" podUID="7ff06918-8b3c-48cb-bd11-1254b9bbc276" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.096905 4830 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-wtdqw container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.94:8081/readyz\":
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.096962 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-wtdqw" podUID="0af185f3-0cfa-4299-8eee-0e523d87504c" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.94:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.187938 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gktql" podUID="21448bf1-0318-4469-baff-d35cf905337b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.228989 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2l42c" podUID="388d9bc4-698e-4dea-8029-aa32433cf734" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.229046 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-57fbdcd888-cp9fj" podUID="2365408f-7d7a-482c-87c0-0452fa330e4e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.371170 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-x7g8x" podUID="1d713893-e8db-40ba-872c-e9d1650a56d0" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.516845 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="7b3b4d1e-8963-469f-abe7-204392275c48" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.562061 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-564965969-62c8t" podUID="d4a8ef63-6ba0-4bb4-93b5-dc9fc1134bb5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.562058 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-czm79" podUID="68f255f0-5951-47f2-979e-af80607453e8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.567608 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.586495 4830 
generic.go:334] "Generic (PLEG): container finished" podID="b0b831b3-e535-4264-b46c-c93f7edd51d2" containerID="384a0831544a2cb790ebf79501804b539d1b77cdf911870336931f1b831b232d" exitCode=0 Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.586572 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nc25d" event={"ID":"b0b831b3-e535-4264-b46c-c93f7edd51d2","Type":"ContainerDied","Data":"384a0831544a2cb790ebf79501804b539d1b77cdf911870336931f1b831b232d"} Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.606958 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8" event={"ID":"48688d73-57bb-4105-8116-4853be571b01","Type":"ContainerStarted","Data":"c2d375e873e923a7a28a77995187816cb51fd2910b0e3878e24e422b503ed06d"} Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.607169 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8" Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.608776 4830 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-dbkt8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:8443/healthz\": dial tcp 10.217.0.77:8443: connect: connection refused" start-of-body= Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.608813 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8" podUID="48688d73-57bb-4105-8116-4853be571b01" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.77:8443/healthz\": dial tcp 10.217.0.77:8443: connect: connection refused" Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.668798 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" event={"ID":"ce3329e2-9eca-4a04-bf1d-0578e12beaa5","Type":"ContainerStarted","Data":"55b0ae0eb340cde0773548bcf933825fb1cfe0a59751d9de0fada82c65e1b6df"} Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.668922 4830 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-vm6jc container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.47:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.668971 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc" podUID="e5b91203-480c-424e-877a-5f2f437d1ada" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.47:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.668990 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.672859 4830 generic.go:334] "Generic (PLEG): container finished" podID="3951c2f7-8a23-4d78-9a26-1b89399bdb4e" containerID="8142163e3ce80c3464ac0822fda30bf877ce271f5e4ceef098795181d0f6e7eb" 
exitCode=0 Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.672908 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92" event={"ID":"3951c2f7-8a23-4d78-9a26-1b89399bdb4e","Type":"ContainerDied","Data":"8142163e3ce80c3464ac0822fda30bf877ce271f5e4ceef098795181d0f6e7eb"} Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.765097 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-jwvm4" podUID="14550547-ce63-48cc-800e-b74235d0daa1" containerName="registry-server" probeResult="failure" output="command timed out" Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.765216 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jwvm4" Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.765330 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-engine-88757d59b-r55jf" podUID="3d4efcc1-d98d-466c-a7ee-6a6aa3766681" containerName="heat-engine" probeResult="failure" output="command timed out" Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.765380 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-fcmv2" podUID="c361702a-d6db-4925-809d-f08c6dd88a7d" containerName="registry-server" probeResult="failure" output="command timed out" Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.765444 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-engine-88757d59b-r55jf" podUID="3d4efcc1-d98d-466c-a7ee-6a6aa3766681" containerName="heat-engine" probeResult="failure" output="command timed out" Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.765342 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-jwvm4" podUID="14550547-ce63-48cc-800e-b74235d0daa1" containerName="registry-server" probeResult="failure" output="command timed out" Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.765482 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fcmv2" Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.765510 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/certified-operators-jwvm4" Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.766587 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"83d53b8dc5ef1de88fb6035c22e2a2cf67146c16f93c7ba5c2795bd39e9c58c1"} pod="openshift-marketplace/certified-operators-jwvm4" containerMessage="Container registry-server failed liveness probe, will be restarted" Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.766626 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jwvm4" podUID="14550547-ce63-48cc-800e-b74235d0daa1" containerName="registry-server" containerID="cri-o://83d53b8dc5ef1de88fb6035c22e2a2cf67146c16f93c7ba5c2795bd39e9c58c1" gracePeriod=30 Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.767832 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-fcmv2" podUID="c361702a-d6db-4925-809d-f08c6dd88a7d" containerName="registry-server" probeResult="failure" output="command timed out" Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.767896 4830 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/community-operators-fcmv2" Jan 31 10:29:56 crc kubenswrapper[4830]: E0131 10:29:56.768095 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83d53b8dc5ef1de88fb6035c22e2a2cf67146c16f93c7ba5c2795bd39e9c58c1" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 10:29:56 crc kubenswrapper[4830]: E0131 10:29:56.769745 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83d53b8dc5ef1de88fb6035c22e2a2cf67146c16f93c7ba5c2795bd39e9c58c1" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 10:29:56 crc kubenswrapper[4830]: E0131 10:29:56.772487 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83d53b8dc5ef1de88fb6035c22e2a2cf67146c16f93c7ba5c2795bd39e9c58c1" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 10:29:56 crc kubenswrapper[4830]: E0131 10:29:56.772593 4830 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/certified-operators-jwvm4" podUID="14550547-ce63-48cc-800e-b74235d0daa1" containerName="registry-server" Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.805817 4830 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-f89hf container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.48:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.806076 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-f89hf" podUID="8aa52b7a-444c-4f07-9c3a-c2223e966e34" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.48:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.933149 4830 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-8k7rn container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.49:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.933538 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn" podUID="6a2f00bb-9954-46d0-901b-3d9a82939850" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.49:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:56 crc kubenswrapper[4830]: I0131 10:29:56.933623 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn" Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.269896 4830 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": context deadline exceeded" start-of-body= Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.270287 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": context deadline exceeded" Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.269927 4830 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.270385 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.352417 4830 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ttnrg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.352482 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" podUID="d1346d7f-25da-4035-9c88-1f96c034d795" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.669783 4830 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-vm6jc container/loki-distributor namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.47:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.669855 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc" podUID="e5b91203-480c-424e-877a-5f2f437d1ada" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.47:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.685528 4830 generic.go:334] "Generic (PLEG): container finished" podID="bd972fba-0692-45af-b28c-db4929fe150a" containerID="e853bb2ecb118b1cc3318dc0554cf415d016c80dcd3c771fdde1705ef75ce376" exitCode=1 Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.685738 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-slc6p" event={"ID":"bd972fba-0692-45af-b28c-db4929fe150a","Type":"ContainerDied","Data":"e853bb2ecb118b1cc3318dc0554cf415d016c80dcd3c771fdde1705ef75ce376"} Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.708167 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.736946 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" event={"ID":"cf057c5a-deef-4c01-bd58-f761ec86e2f4","Type":"ContainerStarted","Data":"199d4c0a00313066e5eb16e70d86dd1464c30a1d139bb454f7d0e21a22d0679d"} Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.737266 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.737544 4830 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n4rml container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.737592 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" podUID="cf057c5a-deef-4c01-bd58-f761ec86e2f4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.740557 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92" event={"ID":"3951c2f7-8a23-4d78-9a26-1b89399bdb4e","Type":"ContainerStarted","Data":"117ea1302680b74ce4178e0eb6b2810d36ae3bc19ba6fd1dd41f25ba90b05665"} Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.740679 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92" Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.785985 4830 scope.go:117] "RemoveContainer" containerID="e853bb2ecb118b1cc3318dc0554cf415d016c80dcd3c771fdde1705ef75ce376" Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.805300 4830 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-f89hf container/loki-querier namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.48:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.805357 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-querier-76788598db-f89hf" podUID="8aa52b7a-444c-4f07-9c3a-c2223e966e34" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.48:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.820318 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="adf0d571-b5dc-4d7c-9e8d-8813354a5128" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.8:8080/livez\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.820416 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0" Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.821554 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-state-metrics" containerStatusID={"Type":"cri-o","ID":"184536029a48d98e756eccab3b9c57d61b4ae582035a9dd9a291492b0aec8e02"} pod="openstack/kube-state-metrics-0" containerMessage="Container kube-state-metrics failed liveness probe, will be restarted" Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.821600 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="adf0d571-b5dc-4d7c-9e8d-8813354a5128" containerName="kube-state-metrics" containerID="cri-o://184536029a48d98e756eccab3b9c57d61b4ae582035a9dd9a291492b0aec8e02" gracePeriod=30 Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.847741 4830 patch_prober.go:28] interesting pod/logging-loki-gateway-74c87577db-hwvhd container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8083/ready\": context deadline exceeded" start-of-body= Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.847788 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" podUID="fd432483-7467-4c9d-a13e-8ee908a8ed2b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.50:8083/ready\": context deadline exceeded" Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.847845 4830 patch_prober.go:28] interesting pod/logging-loki-gateway-74c87577db-hwvhd container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.847959 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-74c87577db-hwvhd" podUID="fd432483-7467-4c9d-a13e-8ee908a8ed2b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.50:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.856359 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-4v2n6" podUID="d0107b00-a78b-432b-afc6-a9ccc1b3bf5b" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": dial tcp 127.0.0.1:7572: connect: connection refused" Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.856423 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4v2n6" event={"ID":"d0107b00-a78b-432b-afc6-a9ccc1b3bf5b","Type":"ContainerStarted","Data":"682e7dd21db91dc27602e5aa37fef47124edf4da34992421a77aa2aa1e18a5e3"} Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.862489 4830 generic.go:334] "Generic (PLEG): container finished" podID="250c9f1b-d78c-488e-b28e-6c2b783edd9b" containerID="9380900d47bdf1ab694b731927e0ab1da64712898c40df124879121a9d41869c" exitCode=1 Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.862573 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" 
event={"ID":"250c9f1b-d78c-488e-b28e-6c2b783edd9b","Type":"ContainerDied","Data":"9380900d47bdf1ab694b731927e0ab1da64712898c40df124879121a9d41869c"} Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.864002 4830 scope.go:117] "RemoveContainer" containerID="9380900d47bdf1ab694b731927e0ab1da64712898c40df124879121a9d41869c" Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.873674 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" event={"ID":"e80e8b17-711d-46d8-a240-4fa52e093545","Type":"ContainerStarted","Data":"442fb11e8157d2828607bf3d2b725bfd0577a69849f698c338711dde35e1d885"} Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.873927 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.874211 4830 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lp7ks container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.874250 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" podUID="e80e8b17-711d-46d8-a240-4fa52e093545" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.877232 4830 generic.go:334] "Generic (PLEG): container finished" podID="47718a89-dc4c-4f5d-bb58-aec265aa68bf" containerID="68d32f98fc69855e761a0992edb05093b5ca47972eeb484d8f1fcb9ba7a65281" exitCode=1 Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.877375 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cpwlp" event={"ID":"47718a89-dc4c-4f5d-bb58-aec265aa68bf","Type":"ContainerDied","Data":"68d32f98fc69855e761a0992edb05093b5ca47972eeb484d8f1fcb9ba7a65281"} Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.877891 4830 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-dbkt8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:8443/healthz\": dial tcp 10.217.0.77:8443: connect: connection refused" start-of-body= Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.877924 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8" podUID="48688d73-57bb-4105-8116-4853be571b01" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.77:8443/healthz\": dial tcp 10.217.0.77:8443: connect: connection refused" Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.878526 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"d90335abfa9207b4d4d63cf2f5f0c9a8b085e06ea6f5f12d88ddd096f3e7f6f8"} pod="openshift-marketplace/community-operators-fcmv2" containerMessage="Container registry-server failed liveness probe, will be restarted" Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.878588 4830 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/community-operators-fcmv2" podUID="c361702a-d6db-4925-809d-f08c6dd88a7d" containerName="registry-server" containerID="cri-o://d90335abfa9207b4d4d63cf2f5f0c9a8b085e06ea6f5f12d88ddd096f3e7f6f8" gracePeriod=30 Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.880294 4830 scope.go:117] "RemoveContainer" containerID="68d32f98fc69855e761a0992edb05093b5ca47972eeb484d8f1fcb9ba7a65281" Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.910504 4830 patch_prober.go:28] interesting pod/logging-loki-gateway-74c87577db-fjtpt container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.910534 4830 patch_prober.go:28] interesting pod/logging-loki-gateway-74c87577db-fjtpt container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.910567 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" podUID="867e058e-8774-4ff8-af99-a8f35ac530ce" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.51:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.910585 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-74c87577db-fjtpt" podUID="867e058e-8774-4ff8-af99-a8f35ac530ce" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.51:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.933183 4830 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-8k7rn container/loki-query-frontend namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.49:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.933286 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn" podUID="6a2f00bb-9954-46d0-901b-3d9a82939850" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.49:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.935428 4830 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-8k7rn container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.49:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:57 crc kubenswrapper[4830]: I0131 10:29:57.935565 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn" podUID="6a2f00bb-9954-46d0-901b-3d9a82939850" containerName="loki-query-frontend" probeResult="failure" output="Get 
\"https://10.217.0.49:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:57.969471 4830 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:57.969538 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="70d5f51c-1a87-45fb-8822-7aa0997fceb1" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:57.969625 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.066702 4830 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.066758 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="efadb8be-37d4-4e2b-9df2-3d1301ae81a8" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.066821 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.082024 4830 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-58x6p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.082097 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" podUID="b6c3d452-2742-4f91-9857-5f5e0b50f348" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.338131 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-http ok Jan 31 10:29:58 crc kubenswrapper[4830]: [+]has-synced ok Jan 31 10:29:58 crc kubenswrapper[4830]: [-]process-running failed: reason withheld Jan 31 10:29:58 crc kubenswrapper[4830]: healthz check failed Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.338522 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.453896 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.453957 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.453896 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.454095 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-l8ckt" Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.454118 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.454175 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-l8ckt" Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.454316 4830 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n4rml container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.454328 4830 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n4rml container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.454368 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" podUID="cf057c5a-deef-4c01-bd58-f761ec86e2f4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.454413 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" podUID="cf057c5a-deef-4c01-bd58-f761ec86e2f4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.455036 4830 kuberuntime_manager.go:1027] 
"Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"663300a1eec888f0c1315103a2cb4760fc9ed1d0e7eb16f88381ae83cf26de31"} pod="openshift-console/downloads-7954f5f757-l8ckt" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.455079 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" containerID="cri-o://663300a1eec888f0c1315103a2cb4760fc9ed1d0e7eb16f88381ae83cf26de31" gracePeriod=2 Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.492703 4830 patch_prober.go:28] interesting pod/monitoring-plugin-546c959798-jmj57 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.492835 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-546c959798-jmj57" podUID="fadaea73-e4ec-47a5-b6df-c93b1ce5645f" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.508763 4830 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lp7ks container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.508833 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" podUID="e80e8b17-711d-46d8-a240-4fa52e093545" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.509262 4830 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lp7ks container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.509311 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" podUID="e80e8b17-711d-46d8-a240-4fa52e093545" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.761501 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.761815 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="2ca5d2f1-673e-4173-848a-8d32d33b8bcc" containerName="galera" probeResult="failure" output="command timed out" Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.763917 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Jan 31 10:29:58 crc kubenswrapper[4830]: E0131 10:29:58.765353 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadf0d571_b5dc_4d7c_9e8d_8813354a5128.slice/crio-conmon-184536029a48d98e756eccab3b9c57d61b4ae582035a9dd9a291492b0aec8e02.scope\": RecentStats: unable to find data in memory cache]" Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.945478 4830 generic.go:334] "Generic (PLEG): container finished" podID="f2ea7efa-c50b-4208-a9df-2c3fc454762b" containerID="7438470b0ded09ffc16921538313fba4d8d5737ade46eb0d1751c36880d19f27" exitCode=0 Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.945559 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2ea7efa-c50b-4208-a9df-2c3fc454762b","Type":"ContainerDied","Data":"7438470b0ded09ffc16921538313fba4d8d5737ade46eb0d1751c36880d19f27"} Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.948612 4830 generic.go:334] "Generic (PLEG): container finished" podID="1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48" containerID="84b9f1eb86465dfd8f507bac40d6eba61e040eff8f9e2cec0a5f6e8db4aeffc3" exitCode=0 Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.948670 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-l59nt" event={"ID":"1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48","Type":"ContainerDied","Data":"84b9f1eb86465dfd8f507bac40d6eba61e040eff8f9e2cec0a5f6e8db4aeffc3"} Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.950959 4830 generic.go:334] "Generic (PLEG): container finished" podID="adf0d571-b5dc-4d7c-9e8d-8813354a5128" containerID="184536029a48d98e756eccab3b9c57d61b4ae582035a9dd9a291492b0aec8e02" exitCode=2 Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.951043 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"adf0d571-b5dc-4d7c-9e8d-8813354a5128","Type":"ContainerDied","Data":"184536029a48d98e756eccab3b9c57d61b4ae582035a9dd9a291492b0aec8e02"} Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.954232 4830 generic.go:334] "Generic (PLEG): container finished" podID="f101dda8-ba4c-42c2-a8e3-9a5e53c2ec8a" containerID="65826d84cc5288bdc372aae11461481e47526fb47b67a9c7df52eb2655067fa2" exitCode=1 Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.954342 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-ld2fb" event={"ID":"f101dda8-ba4c-42c2-a8e3-9a5e53c2ec8a","Type":"ContainerDied","Data":"65826d84cc5288bdc372aae11461481e47526fb47b67a9c7df52eb2655067fa2"} Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.955238 4830 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lp7ks container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.955260 4830 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n4rml container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Jan 31 10:29:58 crc 
kubenswrapper[4830]: I0131 10:29:58.955271 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" podUID="e80e8b17-711d-46d8-a240-4fa52e093545" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 31 10:29:58 crc kubenswrapper[4830]: I0131 10:29:58.955298 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" podUID="cf057c5a-deef-4c01-bd58-f761ec86e2f4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:58.956516 4830 scope.go:117] "RemoveContainer" containerID="65826d84cc5288bdc372aae11461481e47526fb47b67a9c7df52eb2655067fa2" Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:58.956985 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-4v2n6" Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.072889 4830 patch_prober.go:28] interesting pod/console-operator-58897d9998-pkx9p container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.072953 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-pkx9p" podUID="691a8aff-6fcd-400a-ace9-fb3fa8778206" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.072965 4830 patch_prober.go:28] interesting pod/console-operator-58897d9998-pkx9p container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.073030 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-pkx9p" podUID="691a8aff-6fcd-400a-ace9-fb3fa8778206" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.073045 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-pkx9p" Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.073099 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-pkx9p" Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.256846 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"497622e31559cfebe662e6932b434973f3b3c9ada6b4f06670330d37ab8d06cb"} pod="openshift-console-operator/console-operator-58897d9998-pkx9p" containerMessage="Container console-operator failed liveness probe, will be restarted" Jan 31 10:29:59 crc 
kubenswrapper[4830]: I0131 10:29:59.256916 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-pkx9p" podUID="691a8aff-6fcd-400a-ace9-fb3fa8778206" containerName="console-operator" containerID="cri-o://497622e31559cfebe662e6932b434973f3b3c9ada6b4f06670330d37ab8d06cb" gracePeriod=30 Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.406182 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: Readiness probe status=failure output="" start-of-body= Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.406792 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.406868 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.535049 4830 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-ckvgq container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.535120 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" podUID="007a4117-0dfe-485e-85df-6bc68e0cee5e" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.535175 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.548282 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="package-server-manager" containerStatusID={"Type":"cri-o","ID":"d90adb47121b9222b981576066f0df9e3cafccb2f5b0004e261272503fa48a5d"} pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" containerMessage="Container package-server-manager failed liveness probe, will be restarted" Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.548344 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" podUID="007a4117-0dfe-485e-85df-6bc68e0cee5e" containerName="package-server-manager" containerID="cri-o://d90adb47121b9222b981576066f0df9e3cafccb2f5b0004e261272503fa48a5d" gracePeriod=30 Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.577884 4830 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lb8hp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.577934 4830 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lb8hp container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.577948 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" podUID="13f1c33b-cede-4fb1-9651-15d0dcd36173" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.577985 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" podUID="13f1c33b-cede-4fb1-9651-15d0dcd36173" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.578053 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.578079 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.580506 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="olm-operator" containerStatusID={"Type":"cri-o","ID":"e0cb3249c4e74782086ada27cb6cdcdf73644dbc41e394c8950ad3621a48b54d"} pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" containerMessage="Container olm-operator failed liveness probe, will be restarted" Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.580541 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" podUID="13f1c33b-cede-4fb1-9651-15d0dcd36173" containerName="olm-operator" containerID="cri-o://e0cb3249c4e74782086ada27cb6cdcdf73644dbc41e394c8950ad3621a48b54d" gracePeriod=30 Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.577893 4830 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-ckvgq container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.580650 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" podUID="007a4117-0dfe-485e-85df-6bc68e0cee5e" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.580699 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.618002 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-fcmv2" podUID="c361702a-d6db-4925-809d-f08c6dd88a7d" containerName="registry-server" probeResult="failure" output="" Jan 31 10:29:59 crc kubenswrapper[4830]: E0131 10:29:59.618789 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d90335abfa9207b4d4d63cf2f5f0c9a8b085e06ea6f5f12d88ddd096f3e7f6f8 is running failed: container process not found" containerID="d90335abfa9207b4d4d63cf2f5f0c9a8b085e06ea6f5f12d88ddd096f3e7f6f8" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 10:29:59 crc kubenswrapper[4830]: E0131 10:29:59.619355 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d90335abfa9207b4d4d63cf2f5f0c9a8b085e06ea6f5f12d88ddd096f3e7f6f8 is running failed: container process not found" containerID="d90335abfa9207b4d4d63cf2f5f0c9a8b085e06ea6f5f12d88ddd096f3e7f6f8" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 10:29:59 crc kubenswrapper[4830]: E0131 10:29:59.622411 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d90335abfa9207b4d4d63cf2f5f0c9a8b085e06ea6f5f12d88ddd096f3e7f6f8 is running failed: container process not found" containerID="d90335abfa9207b4d4d63cf2f5f0c9a8b085e06ea6f5f12d88ddd096f3e7f6f8" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 10:29:59 crc kubenswrapper[4830]: E0131 10:29:59.622473 4830 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d90335abfa9207b4d4d63cf2f5f0c9a8b085e06ea6f5f12d88ddd096f3e7f6f8 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-fcmv2" podUID="c361702a-d6db-4925-809d-f08c6dd88a7d" containerName="registry-server" Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.635956 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-g5pvp" podUID="35d308f6-fcf3-4b01-b26e-5c1848d6ee7d" containerName="registry-server" probeResult="failure" output=< Jan 31 10:29:59 crc kubenswrapper[4830]: cancellation received Jan 31 10:29:59 crc kubenswrapper[4830]: error: failed to connect service at ":50051": context canceled Jan 31 10:29:59 crc kubenswrapper[4830]: > Jan 31 10:29:59 crc kubenswrapper[4830]: E0131 10:29:59.636565 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f3902cab012b2fd7a05dad2e119debffa319f7939ade666477b0cf8bf2859a4a is running failed: container process not found" containerID="f3902cab012b2fd7a05dad2e119debffa319f7939ade666477b0cf8bf2859a4a" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.636942 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-56876" podUID="2626e876-9148-4165-a735-a5a1733c014d" containerName="registry-server" probeResult="failure" output="" Jan 31 10:29:59 crc kubenswrapper[4830]: E0131 10:29:59.637797 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code 
= NotFound desc = container is not created or running: checking if PID of f3902cab012b2fd7a05dad2e119debffa319f7939ade666477b0cf8bf2859a4a is running failed: container process not found" containerID="f3902cab012b2fd7a05dad2e119debffa319f7939ade666477b0cf8bf2859a4a" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 10:29:59 crc kubenswrapper[4830]: E0131 10:29:59.637875 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f11441cbba9561c6c57f871491bfd86946bb4556451df5f1b4cd312425394af7 is running failed: container process not found" containerID="f11441cbba9561c6c57f871491bfd86946bb4556451df5f1b4cd312425394af7" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 10:29:59 crc kubenswrapper[4830]: E0131 10:29:59.638405 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f3902cab012b2fd7a05dad2e119debffa319f7939ade666477b0cf8bf2859a4a is running failed: container process not found" containerID="f3902cab012b2fd7a05dad2e119debffa319f7939ade666477b0cf8bf2859a4a" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 10:29:59 crc kubenswrapper[4830]: E0131 10:29:59.638436 4830 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f3902cab012b2fd7a05dad2e119debffa319f7939ade666477b0cf8bf2859a4a is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-g5pvp" podUID="35d308f6-fcf3-4b01-b26e-5c1848d6ee7d" containerName="registry-server" Jan 31 10:29:59 crc kubenswrapper[4830]: E0131 10:29:59.638507 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f11441cbba9561c6c57f871491bfd86946bb4556451df5f1b4cd312425394af7 is running failed: container process not found" containerID="f11441cbba9561c6c57f871491bfd86946bb4556451df5f1b4cd312425394af7" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 10:29:59 crc kubenswrapper[4830]: E0131 10:29:59.639135 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f3902cab012b2fd7a05dad2e119debffa319f7939ade666477b0cf8bf2859a4a is running failed: container process not found" containerID="f3902cab012b2fd7a05dad2e119debffa319f7939ade666477b0cf8bf2859a4a" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 10:29:59 crc kubenswrapper[4830]: E0131 10:29:59.639202 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f11441cbba9561c6c57f871491bfd86946bb4556451df5f1b4cd312425394af7 is running failed: container process not found" containerID="f11441cbba9561c6c57f871491bfd86946bb4556451df5f1b4cd312425394af7" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 10:29:59 crc kubenswrapper[4830]: E0131 10:29:59.639223 4830 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f11441cbba9561c6c57f871491bfd86946bb4556451df5f1b4cd312425394af7 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-56876" podUID="2626e876-9148-4165-a735-a5a1733c014d" containerName="registry-server" Jan 31 10:29:59 crc kubenswrapper[4830]: E0131 10:29:59.639527 4830 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f3902cab012b2fd7a05dad2e119debffa319f7939ade666477b0cf8bf2859a4a is running failed: container process not found" containerID="f3902cab012b2fd7a05dad2e119debffa319f7939ade666477b0cf8bf2859a4a" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 10:29:59 crc kubenswrapper[4830]: E0131 10:29:59.639968 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f3902cab012b2fd7a05dad2e119debffa319f7939ade666477b0cf8bf2859a4a is running failed: container process not found" containerID="f3902cab012b2fd7a05dad2e119debffa319f7939ade666477b0cf8bf2859a4a" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 10:29:59 crc kubenswrapper[4830]: E0131 10:29:59.639994 4830 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f3902cab012b2fd7a05dad2e119debffa319f7939ade666477b0cf8bf2859a4a is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-g5pvp" podUID="35d308f6-fcf3-4b01-b26e-5c1848d6ee7d" containerName="registry-server" Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.671331 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-4v2n6" Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.688937 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24" podUID="0b519925-01de-4cf0-8ff8-0f97137dd3d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.689046 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24" podUID="0b519925-01de-4cf0-8ff8-0f97137dd3d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.689974 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.760761 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.768262 4830 patch_prober.go:28] interesting pod/oauth-openshift-6768bc9c9c-5t4z8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.63:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.768325 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" podUID="3549201c-94c2-4a29-9e62-b498b4a97ece" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.63:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.842540 4830 patch_prober.go:28] interesting pod/console-operator-58897d9998-pkx9p container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": EOF" start-of-body= Jan 31 10:29:59 crc kubenswrapper[4830]: I0131 10:29:59.842597 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-pkx9p" podUID="691a8aff-6fcd-400a-ace9-fb3fa8778206" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": EOF" Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:29:59.975003 4830 generic.go:334] "Generic (PLEG): container finished" podID="2a183ae3-dc4b-4f75-a9ca-4832bd5faf06" containerID="02485d5110c6b88cac3b44496e1451c9cb9553b4fe3f14a833ef5e41c773e726" exitCode=1 Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:29:59.975103 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-54dc59fd95-sv8r9" event={"ID":"2a183ae3-dc4b-4f75-a9ca-4832bd5faf06","Type":"ContainerDied","Data":"02485d5110c6b88cac3b44496e1451c9cb9553b4fe3f14a833ef5e41c773e726"} Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:29:59.976512 4830 scope.go:117] "RemoveContainer" containerID="02485d5110c6b88cac3b44496e1451c9cb9553b4fe3f14a833ef5e41c773e726" Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.024097 4830 generic.go:334] "Generic (PLEG): container finished" podID="2626e876-9148-4165-a735-a5a1733c014d" containerID="f11441cbba9561c6c57f871491bfd86946bb4556451df5f1b4cd312425394af7" exitCode=0 Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.024200 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56876" event={"ID":"2626e876-9148-4165-a735-a5a1733c014d","Type":"ContainerDied","Data":"f11441cbba9561c6c57f871491bfd86946bb4556451df5f1b4cd312425394af7"} Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.028832 4830 generic.go:334] "Generic (PLEG): container finished" podID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerID="663300a1eec888f0c1315103a2cb4760fc9ed1d0e7eb16f88381ae83cf26de31" exitCode=0 Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.028913 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-l8ckt" event={"ID":"a8d26ab0-33c3-4eb7-928b-ffba996579d9","Type":"ContainerDied","Data":"663300a1eec888f0c1315103a2cb4760fc9ed1d0e7eb16f88381ae83cf26de31"} Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.052845 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-vbcgc_bf986437-9998-4cd1-90b8-b2e0716e8d37/router/0.log" Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.052912 4830 generic.go:334] "Generic (PLEG): container finished" podID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerID="80f837c980bbb2106b85f0e8ae5ce486b89cde72328711691a5e7a58dca33a3f" exitCode=137 Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.053072 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-vbcgc" event={"ID":"bf986437-9998-4cd1-90b8-b2e0716e8d37","Type":"ContainerDied","Data":"80f837c980bbb2106b85f0e8ae5ce486b89cde72328711691a5e7a58dca33a3f"} Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.057774 4830 generic.go:334] "Generic (PLEG): container finished" 
podID="35d308f6-fcf3-4b01-b26e-5c1848d6ee7d" containerID="f3902cab012b2fd7a05dad2e119debffa319f7939ade666477b0cf8bf2859a4a" exitCode=0 Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.057827 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5pvp" event={"ID":"35d308f6-fcf3-4b01-b26e-5c1848d6ee7d","Type":"ContainerDied","Data":"f3902cab012b2fd7a05dad2e119debffa319f7939ade666477b0cf8bf2859a4a"} Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.066933 4830 generic.go:334] "Generic (PLEG): container finished" podID="14550547-ce63-48cc-800e-b74235d0daa1" containerID="83d53b8dc5ef1de88fb6035c22e2a2cf67146c16f93c7ba5c2795bd39e9c58c1" exitCode=0 Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.066980 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jwvm4" event={"ID":"14550547-ce63-48cc-800e-b74235d0daa1","Type":"ContainerDied","Data":"83d53b8dc5ef1de88fb6035c22e2a2cf67146c16f93c7ba5c2795bd39e9c58c1"} Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.069139 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cpwlp" event={"ID":"47718a89-dc4c-4f5d-bb58-aec265aa68bf","Type":"ContainerStarted","Data":"456ad05f91fa8132d27e4730377dd2a5804482c8538513463f8041ff9dc53119"} Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.070788 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-slc6p" event={"ID":"bd972fba-0692-45af-b28c-db4929fe150a","Type":"ContainerStarted","Data":"786db8f95461dccfc619fdfd5611dbd0144e8a8e780b4b823374223661d34aea"} Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.071072 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-slc6p" Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.071885 4830 generic.go:334] "Generic (PLEG): container finished" podID="1891b74f-fe71-4020-98a3-5796e2a67ea2" containerID="32f6281283ec15b9184365b426762c2ae5925724835732331d2fc9a0f9708e67" exitCode=1 Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.071919 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4tqzd" event={"ID":"1891b74f-fe71-4020-98a3-5796e2a67ea2","Type":"ContainerDied","Data":"32f6281283ec15b9184365b426762c2ae5925724835732331d2fc9a0f9708e67"} Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.072837 4830 scope.go:117] "RemoveContainer" containerID="32f6281283ec15b9184365b426762c2ae5925724835732331d2fc9a0f9708e67" Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.075206 4830 generic.go:334] "Generic (PLEG): container finished" podID="c361702a-d6db-4925-809d-f08c6dd88a7d" containerID="d90335abfa9207b4d4d63cf2f5f0c9a8b085e06ea6f5f12d88ddd096f3e7f6f8" exitCode=0 Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.075251 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fcmv2" event={"ID":"c361702a-d6db-4925-809d-f08c6dd88a7d","Type":"ContainerDied","Data":"d90335abfa9207b4d4d63cf2f5f0c9a8b085e06ea6f5f12d88ddd096f3e7f6f8"} Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.077595 4830 generic.go:334] "Generic (PLEG): container finished" podID="9e3fd47c-6860-47d0-98ce-3654da25fdce" containerID="788f52e16faad612c586019f97fd0e1c157ee62484db497fa5c83f31c107360d" exitCode=0 Jan 
31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.078459 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" event={"ID":"9e3fd47c-6860-47d0-98ce-3654da25fdce","Type":"ContainerDied","Data":"788f52e16faad612c586019f97fd0e1c157ee62484db497fa5c83f31c107360d"} Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.352601 4830 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ttnrg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.353255 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" podUID="d1346d7f-25da-4035-9c88-1f96c034d795" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.469926 4830 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused" start-of-body= Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.469976 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused" Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.470050 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.714142 4830 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-4v2n6" podUID="d0107b00-a78b-432b-afc6-a9ccc1b3bf5b" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:30:00 crc kubenswrapper[4830]: I0131 10:30:00.763929 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="f37f41b4-3b56-45f9-a368-0f772bcf3002" containerName="galera" probeResult="failure" output="command timed out" Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.091271 4830 generic.go:334] "Generic (PLEG): container finished" podID="1145e85a-d436-40c8-baef-ceb53625e06b" containerID="74b56cb11209b9c16ad49800be665604e4083fba596f8098c7026e6fadcfb5c8" exitCode=1 Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.091324 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k" event={"ID":"1145e85a-d436-40c8-baef-ceb53625e06b","Type":"ContainerDied","Data":"74b56cb11209b9c16ad49800be665604e4083fba596f8098c7026e6fadcfb5c8"} Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.092207 4830 scope.go:117] "RemoveContainer" containerID="74b56cb11209b9c16ad49800be665604e4083fba596f8098c7026e6fadcfb5c8" Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.095359 4830 
generic.go:334] "Generic (PLEG): container finished" podID="d85aeaa6-c7da-420f-b8d9-2d0983e2ab36" containerID="f32553c7b295719f56496bf853a26b7c14fef0d6e4969159c919977278f26085" exitCode=0 Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.095461 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" event={"ID":"d85aeaa6-c7da-420f-b8d9-2d0983e2ab36","Type":"ContainerDied","Data":"f32553c7b295719f56496bf853a26b7c14fef0d6e4969159c919977278f26085"} Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.101146 4830 generic.go:334] "Generic (PLEG): container finished" podID="b6c3d452-2742-4f91-9857-5f5e0b50f348" containerID="d85017aaf93892f489ab9319825e71a9a965d45d582b884dfab7617b94a784eb" exitCode=0 Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.101242 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" event={"ID":"b6c3d452-2742-4f91-9857-5f5e0b50f348","Type":"ContainerDied","Data":"d85017aaf93892f489ab9319825e71a9a965d45d582b884dfab7617b94a784eb"} Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.105094 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" podUID="ce245704-5b88-4544-ae21-bcb30ff5d0d0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.105094 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-55f549db95-67sj5" podUID="ce245704-5b88-4544-ae21-bcb30ff5d0d0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.106174 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-x7g8x" event={"ID":"1d713893-e8db-40ba-872c-e9d1650a56d0","Type":"ContainerStarted","Data":"844d6b8d63ab4949e1f7566a5427f526904349e9acf169301342fd5039d8ba12"} Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.142473 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.161454 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.161535 4830 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a49046577e1bb5d63fea892db0b89c5f6ece8f18d3a0ad0eaf6cecdb7f6d5340" exitCode=1 Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.161628 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a49046577e1bb5d63fea892db0b89c5f6ece8f18d3a0ad0eaf6cecdb7f6d5340"} Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.161702 4830 scope.go:117] "RemoveContainer" containerID="c484d4ed3775a955f3077e3404135c771c581eef1e8fa614d7cdd6e521bbb426" Jan 31 10:30:01 crc 
kubenswrapper[4830]: I0131 10:30:01.166803 4830 scope.go:117] "RemoveContainer" containerID="a49046577e1bb5d63fea892db0b89c5f6ece8f18d3a0ad0eaf6cecdb7f6d5340" Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.174142 4830 generic.go:334] "Generic (PLEG): container finished" podID="758269b2-16c6-4f5a-8f9f-875659eede84" containerID="92efecdd91982ec1cbb17dd9b011166406c9ca06b9ac553f35213473bc7469d9" exitCode=1 Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.174210 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kgrns" event={"ID":"758269b2-16c6-4f5a-8f9f-875659eede84","Type":"ContainerDied","Data":"92efecdd91982ec1cbb17dd9b011166406c9ca06b9ac553f35213473bc7469d9"} Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.175063 4830 scope.go:117] "RemoveContainer" containerID="92efecdd91982ec1cbb17dd9b011166406c9ca06b9ac553f35213473bc7469d9" Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.187827 4830 generic.go:334] "Generic (PLEG): container finished" podID="388d9bc4-698e-4dea-8029-aa32433cf734" containerID="9aed26c324093444bc9ccd23a18084abbc4df93e2ecc7eea93af5f5bb2391ba2" exitCode=1 Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.187896 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2l42c" event={"ID":"388d9bc4-698e-4dea-8029-aa32433cf734","Type":"ContainerDied","Data":"9aed26c324093444bc9ccd23a18084abbc4df93e2ecc7eea93af5f5bb2391ba2"} Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.188827 4830 scope.go:117] "RemoveContainer" containerID="9aed26c324093444bc9ccd23a18084abbc4df93e2ecc7eea93af5f5bb2391ba2" Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.193186 4830 generic.go:334] "Generic (PLEG): container finished" podID="3f5623d3-168a-4bca-9154-ecb4c81b5b3b" containerID="9bced4f3ec27f0428b862ec47565b2a44f8905a5c62eaeb9a3c727f9bf0a6d84" exitCode=1 Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.193261 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw" event={"ID":"3f5623d3-168a-4bca-9154-ecb4c81b5b3b","Type":"ContainerDied","Data":"9bced4f3ec27f0428b862ec47565b2a44f8905a5c62eaeb9a3c727f9bf0a6d84"} Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.194383 4830 scope.go:117] "RemoveContainer" containerID="9bced4f3ec27f0428b862ec47565b2a44f8905a5c62eaeb9a3c727f9bf0a6d84" Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.215115 4830 generic.go:334] "Generic (PLEG): container finished" podID="d1346d7f-25da-4035-9c88-1f96c034d795" containerID="85d3d5001bb1210574c9fdb22694fa1d3ee858ab7e8b183782ae2dc18e10a849" exitCode=0 Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.215233 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" event={"ID":"d1346d7f-25da-4035-9c88-1f96c034d795","Type":"ContainerDied","Data":"85d3d5001bb1210574c9fdb22694fa1d3ee858ab7e8b183782ae2dc18e10a849"} Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.229757 4830 generic.go:334] "Generic (PLEG): container finished" podID="4d28fd37-b97c-447a-9165-d90d11fd4698" containerID="902ecfc4e561e30299ea9903ea913ed25bc7ccebc30137d211b272c3dc40b959" exitCode=1 Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.229933 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-d9xtg" event={"ID":"4d28fd37-b97c-447a-9165-d90d11fd4698","Type":"ContainerDied","Data":"902ecfc4e561e30299ea9903ea913ed25bc7ccebc30137d211b272c3dc40b959"} Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.230847 4830 scope.go:117] "RemoveContainer" containerID="902ecfc4e561e30299ea9903ea913ed25bc7ccebc30137d211b272c3dc40b959" Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.273114 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-pkx9p_691a8aff-6fcd-400a-ace9-fb3fa8778206/console-operator/0.log" Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.273465 4830 generic.go:334] "Generic (PLEG): container finished" podID="691a8aff-6fcd-400a-ace9-fb3fa8778206" containerID="497622e31559cfebe662e6932b434973f3b3c9ada6b4f06670330d37ab8d06cb" exitCode=1 Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.273580 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-pkx9p" event={"ID":"691a8aff-6fcd-400a-ace9-fb3fa8778206","Type":"ContainerDied","Data":"497622e31559cfebe662e6932b434973f3b3c9ada6b4f06670330d37ab8d06cb"} Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.276162 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" event={"ID":"250c9f1b-d78c-488e-b28e-6c2b783edd9b","Type":"ContainerStarted","Data":"c489581697eddb99e16083ac6a1224839d38921d99c577f6660c8bd286e4854a"} Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.276512 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.305134 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b444d44fb-lb8hp_13f1c33b-cede-4fb1-9651-15d0dcd36173/olm-operator/0.log" Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.305203 4830 generic.go:334] "Generic (PLEG): container finished" podID="13f1c33b-cede-4fb1-9651-15d0dcd36173" containerID="e0cb3249c4e74782086ada27cb6cdcdf73644dbc41e394c8950ad3621a48b54d" exitCode=2 Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.306527 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" event={"ID":"13f1c33b-cede-4fb1-9651-15d0dcd36173","Type":"ContainerDied","Data":"e0cb3249c4e74782086ada27cb6cdcdf73644dbc41e394c8950ad3621a48b54d"} Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.306862 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cpwlp" Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.351046 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="f37f41b4-3b56-45f9-a368-0f772bcf3002" containerName="galera" containerID="cri-o://5e7b646f4ff6e1b24d55539a3bc21143cce21d3f36a569975a8acf1b82a40d40" gracePeriod=20 Jan 31 10:30:01 crc kubenswrapper[4830]: I0131 10:30:01.386219 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="2ca5d2f1-673e-4173-848a-8d32d33b8bcc" containerName="galera" 
containerID="cri-o://e774409d73ea3f7c6d1de27e1c877dc73032596ee68ca15941563cc71678e875" gracePeriod=19 Jan 31 10:30:01 crc kubenswrapper[4830]: E0131 10:30:01.668753 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 83d53b8dc5ef1de88fb6035c22e2a2cf67146c16f93c7ba5c2795bd39e9c58c1 is running failed: container process not found" containerID="83d53b8dc5ef1de88fb6035c22e2a2cf67146c16f93c7ba5c2795bd39e9c58c1" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 10:30:01 crc kubenswrapper[4830]: E0131 10:30:01.674621 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 83d53b8dc5ef1de88fb6035c22e2a2cf67146c16f93c7ba5c2795bd39e9c58c1 is running failed: container process not found" containerID="83d53b8dc5ef1de88fb6035c22e2a2cf67146c16f93c7ba5c2795bd39e9c58c1" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 10:30:01 crc kubenswrapper[4830]: E0131 10:30:01.675265 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 83d53b8dc5ef1de88fb6035c22e2a2cf67146c16f93c7ba5c2795bd39e9c58c1 is running failed: container process not found" containerID="83d53b8dc5ef1de88fb6035c22e2a2cf67146c16f93c7ba5c2795bd39e9c58c1" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 10:30:01 crc kubenswrapper[4830]: E0131 10:30:01.675297 4830 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 83d53b8dc5ef1de88fb6035c22e2a2cf67146c16f93c7ba5c2795bd39e9c58c1 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-jwvm4" podUID="14550547-ce63-48cc-800e-b74235d0daa1" containerName="registry-server" Jan 31 10:30:01 crc kubenswrapper[4830]: E0131 10:30:01.883313 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d90335abfa9207b4d4d63cf2f5f0c9a8b085e06ea6f5f12d88ddd096f3e7f6f8 is running failed: container process not found" containerID="d90335abfa9207b4d4d63cf2f5f0c9a8b085e06ea6f5f12d88ddd096f3e7f6f8" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 10:30:01 crc kubenswrapper[4830]: E0131 10:30:01.884239 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d90335abfa9207b4d4d63cf2f5f0c9a8b085e06ea6f5f12d88ddd096f3e7f6f8 is running failed: container process not found" containerID="d90335abfa9207b4d4d63cf2f5f0c9a8b085e06ea6f5f12d88ddd096f3e7f6f8" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 10:30:01 crc kubenswrapper[4830]: E0131 10:30:01.905689 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d90335abfa9207b4d4d63cf2f5f0c9a8b085e06ea6f5f12d88ddd096f3e7f6f8 is running failed: container process not found" containerID="d90335abfa9207b4d4d63cf2f5f0c9a8b085e06ea6f5f12d88ddd096f3e7f6f8" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 10:30:01 crc kubenswrapper[4830]: E0131 10:30:01.905960 4830 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d90335abfa9207b4d4d63cf2f5f0c9a8b085e06ea6f5f12d88ddd096f3e7f6f8 is running failed: container 
process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-fcmv2" podUID="c361702a-d6db-4925-809d-f08c6dd88a7d" containerName="registry-server" Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.007560 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-init-54dc59fd95-sv8r9" Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.332112 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nc25d" event={"ID":"b0b831b3-e535-4264-b46c-c93f7edd51d2","Type":"ContainerStarted","Data":"807e98fa7ab4d3b29b8c0eacc3a0674090decc86ee8ed7af79e2820cbfd60ec2"} Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.362448 4830 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="1dc96f3d1e085f925a6a1b73ef1312bd85072065059f20eb6c11f7d044635f8b" exitCode=0 Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.362542 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"1dc96f3d1e085f925a6a1b73ef1312bd85072065059f20eb6c11f7d044635f8b"} Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.368793 4830 generic.go:334] "Generic (PLEG): container finished" podID="abf5a919-4697-4468-b9e4-8a4617e3a5ca" containerID="d706c18d76cfc22b1c391b0d4078ef2adb310b34bb5f7688d64455a13ee69324" exitCode=1 Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.368851 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-slhpt" event={"ID":"abf5a919-4697-4468-b9e4-8a4617e3a5ca","Type":"ContainerDied","Data":"d706c18d76cfc22b1c391b0d4078ef2adb310b34bb5f7688d64455a13ee69324"} Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.369955 4830 scope.go:117] "RemoveContainer" containerID="d706c18d76cfc22b1c391b0d4078ef2adb310b34bb5f7688d64455a13ee69324" Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.378888 4830 generic.go:334] "Generic (PLEG): container finished" podID="68f255f0-5951-47f2-979e-af80607453e8" containerID="b2feb0aeb46343e5f4c408422d5788609c74ff97771f008e26d4476e2b4b51ca" exitCode=1 Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.378980 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-czm79" event={"ID":"68f255f0-5951-47f2-979e-af80607453e8","Type":"ContainerDied","Data":"b2feb0aeb46343e5f4c408422d5788609c74ff97771f008e26d4476e2b4b51ca"} Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.379916 4830 scope.go:117] "RemoveContainer" containerID="b2feb0aeb46343e5f4c408422d5788609c74ff97771f008e26d4476e2b4b51ca" Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.385188 4830 generic.go:334] "Generic (PLEG): container finished" podID="007a4117-0dfe-485e-85df-6bc68e0cee5e" containerID="d90adb47121b9222b981576066f0df9e3cafccb2f5b0004e261272503fa48a5d" exitCode=0 Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.385341 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" event={"ID":"007a4117-0dfe-485e-85df-6bc68e0cee5e","Type":"ContainerDied","Data":"d90adb47121b9222b981576066f0df9e3cafccb2f5b0004e261272503fa48a5d"} Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.390802 4830 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.397378 4830 patch_prober.go:28] interesting pod/route-controller-manager-bcf89fb66-fxq4w container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body= Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.397433 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" podUID="9e3fd47c-6860-47d0-98ce-3654da25fdce" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.408982 4830 generic.go:334] "Generic (PLEG): container finished" podID="00ab4f1c-2cc4-46b0-9e22-df58e5327352" containerID="9bb0f1093a37424441fc8374c5fb71cb747c472d42f4f79a9b45c2da6c131ac0" exitCode=0 Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.409057 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" event={"ID":"00ab4f1c-2cc4-46b0-9e22-df58e5327352","Type":"ContainerDied","Data":"9bb0f1093a37424441fc8374c5fb71cb747c472d42f4f79a9b45c2da6c131ac0"} Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.415375 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-54dc59fd95-sv8r9" event={"ID":"2a183ae3-dc4b-4f75-a9ca-4832bd5faf06","Type":"ContainerStarted","Data":"2cb9ef4b71fc6e88b7a4591e8b3ba491189cf5ba108ae2e8e47935bc8ea5ef69"} Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.415424 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-x7g8x" Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.415439 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-54dc59fd95-sv8r9" Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.533661 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.552452 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="c45f6608-4c27-4322-b60a-3362294e1ab8" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.619449 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw" Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.666774 4830 trace.go:236] Trace[1297076055]: "Calculate volume metrics of glance for pod openstack/glance-default-external-api-0" (31-Jan-2026 10:29:59.739) (total time: 2913ms): Jan 31 10:30:02 crc kubenswrapper[4830]: Trace[1297076055]: [2.91396411s] [2.91396411s] END Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.686066 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hcpk8" Jan 31 10:30:02 crc 
kubenswrapper[4830]: I0131 10:30:02.780152 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-8wnqw" Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.821823 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="adf0d571-b5dc-4d7c-9e8d-8813354a5128" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.8:8081/readyz\": dial tcp 10.217.1.8:8081: connect: connection refused" Jan 31 10:30:02 crc kubenswrapper[4830]: I0131 10:30:02.938045 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-d9xtg" Jan 31 10:30:03 crc kubenswrapper[4830]: I0131 10:30:03.089085 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kgrns" Jan 31 10:30:03 crc kubenswrapper[4830]: I0131 10:30:03.098815 4830 patch_prober.go:28] interesting pod/controller-manager-7896c76d86-c5cgs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" start-of-body= Jan 31 10:30:03 crc kubenswrapper[4830]: I0131 10:30:03.099170 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" podUID="d85aeaa6-c7da-420f-b8d9-2d0983e2ab36" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" Jan 31 10:30:03 crc kubenswrapper[4830]: I0131 10:30:03.122021 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4tqzd" Jan 31 10:30:03 crc kubenswrapper[4830]: I0131 10:30:03.284909 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-nc25d" Jan 31 10:30:03 crc kubenswrapper[4830]: I0131 10:30:03.284954 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-nc25d" Jan 31 10:30:03 crc kubenswrapper[4830]: I0131 10:30:03.357059 4830 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ttnrg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 31 10:30:03 crc kubenswrapper[4830]: I0131 10:30:03.357111 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" podUID="d1346d7f-25da-4035-9c88-1f96c034d795" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 31 10:30:03 crc kubenswrapper[4830]: I0131 10:30:03.451583 4830 patch_prober.go:28] interesting pod/route-controller-manager-bcf89fb66-fxq4w container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body= Jan 31 10:30:03 crc 
kubenswrapper[4830]: I0131 10:30:03.451623 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" podUID="9e3fd47c-6860-47d0-98ce-3654da25fdce" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" Jan 31 10:30:03 crc kubenswrapper[4830]: I0131 10:30:03.452695 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" event={"ID":"9e3fd47c-6860-47d0-98ce-3654da25fdce","Type":"ContainerStarted","Data":"c1ff6073973afd9f9b276eb23574b95bc996a70416d07be38dd5d5da112a0077"} Jan 31 10:30:03 crc kubenswrapper[4830]: I0131 10:30:03.453171 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" Jan 31 10:30:03 crc kubenswrapper[4830]: I0131 10:30:03.540680 4830 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-l59nt container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.93:8081/healthz\": dial tcp 10.217.0.93:8081: connect: connection refused" start-of-body= Jan 31 10:30:03 crc kubenswrapper[4830]: I0131 10:30:03.540813 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-l59nt" podUID="1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.93:8081/healthz\": dial tcp 10.217.0.93:8081: connect: connection refused" Jan 31 10:30:03 crc kubenswrapper[4830]: I0131 10:30:03.666017 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-ld2fb" Jan 31 10:30:03 crc kubenswrapper[4830]: I0131 10:30:03.667009 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rkvx7" Jan 31 10:30:03 crc kubenswrapper[4830]: I0131 10:30:03.700581 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-wtdqw" Jan 31 10:30:03 crc kubenswrapper[4830]: I0131 10:30:03.844087 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-gbjts" Jan 31 10:30:03 crc kubenswrapper[4830]: I0131 10:30:03.993039 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gktql" Jan 31 10:30:04 crc kubenswrapper[4830]: I0131 10:30:04.008155 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2l42c" Jan 31 10:30:04 crc kubenswrapper[4830]: I0131 10:30:04.070434 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-57fbdcd888-cp9fj" Jan 31 10:30:04 crc kubenswrapper[4830]: I0131 10:30:04.300450 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dbkt8" Jan 31 10:30:04 crc kubenswrapper[4830]: I0131 10:30:04.392330 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-czm79" Jan 31 10:30:04 crc kubenswrapper[4830]: I0131 10:30:04.447936 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-62c8t" Jan 31 10:30:04 crc kubenswrapper[4830]: I0131 10:30:04.466811 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-d9xtg" event={"ID":"4d28fd37-b97c-447a-9165-d90d11fd4698","Type":"ContainerStarted","Data":"67c579c13e91a721e265e333fdab01b7959c8051f478406ae10fa11ad22c61c4"} Jan 31 10:30:04 crc kubenswrapper[4830]: I0131 10:30:04.480717 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-l8ckt" event={"ID":"a8d26ab0-33c3-4eb7-928b-ffba996579d9","Type":"ContainerStarted","Data":"c8685f08f3e1ed53c4d0bb305e700f19749ba057867b68e539b4e2bdaba619a0"} Jan 31 10:30:04 crc kubenswrapper[4830]: I0131 10:30:04.493228 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-vbcgc_bf986437-9998-4cd1-90b8-b2e0716e8d37/router/0.log" Jan 31 10:30:04 crc kubenswrapper[4830]: I0131 10:30:04.493541 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-vbcgc" event={"ID":"bf986437-9998-4cd1-90b8-b2e0716e8d37","Type":"ContainerStarted","Data":"fe8b95ab8870d074a0c4e0affad74980c3c2ff273fe41630068ff2632a1ec88a"} Jan 31 10:30:04 crc kubenswrapper[4830]: I0131 10:30:04.495618 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4tqzd" event={"ID":"1891b74f-fe71-4020-98a3-5796e2a67ea2","Type":"ContainerStarted","Data":"f9e1ce3b268458998478e3eae4de7440a6b33bb3fc80754e48ae642a85567464"} Jan 31 10:30:04 crc kubenswrapper[4830]: I0131 10:30:04.496126 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4tqzd" Jan 31 10:30:04 crc kubenswrapper[4830]: I0131 10:30:04.518269 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" event={"ID":"d85aeaa6-c7da-420f-b8d9-2d0983e2ab36","Type":"ContainerStarted","Data":"549e9aaafa169b62a442a31b9222ca0edc2434af3e31833ee6418a83160cb9c8"} Jan 31 10:30:04 crc kubenswrapper[4830]: I0131 10:30:04.523431 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-ld2fb" event={"ID":"f101dda8-ba4c-42c2-a8e3-9a5e53c2ec8a","Type":"ContainerStarted","Data":"54b3fcbd12cb8674d3545bc461a3ef2b604e3a4cafabc493ea50a9993c77c54a"} Jan 31 10:30:04 crc kubenswrapper[4830]: I0131 10:30:04.523477 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-ld2fb" Jan 31 10:30:04 crc kubenswrapper[4830]: I0131 10:30:04.524154 4830 patch_prober.go:28] interesting pod/route-controller-manager-bcf89fb66-fxq4w container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body= Jan 31 10:30:04 crc kubenswrapper[4830]: I0131 10:30:04.524200 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" 
podUID="9e3fd47c-6860-47d0-98ce-3654da25fdce" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" Jan 31 10:30:04 crc kubenswrapper[4830]: I0131 10:30:04.673938 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-4v2n6" Jan 31 10:30:04 crc kubenswrapper[4830]: I0131 10:30:04.710337 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-4v2n6" Jan 31 10:30:04 crc kubenswrapper[4830]: I0131 10:30:04.787851 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-vbcgc" Jan 31 10:30:04 crc kubenswrapper[4830]: I0131 10:30:04.789553 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 31 10:30:04 crc kubenswrapper[4830]: I0131 10:30:04.789614 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 31 10:30:04 crc kubenswrapper[4830]: I0131 10:30:04.873961 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="c45f6608-4c27-4322-b60a-3362294e1ab8" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.127404 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.385846 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack-operators/openstack-operator-index-nc25d" podUID="b0b831b3-e535-4264-b46c-c93f7edd51d2" containerName="registry-server" probeResult="failure" output=< Jan 31 10:30:05 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:30:05 crc kubenswrapper[4830]: > Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.535605 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-pkx9p_691a8aff-6fcd-400a-ace9-fb3fa8778206/console-operator/0.log" Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.535687 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-pkx9p" event={"ID":"691a8aff-6fcd-400a-ace9-fb3fa8778206","Type":"ContainerStarted","Data":"d87798c73f7f97a9d4216cedcb4652699569eb4f2991986fa06a68420c8a2f02"} Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.535995 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-pkx9p" Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.536374 4830 patch_prober.go:28] interesting pod/console-operator-58897d9998-pkx9p container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 31 10:30:05 crc 
kubenswrapper[4830]: I0131 10:30:05.536422 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-pkx9p" podUID="691a8aff-6fcd-400a-ace9-fb3fa8778206" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.541526 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2ea7efa-c50b-4208-a9df-2c3fc454762b","Type":"ContainerStarted","Data":"ab1df13540c5736f4883b48b9994a14b487e0ded3584187e708237aaa209feaf"} Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.545027 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" event={"ID":"007a4117-0dfe-485e-85df-6bc68e0cee5e","Type":"ContainerStarted","Data":"efad43d343b6a628e8423c1af9ade963cc1d0d175c0e5440e4670432a9b377c6"} Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.545174 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.547230 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hkd74" event={"ID":"00ab4f1c-2cc4-46b0-9e22-df58e5327352","Type":"ContainerStarted","Data":"20ef8cca72c0e2fd785e2ba1545783b3fc38a5d0fa041da9cd02e71906ed57f0"} Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.553068 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kgrns" event={"ID":"758269b2-16c6-4f5a-8f9f-875659eede84","Type":"ContainerStarted","Data":"ac16fd2fac9ee2fb3217c093bd9e8938ae00e948e6a1c1a14a3134d6afaa8f46"} Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.553214 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kgrns" Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.567388 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw" event={"ID":"3f5623d3-168a-4bca-9154-ecb4c81b5b3b","Type":"ContainerStarted","Data":"499ca3e92294fb0e52fed281a0e4af1aa466c004b03da2d6e6fdb6fe359ee24c"} Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.567492 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw" Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.575881 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" event={"ID":"d1346d7f-25da-4035-9c88-1f96c034d795","Type":"ContainerStarted","Data":"ad101828caebbf8ced82ba190642231e0975bbc37fab1bfe15be340d0ff060b6"} Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.576104 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.581279 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" 
event={"ID":"b6c3d452-2742-4f91-9857-5f5e0b50f348","Type":"ContainerStarted","Data":"06e06abf911a1bf737a6fbb8257daae8a7df3ded8410b73e69fde32d08f5e95d"} Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.582053 4830 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-58x6p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused" start-of-body= Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.582097 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" podUID="b6c3d452-2742-4f91-9857-5f5e0b50f348" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused" Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.596421 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-slhpt" event={"ID":"abf5a919-4697-4468-b9e4-8a4617e3a5ca","Type":"ContainerStarted","Data":"d8dd3f9aee9fbfa0a72ecaffbb97effffb6a2f26871f38120bfbe63dad00b87e"} Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.601429 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-czm79" event={"ID":"68f255f0-5951-47f2-979e-af80607453e8","Type":"ContainerStarted","Data":"da363703bff308bef71e1e8b5e24d4d9cc25253793d2075dd5142592675c20b8"} Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.602573 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-czm79" Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.605998 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2l42c" event={"ID":"388d9bc4-698e-4dea-8029-aa32433cf734","Type":"ContainerStarted","Data":"4a6457bcae69f9affd19b6517b5d8124ca8787161fe89287df236130f23522e3"} Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.607516 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2l42c" Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.646258 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k" event={"ID":"1145e85a-d436-40c8-baef-ceb53625e06b","Type":"ContainerStarted","Data":"032907aec229649969b8bf6918bbb94ce0a668123b4370b91ca535b2b0a1f630"} Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.646476 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k" Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.656563 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b444d44fb-lb8hp_13f1c33b-cede-4fb1-9651-15d0dcd36173/olm-operator/0.log" Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.658008 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" event={"ID":"13f1c33b-cede-4fb1-9651-15d0dcd36173","Type":"ContainerStarted","Data":"0eb15def758d3e20e9a1c295bcce1808506781edc13b4427c8d7a8c13dc18484"} Jan 31 10:30:05 crc kubenswrapper[4830]: 
I0131 10:30:05.658376 4830 patch_prober.go:28] interesting pod/controller-manager-7896c76d86-c5cgs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" start-of-body= Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.658415 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" podUID="d85aeaa6-c7da-420f-b8d9-2d0983e2ab36" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.659069 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-d9xtg" Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.659200 4830 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lb8hp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.659245 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" podUID="13f1c33b-cede-4fb1-9651-15d0dcd36173" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.660112 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-l8ckt" Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.660215 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.660241 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.675493 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-vm6jc" Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.789618 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.789666 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.813997 4830 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76788598db-f89hf" Jan 31 10:30:05 crc kubenswrapper[4830]: I0131 10:30:05.937954 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-69d9546745-8k7rn" Jan 31 10:30:06 crc kubenswrapper[4830]: I0131 10:30:06.675433 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56876" event={"ID":"2626e876-9148-4165-a735-a5a1733c014d","Type":"ContainerStarted","Data":"97bdea75e8a8d5bfbcd399570d47755e4471d3a3fe6a23d4f303703eb435ff27"} Jan 31 10:30:06 crc kubenswrapper[4830]: I0131 10:30:06.675987 4830 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-58x6p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused" start-of-body= Jan 31 10:30:06 crc kubenswrapper[4830]: I0131 10:30:06.676078 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" podUID="b6c3d452-2742-4f91-9857-5f5e0b50f348" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused" Jan 31 10:30:06 crc kubenswrapper[4830]: I0131 10:30:06.676170 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" Jan 31 10:30:06 crc kubenswrapper[4830]: I0131 10:30:06.676377 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" Jan 31 10:30:06 crc kubenswrapper[4830]: I0131 10:30:06.676536 4830 patch_prober.go:28] interesting pod/console-operator-58897d9998-pkx9p container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 31 10:30:06 crc kubenswrapper[4830]: I0131 10:30:06.676563 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-pkx9p" podUID="691a8aff-6fcd-400a-ace9-fb3fa8778206" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 31 10:30:06 crc kubenswrapper[4830]: I0131 10:30:06.676648 4830 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lb8hp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Jan 31 10:30:06 crc kubenswrapper[4830]: I0131 10:30:06.676676 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" podUID="13f1c33b-cede-4fb1-9651-15d0dcd36173" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Jan 31 10:30:06 crc kubenswrapper[4830]: I0131 10:30:06.676754 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": 
dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 31 10:30:06 crc kubenswrapper[4830]: I0131 10:30:06.676773 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 31 10:30:06 crc kubenswrapper[4830]: I0131 10:30:06.795824 4830 patch_prober.go:28] interesting pod/router-default-5444994796-vbcgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 10:30:06 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Jan 31 10:30:06 crc kubenswrapper[4830]: [+]process-running ok Jan 31 10:30:06 crc kubenswrapper[4830]: healthz check failed Jan 31 10:30:06 crc kubenswrapper[4830]: I0131 10:30:06.795890 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vbcgc" podUID="bf986437-9998-4cd1-90b8-b2e0716e8d37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 10:30:07 crc kubenswrapper[4830]: I0131 10:30:07.041715 4830 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-58x6p container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused" start-of-body= Jan 31 10:30:07 crc kubenswrapper[4830]: I0131 10:30:07.042162 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" podUID="b6c3d452-2742-4f91-9857-5f5e0b50f348" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused" Jan 31 10:30:07 crc kubenswrapper[4830]: I0131 10:30:07.041796 4830 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-58x6p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused" start-of-body= Jan 31 10:30:07 crc kubenswrapper[4830]: I0131 10:30:07.042247 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" podUID="b6c3d452-2742-4f91-9857-5f5e0b50f348" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused" Jan 31 10:30:07 crc kubenswrapper[4830]: I0131 10:30:07.371764 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 31 10:30:07 crc kubenswrapper[4830]: I0131 10:30:07.371817 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 31 10:30:07 crc kubenswrapper[4830]: I0131 10:30:07.371931 4830 patch_prober.go:28] 
interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 31 10:30:07 crc kubenswrapper[4830]: I0131 10:30:07.371988 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 31 10:30:07 crc kubenswrapper[4830]: I0131 10:30:07.500411 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-546c959798-jmj57" Jan 31 10:30:07 crc kubenswrapper[4830]: E0131 10:30:07.572181 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e774409d73ea3f7c6d1de27e1c877dc73032596ee68ca15941563cc71678e875" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 31 10:30:07 crc kubenswrapper[4830]: E0131 10:30:07.573266 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e774409d73ea3f7c6d1de27e1c877dc73032596ee68ca15941563cc71678e875" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 31 10:30:07 crc kubenswrapper[4830]: E0131 10:30:07.574380 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e774409d73ea3f7c6d1de27e1c877dc73032596ee68ca15941563cc71678e875" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 31 10:30:07 crc kubenswrapper[4830]: E0131 10:30:07.574424 4830 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="2ca5d2f1-673e-4173-848a-8d32d33b8bcc" containerName="galera" Jan 31 10:30:07 crc kubenswrapper[4830]: I0131 10:30:07.692334 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 31 10:30:07 crc kubenswrapper[4830]: I0131 10:30:07.695006 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0b7af75821af5d77396c1a034a7b879de884804af0f33b2567bc99a80ee976d7"} Jan 31 10:30:07 crc kubenswrapper[4830]: I0131 10:30:07.717421 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c75579a32d73284ea66c8cc53b53e4ccb238d7a153668bfe375da5eebdc54261"} Jan 31 10:30:07 crc kubenswrapper[4830]: I0131 10:30:07.717452 4830 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-58x6p container/marketplace-operator namespace/openshift-marketplace: Readiness probe 
status=failure output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused" start-of-body= Jan 31 10:30:07 crc kubenswrapper[4830]: I0131 10:30:07.717492 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" podUID="b6c3d452-2742-4f91-9857-5f5e0b50f348" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused" Jan 31 10:30:07 crc kubenswrapper[4830]: I0131 10:30:07.717607 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 10:30:07 crc kubenswrapper[4830]: I0131 10:30:07.728491 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lb8hp" Jan 31 10:30:07 crc kubenswrapper[4830]: I0131 10:30:07.787933 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-vbcgc" Jan 31 10:30:07 crc kubenswrapper[4830]: I0131 10:30:07.792630 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-vbcgc" Jan 31 10:30:07 crc kubenswrapper[4830]: I0131 10:30:07.978014 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="c45f6608-4c27-4322-b60a-3362294e1ab8" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 10:30:07 crc kubenswrapper[4830]: I0131 10:30:07.978540 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 31 10:30:07 crc kubenswrapper[4830]: I0131 10:30:07.980051 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"51ac00061815f4f76b1bcd8da30d2e12d08c49a1d9468728407654b6e4ca4049"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Jan 31 10:30:07 crc kubenswrapper[4830]: I0131 10:30:07.980247 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c45f6608-4c27-4322-b60a-3362294e1ab8" containerName="cinder-scheduler" containerID="cri-o://51ac00061815f4f76b1bcd8da30d2e12d08c49a1d9468728407654b6e4ca4049" gracePeriod=30 Jan 31 10:30:08 crc kubenswrapper[4830]: I0131 10:30:08.455431 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n4rml" Jan 31 10:30:08 crc kubenswrapper[4830]: I0131 10:30:08.519806 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lp7ks" Jan 31 10:30:08 crc kubenswrapper[4830]: I0131 10:30:08.611507 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vvv24" Jan 31 10:30:08 crc kubenswrapper[4830]: I0131 10:30:08.740970 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-l59nt" event={"ID":"1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48","Type":"ContainerStarted","Data":"ee17f6a44d9a889e1a06972ca0a9941b3fa5e360227664cc344dea4c9f6b571d"} Jan 31 10:30:08 crc kubenswrapper[4830]: I0131 
10:30:08.742013 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-l59nt" Jan 31 10:30:08 crc kubenswrapper[4830]: I0131 10:30:08.742106 4830 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-l59nt container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.93:8081/healthz\": dial tcp 10.217.0.93:8081: connect: connection refused" start-of-body= Jan 31 10:30:08 crc kubenswrapper[4830]: I0131 10:30:08.742148 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-l59nt" podUID="1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.93:8081/healthz\": dial tcp 10.217.0.93:8081: connect: connection refused" Jan 31 10:30:08 crc kubenswrapper[4830]: I0131 10:30:08.745119 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5pvp" event={"ID":"35d308f6-fcf3-4b01-b26e-5c1848d6ee7d","Type":"ContainerStarted","Data":"e86b64149960c158ec64981516ef6121370bd7f2f8e6f872d9ee55dd39e96ace"} Jan 31 10:30:08 crc kubenswrapper[4830]: I0131 10:30:08.749429 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jwvm4" event={"ID":"14550547-ce63-48cc-800e-b74235d0daa1","Type":"ContainerStarted","Data":"edac13c31cb01881a840308ab03a7f179a98c9cb8cbf13034687d4903d5b6291"} Jan 31 10:30:08 crc kubenswrapper[4830]: I0131 10:30:08.756252 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fcmv2" event={"ID":"c361702a-d6db-4925-809d-f08c6dd88a7d","Type":"ContainerStarted","Data":"d5913c675196bb09ed1293c830aa1057b67466e7e2e8f3c7c7cf323f0a0e1cff"} Jan 31 10:30:08 crc kubenswrapper[4830]: I0131 10:30:08.764130 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-vbcgc" Jan 31 10:30:08 crc kubenswrapper[4830]: I0131 10:30:08.816290 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 10:30:08 crc kubenswrapper[4830]: I0131 10:30:08.818576 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-pkx9p" Jan 31 10:30:09 crc kubenswrapper[4830]: E0131 10:30:09.036343 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5e7b646f4ff6e1b24d55539a3bc21143cce21d3f36a569975a8acf1b82a40d40" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 31 10:30:09 crc kubenswrapper[4830]: E0131 10:30:09.047079 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5e7b646f4ff6e1b24d55539a3bc21143cce21d3f36a569975a8acf1b82a40d40" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 31 10:30:09 crc kubenswrapper[4830]: E0131 10:30:09.048944 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="5e7b646f4ff6e1b24d55539a3bc21143cce21d3f36a569975a8acf1b82a40d40" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 31 10:30:09 crc kubenswrapper[4830]: E0131 10:30:09.049000 4830 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="f37f41b4-3b56-45f9-a368-0f772bcf3002" containerName="galera" Jan 31 10:30:09 crc kubenswrapper[4830]: I0131 10:30:09.206430 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g5pvp" Jan 31 10:30:09 crc kubenswrapper[4830]: I0131 10:30:09.206483 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g5pvp" Jan 31 10:30:09 crc kubenswrapper[4830]: I0131 10:30:09.368779 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ttnrg" Jan 31 10:30:09 crc kubenswrapper[4830]: I0131 10:30:09.376316 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-56876" Jan 31 10:30:09 crc kubenswrapper[4830]: I0131 10:30:09.376363 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-56876" Jan 31 10:30:09 crc kubenswrapper[4830]: I0131 10:30:09.563880 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 10:30:09 crc kubenswrapper[4830]: I0131 10:30:09.589350 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 10:30:09 crc kubenswrapper[4830]: I0131 10:30:09.770667 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"adf0d571-b5dc-4d7c-9e8d-8813354a5128","Type":"ContainerStarted","Data":"cb8c373bd4b4a0ac59de5a172c69bf5178050c35668592651a6b3d7e200c943e"} Jan 31 10:30:09 crc kubenswrapper[4830]: I0131 10:30:09.771106 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 31 10:30:09 crc kubenswrapper[4830]: I0131 10:30:09.772008 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm" Jan 31 10:30:09 crc kubenswrapper[4830]: I0131 10:30:09.773558 4830 generic.go:334] "Generic (PLEG): container finished" podID="f37f41b4-3b56-45f9-a368-0f772bcf3002" containerID="5e7b646f4ff6e1b24d55539a3bc21143cce21d3f36a569975a8acf1b82a40d40" exitCode=0 Jan 31 10:30:09 crc kubenswrapper[4830]: I0131 10:30:09.773686 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f37f41b4-3b56-45f9-a368-0f772bcf3002","Type":"ContainerDied","Data":"5e7b646f4ff6e1b24d55539a3bc21143cce21d3f36a569975a8acf1b82a40d40"} Jan 31 10:30:09 crc kubenswrapper[4830]: I0131 10:30:09.775887 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 10:30:09 crc kubenswrapper[4830]: I0131 10:30:09.779353 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators/observability-operator-59bdc8b94-l59nt" Jan 31 10:30:10 crc kubenswrapper[4830]: I0131 10:30:10.336068 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-g5pvp" podUID="35d308f6-fcf3-4b01-b26e-5c1848d6ee7d" containerName="registry-server" probeResult="failure" output=< Jan 31 10:30:10 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:30:10 crc kubenswrapper[4830]: > Jan 31 10:30:10 crc kubenswrapper[4830]: I0131 10:30:10.443432 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-56876" podUID="2626e876-9148-4165-a735-a5a1733c014d" containerName="registry-server" probeResult="failure" output=< Jan 31 10:30:10 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:30:10 crc kubenswrapper[4830]: > Jan 31 10:30:10 crc kubenswrapper[4830]: I0131 10:30:10.698401 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-688c9bff97-t8jpp" Jan 31 10:30:11 crc kubenswrapper[4830]: I0131 10:30:11.663133 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jwvm4" Jan 31 10:30:11 crc kubenswrapper[4830]: I0131 10:30:11.664977 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jwvm4" Jan 31 10:30:11 crc kubenswrapper[4830]: I0131 10:30:11.803552 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f37f41b4-3b56-45f9-a368-0f772bcf3002","Type":"ContainerStarted","Data":"9308d470b5f01e6a9460de70c100fd5e6024aff192119f6719533b561e3c242d"} Jan 31 10:30:11 crc kubenswrapper[4830]: I0131 10:30:11.852957 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fcmv2" Jan 31 10:30:11 crc kubenswrapper[4830]: I0131 10:30:11.853009 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fcmv2" Jan 31 10:30:12 crc kubenswrapper[4830]: I0131 10:30:12.036149 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-54dc59fd95-sv8r9" Jan 31 10:30:12 crc kubenswrapper[4830]: I0131 10:30:12.404931 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-bcf89fb66-fxq4w" Jan 31 10:30:12 crc kubenswrapper[4830]: I0131 10:30:12.622566 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw" Jan 31 10:30:12 crc kubenswrapper[4830]: I0131 10:30:12.818220 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cpwlp" Jan 31 10:30:12 crc kubenswrapper[4830]: I0131 10:30:12.939640 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-d9xtg" Jan 31 10:30:12 crc kubenswrapper[4830]: I0131 10:30:12.979508 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fcmv2" podUID="c361702a-d6db-4925-809d-f08c6dd88a7d" containerName="registry-server" probeResult="failure" output=< Jan 
31 10:30:12 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:30:12 crc kubenswrapper[4830]: > Jan 31 10:30:12 crc kubenswrapper[4830]: I0131 10:30:12.980011 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-jwvm4" podUID="14550547-ce63-48cc-800e-b74235d0daa1" containerName="registry-server" probeResult="failure" output=< Jan 31 10:30:12 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:30:12 crc kubenswrapper[4830]: > Jan 31 10:30:13 crc kubenswrapper[4830]: I0131 10:30:13.074309 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-slc6p" Jan 31 10:30:13 crc kubenswrapper[4830]: I0131 10:30:13.092338 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kgrns" Jan 31 10:30:13 crc kubenswrapper[4830]: I0131 10:30:13.098205 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" Jan 31 10:30:13 crc kubenswrapper[4830]: I0131 10:30:13.101678 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7896c76d86-c5cgs" Jan 31 10:30:13 crc kubenswrapper[4830]: I0131 10:30:13.123453 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4tqzd" Jan 31 10:30:13 crc kubenswrapper[4830]: I0131 10:30:13.802118 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-ld2fb" Jan 31 10:30:13 crc kubenswrapper[4830]: I0131 10:30:13.845397 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-nc25d" Jan 31 10:30:13 crc kubenswrapper[4830]: I0131 10:30:13.893195 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-nc25d" Jan 31 10:30:14 crc kubenswrapper[4830]: I0131 10:30:14.009956 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2l42c" Jan 31 10:30:14 crc kubenswrapper[4830]: I0131 10:30:14.514279 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-czm79" Jan 31 10:30:14 crc kubenswrapper[4830]: I0131 10:30:14.694064 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zwj92" Jan 31 10:30:15 crc kubenswrapper[4830]: I0131 10:30:15.645438 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" podUID="3549201c-94c2-4a29-9e62-b498b4a97ece" containerName="oauth-openshift" containerID="cri-o://3ea2639af37448a2eefa4b679484a5226ded1742fea84b95ff9c683ad7e4fd1e" gracePeriod=15 Jan 31 10:30:16 crc kubenswrapper[4830]: I0131 10:30:16.312175 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-x7g8x" Jan 31 10:30:16 crc kubenswrapper[4830]: I0131 10:30:16.950540 4830 generic.go:334] "Generic (PLEG): container finished" 
podID="3549201c-94c2-4a29-9e62-b498b4a97ece" containerID="3ea2639af37448a2eefa4b679484a5226ded1742fea84b95ff9c683ad7e4fd1e" exitCode=0 Jan 31 10:30:16 crc kubenswrapper[4830]: I0131 10:30:16.950594 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" event={"ID":"3549201c-94c2-4a29-9e62-b498b4a97ece","Type":"ContainerDied","Data":"3ea2639af37448a2eefa4b679484a5226ded1742fea84b95ff9c683ad7e4fd1e"} Jan 31 10:30:17 crc kubenswrapper[4830]: I0131 10:30:17.045550 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-58x6p" Jan 31 10:30:17 crc kubenswrapper[4830]: I0131 10:30:17.371159 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 31 10:30:17 crc kubenswrapper[4830]: I0131 10:30:17.371205 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 31 10:30:17 crc kubenswrapper[4830]: I0131 10:30:17.371154 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 31 10:30:17 crc kubenswrapper[4830]: I0131 10:30:17.371475 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 31 10:30:17 crc kubenswrapper[4830]: E0131 10:30:17.572586 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e774409d73ea3f7c6d1de27e1c877dc73032596ee68ca15941563cc71678e875" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 31 10:30:17 crc kubenswrapper[4830]: E0131 10:30:17.574974 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e774409d73ea3f7c6d1de27e1c877dc73032596ee68ca15941563cc71678e875" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 31 10:30:17 crc kubenswrapper[4830]: E0131 10:30:17.576717 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e774409d73ea3f7c6d1de27e1c877dc73032596ee68ca15941563cc71678e875" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 31 10:30:17 crc kubenswrapper[4830]: E0131 10:30:17.576786 4830 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
probeType="Readiness" pod="openstack/openstack-galera-0" podUID="2ca5d2f1-673e-4173-848a-8d32d33b8bcc" containerName="galera" Jan 31 10:30:17 crc kubenswrapper[4830]: I0131 10:30:17.963037 4830 generic.go:334] "Generic (PLEG): container finished" podID="c45f6608-4c27-4322-b60a-3362294e1ab8" containerID="51ac00061815f4f76b1bcd8da30d2e12d08c49a1d9468728407654b6e4ca4049" exitCode=0 Jan 31 10:30:17 crc kubenswrapper[4830]: I0131 10:30:17.963093 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c45f6608-4c27-4322-b60a-3362294e1ab8","Type":"ContainerDied","Data":"51ac00061815f4f76b1bcd8da30d2e12d08c49a1d9468728407654b6e4ca4049"} Jan 31 10:30:18 crc kubenswrapper[4830]: I0131 10:30:18.768499 4830 patch_prober.go:28] interesting pod/oauth-openshift-6768bc9c9c-5t4z8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.63:6443/healthz\": dial tcp 10.217.0.63:6443: connect: connection refused" start-of-body= Jan 31 10:30:18 crc kubenswrapper[4830]: I0131 10:30:18.768876 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" podUID="3549201c-94c2-4a29-9e62-b498b4a97ece" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.63:6443/healthz\": dial tcp 10.217.0.63:6443: connect: connection refused" Jan 31 10:30:18 crc kubenswrapper[4830]: I0131 10:30:18.943979 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-bbcf59d54-qmgsn" podUID="afe486bd-6c62-42d6-ac04-9c2bb21204d7" containerName="console" containerID="cri-o://1a17af186cd49559857c4ee4b13ab37df2f7b3afdf6c5f13f5fe7127854f599d" gracePeriod=14 Jan 31 10:30:19 crc kubenswrapper[4830]: I0131 10:30:19.035084 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 31 10:30:19 crc kubenswrapper[4830]: I0131 10:30:19.035843 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 31 10:30:19 crc kubenswrapper[4830]: I0131 10:30:19.985770 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" event={"ID":"3549201c-94c2-4a29-9e62-b498b4a97ece","Type":"ContainerStarted","Data":"ff72bf03eaeaec9b8e6f1fb0cbb330d785be2681ae4df2abde89ff60794641df"} Jan 31 10:30:19 crc kubenswrapper[4830]: I0131 10:30:19.986777 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 10:30:19 crc kubenswrapper[4830]: I0131 10:30:19.986923 4830 patch_prober.go:28] interesting pod/oauth-openshift-6768bc9c9c-5t4z8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.63:6443/healthz\": dial tcp 10.217.0.63:6443: connect: connection refused" start-of-body= Jan 31 10:30:19 crc kubenswrapper[4830]: I0131 10:30:19.986974 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" podUID="3549201c-94c2-4a29-9e62-b498b4a97ece" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.63:6443/healthz\": dial tcp 10.217.0.63:6443: connect: connection refused" Jan 31 10:30:20 crc kubenswrapper[4830]: I0131 10:30:20.254649 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-bbcf59d54-qmgsn_afe486bd-6c62-42d6-ac04-9c2bb21204d7/console/0.log" Jan 31 10:30:20 crc kubenswrapper[4830]: I0131 10:30:20.254717 4830 generic.go:334] "Generic (PLEG): container finished" podID="afe486bd-6c62-42d6-ac04-9c2bb21204d7" containerID="1a17af186cd49559857c4ee4b13ab37df2f7b3afdf6c5f13f5fe7127854f599d" exitCode=2 Jan 31 10:30:20 crc kubenswrapper[4830]: I0131 10:30:20.266306 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bbcf59d54-qmgsn" event={"ID":"afe486bd-6c62-42d6-ac04-9c2bb21204d7","Type":"ContainerDied","Data":"1a17af186cd49559857c4ee4b13ab37df2f7b3afdf6c5f13f5fe7127854f599d"} Jan 31 10:30:20 crc kubenswrapper[4830]: I0131 10:30:20.385252 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-g5pvp" podUID="35d308f6-fcf3-4b01-b26e-5c1848d6ee7d" containerName="registry-server" probeResult="failure" output=< Jan 31 10:30:20 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:30:20 crc kubenswrapper[4830]: > Jan 31 10:30:20 crc kubenswrapper[4830]: I0131 10:30:20.438396 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-56876" podUID="2626e876-9148-4165-a735-a5a1733c014d" containerName="registry-server" probeResult="failure" output=< Jan 31 10:30:20 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:30:20 crc kubenswrapper[4830]: > Jan 31 10:30:21 crc kubenswrapper[4830]: I0131 10:30:21.267100 4830 generic.go:334] "Generic (PLEG): container finished" podID="2ca5d2f1-673e-4173-848a-8d32d33b8bcc" containerID="e774409d73ea3f7c6d1de27e1c877dc73032596ee68ca15941563cc71678e875" exitCode=137 Jan 31 10:30:21 crc kubenswrapper[4830]: I0131 10:30:21.267190 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2ca5d2f1-673e-4173-848a-8d32d33b8bcc","Type":"ContainerDied","Data":"e774409d73ea3f7c6d1de27e1c877dc73032596ee68ca15941563cc71678e875"} Jan 31 10:30:21 crc kubenswrapper[4830]: I0131 10:30:21.268009 4830 patch_prober.go:28] interesting pod/oauth-openshift-6768bc9c9c-5t4z8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.63:6443/healthz\": dial tcp 10.217.0.63:6443: connect: connection refused" start-of-body= Jan 31 10:30:21 crc kubenswrapper[4830]: I0131 10:30:21.268053 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" podUID="3549201c-94c2-4a29-9e62-b498b4a97ece" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.63:6443/healthz\": dial tcp 10.217.0.63:6443: connect: connection refused" Jan 31 10:30:22 crc kubenswrapper[4830]: I0131 10:30:22.322628 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-bbcf59d54-qmgsn_afe486bd-6c62-42d6-ac04-9c2bb21204d7/console/0.log" Jan 31 10:30:22 crc kubenswrapper[4830]: I0131 10:30:22.323879 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bbcf59d54-qmgsn" event={"ID":"afe486bd-6c62-42d6-ac04-9c2bb21204d7","Type":"ContainerStarted","Data":"1ac3ddaca511383f10019a91c11c43d1136d93948cfd931d1441e4a94ef05e83"} Jan 31 10:30:22 crc kubenswrapper[4830]: I0131 10:30:22.503242 4830 patch_prober.go:28] interesting pod/console-bbcf59d54-qmgsn container/console namespace/openshift-console: Readiness probe 
status=failure output="Get \"https://10.217.0.137:8443/health\": dial tcp 10.217.0.137:8443: connect: connection refused" start-of-body= Jan 31 10:30:22 crc kubenswrapper[4830]: I0131 10:30:22.503618 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-bbcf59d54-qmgsn" podUID="afe486bd-6c62-42d6-ac04-9c2bb21204d7" containerName="console" probeResult="failure" output="Get \"https://10.217.0.137:8443/health\": dial tcp 10.217.0.137:8443: connect: connection refused" Jan 31 10:30:22 crc kubenswrapper[4830]: I0131 10:30:22.972540 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 31 10:30:23 crc kubenswrapper[4830]: I0131 10:30:23.425332 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fcmv2" podUID="c361702a-d6db-4925-809d-f08c6dd88a7d" containerName="registry-server" probeResult="failure" output=< Jan 31 10:30:23 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:30:23 crc kubenswrapper[4830]: > Jan 31 10:30:23 crc kubenswrapper[4830]: I0131 10:30:23.448537 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-jwvm4" podUID="14550547-ce63-48cc-800e-b74235d0daa1" containerName="registry-server" probeResult="failure" output=< Jan 31 10:30:23 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:30:23 crc kubenswrapper[4830]: > Jan 31 10:30:23 crc kubenswrapper[4830]: I0131 10:30:23.660929 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-d8xvw" podUID="3f5623d3-168a-4bca-9154-ecb4c81b5b3b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 10:30:25 crc kubenswrapper[4830]: I0131 10:30:25.318879 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 10:30:25 crc kubenswrapper[4830]: I0131 10:30:25.378937 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2ca5d2f1-673e-4173-848a-8d32d33b8bcc","Type":"ContainerStarted","Data":"27e8faa0fc9d2b6ba0fd3681988906b34d8752a6484f4aeb6612a9c3389eb785"} Jan 31 10:30:25 crc kubenswrapper[4830]: I0131 10:30:25.397870 4830 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 6.056323671s: [/var/lib/containers/storage/overlay/1f94de73988c62cd2595c43a4a3f333d9472e4ea54d5b6688e333be1e4a81677/diff /var/log/pods/openstack_openstackclient_4ed170d0-8e88-40c3-a2b4-9908fc87a3db/openstackclient/0.log]; will not log again for this container unless duration exceeds 2s Jan 31 10:30:27 crc kubenswrapper[4830]: I0131 10:30:27.372344 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 31 10:30:27 crc kubenswrapper[4830]: I0131 10:30:27.372986 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: 
connection refused" Jan 31 10:30:27 crc kubenswrapper[4830]: I0131 10:30:27.373037 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-l8ckt" Jan 31 10:30:27 crc kubenswrapper[4830]: I0131 10:30:27.374058 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"c8685f08f3e1ed53c4d0bb305e700f19749ba057867b68e539b4e2bdaba619a0"} pod="openshift-console/downloads-7954f5f757-l8ckt" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 31 10:30:27 crc kubenswrapper[4830]: I0131 10:30:27.374098 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" containerID="cri-o://c8685f08f3e1ed53c4d0bb305e700f19749ba057867b68e539b4e2bdaba619a0" gracePeriod=2 Jan 31 10:30:27 crc kubenswrapper[4830]: I0131 10:30:27.374970 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 31 10:30:27 crc kubenswrapper[4830]: I0131 10:30:27.375013 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 31 10:30:27 crc kubenswrapper[4830]: I0131 10:30:27.375436 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 31 10:30:27 crc kubenswrapper[4830]: I0131 10:30:27.375522 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 31 10:30:27 crc kubenswrapper[4830]: I0131 10:30:27.570352 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 31 10:30:27 crc kubenswrapper[4830]: I0131 10:30:27.570390 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 31 10:30:28 crc kubenswrapper[4830]: I0131 10:30:28.412381 4830 generic.go:334] "Generic (PLEG): container finished" podID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerID="c8685f08f3e1ed53c4d0bb305e700f19749ba057867b68e539b4e2bdaba619a0" exitCode=0 Jan 31 10:30:28 crc kubenswrapper[4830]: I0131 10:30:28.412454 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-l8ckt" event={"ID":"a8d26ab0-33c3-4eb7-928b-ffba996579d9","Type":"ContainerDied","Data":"c8685f08f3e1ed53c4d0bb305e700f19749ba057867b68e539b4e2bdaba619a0"} Jan 31 10:30:28 crc kubenswrapper[4830]: I0131 10:30:28.412757 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-l8ckt" 
event={"ID":"a8d26ab0-33c3-4eb7-928b-ffba996579d9","Type":"ContainerStarted","Data":"f2bbdf5755b002d5ed8fe2832da8f74380aad3b363a6be18961d435492f1eb33"} Jan 31 10:30:28 crc kubenswrapper[4830]: I0131 10:30:28.412790 4830 scope.go:117] "RemoveContainer" containerID="663300a1eec888f0c1315103a2cb4760fc9ed1d0e7eb16f88381ae83cf26de31" Jan 31 10:30:28 crc kubenswrapper[4830]: I0131 10:30:28.413086 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-l8ckt" Jan 31 10:30:28 crc kubenswrapper[4830]: I0131 10:30:28.413433 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 31 10:30:28 crc kubenswrapper[4830]: I0131 10:30:28.413478 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 31 10:30:28 crc kubenswrapper[4830]: I0131 10:30:28.771871 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6768bc9c9c-5t4z8" Jan 31 10:30:29 crc kubenswrapper[4830]: I0131 10:30:29.431127 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 31 10:30:29 crc kubenswrapper[4830]: I0131 10:30:29.431195 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 31 10:30:30 crc kubenswrapper[4830]: I0131 10:30:30.441891 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-g5pvp" podUID="35d308f6-fcf3-4b01-b26e-5c1848d6ee7d" containerName="registry-server" probeResult="failure" output=< Jan 31 10:30:30 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:30:30 crc kubenswrapper[4830]: > Jan 31 10:30:30 crc kubenswrapper[4830]: I0131 10:30:30.442287 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-56876" podUID="2626e876-9148-4165-a735-a5a1733c014d" containerName="registry-server" probeResult="failure" output=< Jan 31 10:30:30 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:30:30 crc kubenswrapper[4830]: > Jan 31 10:30:30 crc kubenswrapper[4830]: I0131 10:30:30.445911 4830 generic.go:334] "Generic (PLEG): container finished" podID="1fa42e50-1a05-499f-9396-a1e5dc1161f6" containerID="3cdbae831121a91472164c34cdf5b3766cb2b6765f577f7243d7c239f5a135a1" exitCode=1 Jan 31 10:30:30 crc kubenswrapper[4830]: I0131 10:30:30.445947 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"1fa42e50-1a05-499f-9396-a1e5dc1161f6","Type":"ContainerDied","Data":"3cdbae831121a91472164c34cdf5b3766cb2b6765f577f7243d7c239f5a135a1"} Jan 31 10:30:30 crc kubenswrapper[4830]: I0131 
10:30:30.601318 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497590-z54jt"] Jan 31 10:30:30 crc kubenswrapper[4830]: E0131 10:30:30.607761 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46a71ddb-bffa-4bf2-8f45-4eba31e50fa7" containerName="extract-content" Jan 31 10:30:30 crc kubenswrapper[4830]: I0131 10:30:30.611777 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a71ddb-bffa-4bf2-8f45-4eba31e50fa7" containerName="extract-content" Jan 31 10:30:30 crc kubenswrapper[4830]: E0131 10:30:30.611943 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46a71ddb-bffa-4bf2-8f45-4eba31e50fa7" containerName="registry-server" Jan 31 10:30:30 crc kubenswrapper[4830]: I0131 10:30:30.611957 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a71ddb-bffa-4bf2-8f45-4eba31e50fa7" containerName="registry-server" Jan 31 10:30:30 crc kubenswrapper[4830]: E0131 10:30:30.611995 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46a71ddb-bffa-4bf2-8f45-4eba31e50fa7" containerName="extract-utilities" Jan 31 10:30:30 crc kubenswrapper[4830]: I0131 10:30:30.612003 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a71ddb-bffa-4bf2-8f45-4eba31e50fa7" containerName="extract-utilities" Jan 31 10:30:30 crc kubenswrapper[4830]: I0131 10:30:30.614239 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="46a71ddb-bffa-4bf2-8f45-4eba31e50fa7" containerName="registry-server" Jan 31 10:30:30 crc kubenswrapper[4830]: I0131 10:30:30.625766 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497590-z54jt" Jan 31 10:30:30 crc kubenswrapper[4830]: I0131 10:30:30.634304 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 10:30:30 crc kubenswrapper[4830]: I0131 10:30:30.649944 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 10:30:30 crc kubenswrapper[4830]: I0131 10:30:30.762555 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497590-z54jt"] Jan 31 10:30:30 crc kubenswrapper[4830]: I0131 10:30:30.767804 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0554fbd5-a18f-4a9d-88f1-747a031f3de5-config-volume\") pod \"collect-profiles-29497590-z54jt\" (UID: \"0554fbd5-a18f-4a9d-88f1-747a031f3de5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497590-z54jt" Jan 31 10:30:30 crc kubenswrapper[4830]: I0131 10:30:30.767886 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0554fbd5-a18f-4a9d-88f1-747a031f3de5-secret-volume\") pod \"collect-profiles-29497590-z54jt\" (UID: \"0554fbd5-a18f-4a9d-88f1-747a031f3de5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497590-z54jt" Jan 31 10:30:30 crc kubenswrapper[4830]: I0131 10:30:30.768094 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkjh6\" (UniqueName: \"kubernetes.io/projected/0554fbd5-a18f-4a9d-88f1-747a031f3de5-kube-api-access-mkjh6\") pod 
\"collect-profiles-29497590-z54jt\" (UID: \"0554fbd5-a18f-4a9d-88f1-747a031f3de5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497590-z54jt" Jan 31 10:30:30 crc kubenswrapper[4830]: I0131 10:30:30.877412 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0554fbd5-a18f-4a9d-88f1-747a031f3de5-secret-volume\") pod \"collect-profiles-29497590-z54jt\" (UID: \"0554fbd5-a18f-4a9d-88f1-747a031f3de5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497590-z54jt" Jan 31 10:30:30 crc kubenswrapper[4830]: I0131 10:30:30.877604 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkjh6\" (UniqueName: \"kubernetes.io/projected/0554fbd5-a18f-4a9d-88f1-747a031f3de5-kube-api-access-mkjh6\") pod \"collect-profiles-29497590-z54jt\" (UID: \"0554fbd5-a18f-4a9d-88f1-747a031f3de5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497590-z54jt" Jan 31 10:30:30 crc kubenswrapper[4830]: I0131 10:30:30.877833 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0554fbd5-a18f-4a9d-88f1-747a031f3de5-config-volume\") pod \"collect-profiles-29497590-z54jt\" (UID: \"0554fbd5-a18f-4a9d-88f1-747a031f3de5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497590-z54jt" Jan 31 10:30:30 crc kubenswrapper[4830]: I0131 10:30:30.879443 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0554fbd5-a18f-4a9d-88f1-747a031f3de5-config-volume\") pod \"collect-profiles-29497590-z54jt\" (UID: \"0554fbd5-a18f-4a9d-88f1-747a031f3de5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497590-z54jt" Jan 31 10:30:31 crc kubenswrapper[4830]: I0131 10:30:31.169981 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkjh6\" (UniqueName: \"kubernetes.io/projected/0554fbd5-a18f-4a9d-88f1-747a031f3de5-kube-api-access-mkjh6\") pod \"collect-profiles-29497590-z54jt\" (UID: \"0554fbd5-a18f-4a9d-88f1-747a031f3de5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497590-z54jt" Jan 31 10:30:31 crc kubenswrapper[4830]: I0131 10:30:31.170603 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0554fbd5-a18f-4a9d-88f1-747a031f3de5-secret-volume\") pod \"collect-profiles-29497590-z54jt\" (UID: \"0554fbd5-a18f-4a9d-88f1-747a031f3de5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497590-z54jt" Jan 31 10:30:31 crc kubenswrapper[4830]: I0131 10:30:31.298012 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497590-z54jt" Jan 31 10:30:31 crc kubenswrapper[4830]: I0131 10:30:31.458583 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c45f6608-4c27-4322-b60a-3362294e1ab8","Type":"ContainerStarted","Data":"5814f006099f4c9c908b31cd6a47c1100f2c2eae3e0572c8e4c7068d532ec1c7"} Jan 31 10:30:31 crc kubenswrapper[4830]: E0131 10:30:31.835431 4830 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.53:44396->38.102.83.53:38781: write tcp 38.102.83.53:44396->38.102.83.53:38781: write: broken pipe Jan 31 10:30:32 crc kubenswrapper[4830]: I0131 10:30:32.502774 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 10:30:32 crc kubenswrapper[4830]: I0131 10:30:32.503131 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 10:30:32 crc kubenswrapper[4830]: I0131 10:30:32.510346 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 10:30:32 crc kubenswrapper[4830]: I0131 10:30:32.825220 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-jwvm4" podUID="14550547-ce63-48cc-800e-b74235d0daa1" containerName="registry-server" probeResult="failure" output=< Jan 31 10:30:32 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:30:32 crc kubenswrapper[4830]: > Jan 31 10:30:32 crc kubenswrapper[4830]: I0131 10:30:32.919526 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fcmv2" podUID="c361702a-d6db-4925-809d-f08c6dd88a7d" containerName="registry-server" probeResult="failure" output=< Jan 31 10:30:32 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:30:32 crc kubenswrapper[4830]: > Jan 31 10:30:33 crc kubenswrapper[4830]: I0131 10:30:33.493254 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-bbcf59d54-qmgsn" Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.061867 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.203459 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1fa42e50-1a05-499f-9396-a1e5dc1161f6-ssh-key\") pod \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.203557 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1fa42e50-1a05-499f-9396-a1e5dc1161f6-openstack-config\") pod \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.203610 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1fa42e50-1a05-499f-9396-a1e5dc1161f6-openstack-config-secret\") pod \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.203654 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fa42e50-1a05-499f-9396-a1e5dc1161f6-config-data\") pod \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.203683 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1fa42e50-1a05-499f-9396-a1e5dc1161f6-test-operator-ephemeral-temporary\") pod \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.203834 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nchrt\" (UniqueName: \"kubernetes.io/projected/1fa42e50-1a05-499f-9396-a1e5dc1161f6-kube-api-access-nchrt\") pod \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.203887 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1fa42e50-1a05-499f-9396-a1e5dc1161f6-test-operator-ephemeral-workdir\") pod \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.204003 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.204030 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1fa42e50-1a05-499f-9396-a1e5dc1161f6-ca-certs\") pod \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\" (UID: \"1fa42e50-1a05-499f-9396-a1e5dc1161f6\") " Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.205325 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fa42e50-1a05-499f-9396-a1e5dc1161f6-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "1fa42e50-1a05-499f-9396-a1e5dc1161f6" (UID: "1fa42e50-1a05-499f-9396-a1e5dc1161f6"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.206127 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fa42e50-1a05-499f-9396-a1e5dc1161f6-config-data" (OuterVolumeSpecName: "config-data") pod "1fa42e50-1a05-499f-9396-a1e5dc1161f6" (UID: "1fa42e50-1a05-499f-9396-a1e5dc1161f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.207782 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fa42e50-1a05-499f-9396-a1e5dc1161f6-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "1fa42e50-1a05-499f-9396-a1e5dc1161f6" (UID: "1fa42e50-1a05-499f-9396-a1e5dc1161f6"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.263098 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "1fa42e50-1a05-499f-9396-a1e5dc1161f6" (UID: "1fa42e50-1a05-499f-9396-a1e5dc1161f6"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.269022 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fa42e50-1a05-499f-9396-a1e5dc1161f6-kube-api-access-nchrt" (OuterVolumeSpecName: "kube-api-access-nchrt") pod "1fa42e50-1a05-499f-9396-a1e5dc1161f6" (UID: "1fa42e50-1a05-499f-9396-a1e5dc1161f6"). InnerVolumeSpecName "kube-api-access-nchrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.305650 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fa42e50-1a05-499f-9396-a1e5dc1161f6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1fa42e50-1a05-499f-9396-a1e5dc1161f6" (UID: "1fa42e50-1a05-499f-9396-a1e5dc1161f6"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.308295 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nchrt\" (UniqueName: \"kubernetes.io/projected/1fa42e50-1a05-499f-9396-a1e5dc1161f6-kube-api-access-nchrt\") on node \"crc\" DevicePath \"\"" Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.308340 4830 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1fa42e50-1a05-499f-9396-a1e5dc1161f6-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.308386 4830 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.308403 4830 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1fa42e50-1a05-499f-9396-a1e5dc1161f6-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.308415 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fa42e50-1a05-499f-9396-a1e5dc1161f6-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.308430 4830 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1fa42e50-1a05-499f-9396-a1e5dc1161f6-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.322859 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fa42e50-1a05-499f-9396-a1e5dc1161f6-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "1fa42e50-1a05-499f-9396-a1e5dc1161f6" (UID: "1fa42e50-1a05-499f-9396-a1e5dc1161f6"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.325811 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fa42e50-1a05-499f-9396-a1e5dc1161f6-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "1fa42e50-1a05-499f-9396-a1e5dc1161f6" (UID: "1fa42e50-1a05-499f-9396-a1e5dc1161f6"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.336925 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fa42e50-1a05-499f-9396-a1e5dc1161f6-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "1fa42e50-1a05-499f-9396-a1e5dc1161f6" (UID: "1fa42e50-1a05-499f-9396-a1e5dc1161f6"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.349881 4830 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.399797 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497590-z54jt"] Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.412668 4830 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1fa42e50-1a05-499f-9396-a1e5dc1161f6-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.412704 4830 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1fa42e50-1a05-499f-9396-a1e5dc1161f6-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.412717 4830 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.412747 4830 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1fa42e50-1a05-499f-9396-a1e5dc1161f6-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.500321 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497590-z54jt" event={"ID":"0554fbd5-a18f-4a9d-88f1-747a031f3de5","Type":"ContainerStarted","Data":"747f02137e13d7d5ea406a04080203ef9c52dd01ac61e0821a0a8028f5f97f43"} Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.506567 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.507012 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"1fa42e50-1a05-499f-9396-a1e5dc1161f6","Type":"ContainerDied","Data":"8356097a8dc598d2a344aae96266d60c2703096af258027cf9e5ef2e2ef12d93"} Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.510757 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8356097a8dc598d2a344aae96266d60c2703096af258027cf9e5ef2e2ef12d93" Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.535096 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 31 10:30:34 crc kubenswrapper[4830]: I0131 10:30:34.816774 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 31 10:30:35 crc kubenswrapper[4830]: I0131 10:30:35.521185 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497590-z54jt" event={"ID":"0554fbd5-a18f-4a9d-88f1-747a031f3de5","Type":"ContainerStarted","Data":"c68dae5c7bb70807a65542630e4e7dbb4954a0e73e94d86233f1e1f426543a53"} Jan 31 10:30:35 crc kubenswrapper[4830]: I0131 10:30:35.563145 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497590-z54jt" podStartSLOduration=5.556172279 podStartE2EDuration="5.556172279s" podCreationTimestamp="2026-01-31 10:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 10:30:35.546591017 +0000 UTC m=+5380.039953459" watchObservedRunningTime="2026-01-31 10:30:35.556172279 +0000 UTC m=+5380.049534721" Jan 31 10:30:35 crc kubenswrapper[4830]: I0131 10:30:35.849138 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 31 10:30:36 crc kubenswrapper[4830]: I0131 10:30:36.533161 4830 generic.go:334] "Generic (PLEG): container finished" podID="0554fbd5-a18f-4a9d-88f1-747a031f3de5" containerID="c68dae5c7bb70807a65542630e4e7dbb4954a0e73e94d86233f1e1f426543a53" exitCode=0 Jan 31 10:30:36 crc kubenswrapper[4830]: I0131 10:30:36.533251 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497590-z54jt" event={"ID":"0554fbd5-a18f-4a9d-88f1-747a031f3de5","Type":"ContainerDied","Data":"c68dae5c7bb70807a65542630e4e7dbb4954a0e73e94d86233f1e1f426543a53"} Jan 31 10:30:37 crc kubenswrapper[4830]: I0131 10:30:37.371165 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 31 10:30:37 crc kubenswrapper[4830]: I0131 10:30:37.371546 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 31 10:30:37 crc kubenswrapper[4830]: I0131 10:30:37.371370 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: Liveness 
probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 31 10:30:37 crc kubenswrapper[4830]: I0131 10:30:37.371654 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 31 10:30:38 crc kubenswrapper[4830]: I0131 10:30:38.533242 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ckvgq" Jan 31 10:30:38 crc kubenswrapper[4830]: I0131 10:30:38.544038 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497590-z54jt" Jan 31 10:30:38 crc kubenswrapper[4830]: I0131 10:30:38.553639 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497590-z54jt" event={"ID":"0554fbd5-a18f-4a9d-88f1-747a031f3de5","Type":"ContainerDied","Data":"747f02137e13d7d5ea406a04080203ef9c52dd01ac61e0821a0a8028f5f97f43"} Jan 31 10:30:38 crc kubenswrapper[4830]: I0131 10:30:38.553683 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="747f02137e13d7d5ea406a04080203ef9c52dd01ac61e0821a0a8028f5f97f43" Jan 31 10:30:38 crc kubenswrapper[4830]: I0131 10:30:38.553713 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497590-z54jt" Jan 31 10:30:38 crc kubenswrapper[4830]: I0131 10:30:38.623996 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 31 10:30:38 crc kubenswrapper[4830]: I0131 10:30:38.636075 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0554fbd5-a18f-4a9d-88f1-747a031f3de5-secret-volume\") pod \"0554fbd5-a18f-4a9d-88f1-747a031f3de5\" (UID: \"0554fbd5-a18f-4a9d-88f1-747a031f3de5\") " Jan 31 10:30:38 crc kubenswrapper[4830]: I0131 10:30:38.636289 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0554fbd5-a18f-4a9d-88f1-747a031f3de5-config-volume\") pod \"0554fbd5-a18f-4a9d-88f1-747a031f3de5\" (UID: \"0554fbd5-a18f-4a9d-88f1-747a031f3de5\") " Jan 31 10:30:38 crc kubenswrapper[4830]: I0131 10:30:38.636578 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkjh6\" (UniqueName: \"kubernetes.io/projected/0554fbd5-a18f-4a9d-88f1-747a031f3de5-kube-api-access-mkjh6\") pod \"0554fbd5-a18f-4a9d-88f1-747a031f3de5\" (UID: \"0554fbd5-a18f-4a9d-88f1-747a031f3de5\") " Jan 31 10:30:38 crc kubenswrapper[4830]: I0131 10:30:38.640630 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0554fbd5-a18f-4a9d-88f1-747a031f3de5-config-volume" (OuterVolumeSpecName: "config-volume") pod "0554fbd5-a18f-4a9d-88f1-747a031f3de5" (UID: "0554fbd5-a18f-4a9d-88f1-747a031f3de5"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 10:30:38 crc kubenswrapper[4830]: I0131 10:30:38.669508 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0554fbd5-a18f-4a9d-88f1-747a031f3de5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0554fbd5-a18f-4a9d-88f1-747a031f3de5" (UID: "0554fbd5-a18f-4a9d-88f1-747a031f3de5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 10:30:38 crc kubenswrapper[4830]: I0131 10:30:38.706083 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0554fbd5-a18f-4a9d-88f1-747a031f3de5-kube-api-access-mkjh6" (OuterVolumeSpecName: "kube-api-access-mkjh6") pod "0554fbd5-a18f-4a9d-88f1-747a031f3de5" (UID: "0554fbd5-a18f-4a9d-88f1-747a031f3de5"). InnerVolumeSpecName "kube-api-access-mkjh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 10:30:38 crc kubenswrapper[4830]: I0131 10:30:38.740231 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkjh6\" (UniqueName: \"kubernetes.io/projected/0554fbd5-a18f-4a9d-88f1-747a031f3de5-kube-api-access-mkjh6\") on node \"crc\" DevicePath \"\"" Jan 31 10:30:38 crc kubenswrapper[4830]: I0131 10:30:38.740757 4830 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0554fbd5-a18f-4a9d-88f1-747a031f3de5-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 10:30:38 crc kubenswrapper[4830]: I0131 10:30:38.740792 4830 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0554fbd5-a18f-4a9d-88f1-747a031f3de5-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 10:30:38 crc kubenswrapper[4830]: I0131 10:30:38.795145 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 31 10:30:39 crc kubenswrapper[4830]: I0131 10:30:39.649397 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497545-s6jfx"] Jan 31 10:30:39 crc kubenswrapper[4830]: I0131 10:30:39.660365 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497545-s6jfx"] Jan 31 10:30:40 crc kubenswrapper[4830]: I0131 10:30:40.296252 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb5cecb5-4005-43e1-bf40-b620150d746c" path="/var/lib/kubelet/pods/fb5cecb5-4005-43e1-bf40-b620150d746c/volumes" Jan 31 10:30:40 crc kubenswrapper[4830]: I0131 10:30:40.519418 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-56876" podUID="2626e876-9148-4165-a735-a5a1733c014d" containerName="registry-server" probeResult="failure" output=< Jan 31 10:30:40 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:30:40 crc kubenswrapper[4830]: > Jan 31 10:30:40 crc kubenswrapper[4830]: I0131 10:30:40.521117 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-g5pvp" podUID="35d308f6-fcf3-4b01-b26e-5c1848d6ee7d" containerName="registry-server" probeResult="failure" output=< Jan 31 10:30:40 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:30:40 crc kubenswrapper[4830]: > Jan 31 10:30:40 crc kubenswrapper[4830]: I0131 10:30:40.890706 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/cinder-scheduler-0" Jan 31 10:30:42 crc kubenswrapper[4830]: I0131 10:30:42.804072 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-jwvm4" podUID="14550547-ce63-48cc-800e-b74235d0daa1" containerName="registry-server" probeResult="failure" output=< Jan 31 10:30:42 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:30:42 crc kubenswrapper[4830]: > Jan 31 10:30:42 crc kubenswrapper[4830]: I0131 10:30:42.907990 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fcmv2" podUID="c361702a-d6db-4925-809d-f08c6dd88a7d" containerName="registry-server" probeResult="failure" output=< Jan 31 10:30:42 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:30:42 crc kubenswrapper[4830]: > Jan 31 10:30:44 crc kubenswrapper[4830]: I0131 10:30:44.159163 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-74fbb6df4-hrt7k" Jan 31 10:30:44 crc kubenswrapper[4830]: I0131 10:30:44.356698 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 10:30:44 crc kubenswrapper[4830]: I0131 10:30:44.357028 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 10:30:45 crc kubenswrapper[4830]: I0131 10:30:45.342823 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 31 10:30:45 crc kubenswrapper[4830]: E0131 10:30:45.343348 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0554fbd5-a18f-4a9d-88f1-747a031f3de5" containerName="collect-profiles" Jan 31 10:30:45 crc kubenswrapper[4830]: I0131 10:30:45.343968 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0554fbd5-a18f-4a9d-88f1-747a031f3de5" containerName="collect-profiles" Jan 31 10:30:45 crc kubenswrapper[4830]: E0131 10:30:45.344035 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fa42e50-1a05-499f-9396-a1e5dc1161f6" containerName="tempest-tests-tempest-tests-runner" Jan 31 10:30:45 crc kubenswrapper[4830]: I0131 10:30:45.344045 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fa42e50-1a05-499f-9396-a1e5dc1161f6" containerName="tempest-tests-tempest-tests-runner" Jan 31 10:30:45 crc kubenswrapper[4830]: I0131 10:30:45.381246 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="0554fbd5-a18f-4a9d-88f1-747a031f3de5" containerName="collect-profiles" Jan 31 10:30:45 crc kubenswrapper[4830]: I0131 10:30:45.381335 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fa42e50-1a05-499f-9396-a1e5dc1161f6" containerName="tempest-tests-tempest-tests-runner" Jan 31 10:30:45 crc kubenswrapper[4830]: I0131 10:30:45.385453 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 31 10:30:45 crc kubenswrapper[4830]: I0131 10:30:45.385644 4830 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 10:30:45 crc kubenswrapper[4830]: I0131 10:30:45.389461 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-cbc2n" Jan 31 10:30:45 crc kubenswrapper[4830]: I0131 10:30:45.456694 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9a5cf76b-5737-425c-9add-4f45212ca5da\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 10:30:45 crc kubenswrapper[4830]: I0131 10:30:45.456834 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcfls\" (UniqueName: \"kubernetes.io/projected/9a5cf76b-5737-425c-9add-4f45212ca5da-kube-api-access-qcfls\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9a5cf76b-5737-425c-9add-4f45212ca5da\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 10:30:45 crc kubenswrapper[4830]: I0131 10:30:45.559613 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9a5cf76b-5737-425c-9add-4f45212ca5da\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 10:30:45 crc kubenswrapper[4830]: I0131 10:30:45.559737 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcfls\" (UniqueName: \"kubernetes.io/projected/9a5cf76b-5737-425c-9add-4f45212ca5da-kube-api-access-qcfls\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9a5cf76b-5737-425c-9add-4f45212ca5da\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 10:30:45 crc kubenswrapper[4830]: I0131 10:30:45.623648 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9a5cf76b-5737-425c-9add-4f45212ca5da\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 10:30:45 crc kubenswrapper[4830]: I0131 10:30:45.629406 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcfls\" (UniqueName: \"kubernetes.io/projected/9a5cf76b-5737-425c-9add-4f45212ca5da-kube-api-access-qcfls\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9a5cf76b-5737-425c-9add-4f45212ca5da\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 10:30:45 crc kubenswrapper[4830]: I0131 10:30:45.674837 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9a5cf76b-5737-425c-9add-4f45212ca5da\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 10:30:45 crc kubenswrapper[4830]: I0131 10:30:45.725917 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 10:30:46 crc kubenswrapper[4830]: I0131 10:30:46.816897 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 31 10:30:47 crc kubenswrapper[4830]: I0131 10:30:47.371525 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 31 10:30:47 crc kubenswrapper[4830]: I0131 10:30:47.371911 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 31 10:30:47 crc kubenswrapper[4830]: I0131 10:30:47.371559 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8ckt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 31 10:30:47 crc kubenswrapper[4830]: I0131 10:30:47.372009 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-l8ckt" podUID="a8d26ab0-33c3-4eb7-928b-ffba996579d9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 31 10:30:47 crc kubenswrapper[4830]: I0131 10:30:47.694699 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"9a5cf76b-5737-425c-9add-4f45212ca5da","Type":"ContainerStarted","Data":"a1ee6ebcc516049dc92d9609f2b3851f459fad058f7dcc351ad4e2f758a30999"} Jan 31 10:30:49 crc kubenswrapper[4830]: I0131 10:30:49.722019 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"9a5cf76b-5737-425c-9add-4f45212ca5da","Type":"ContainerStarted","Data":"90ae1b96f6b6b9050aa8ea13f19806e5a1d4d53f787f764a5c13ce0c45c30412"} Jan 31 10:30:49 crc kubenswrapper[4830]: I0131 10:30:49.743144 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.488051675 podStartE2EDuration="4.743126041s" podCreationTimestamp="2026-01-31 10:30:45 +0000 UTC" firstStartedPulling="2026-01-31 10:30:47.128894688 +0000 UTC m=+5391.622257130" lastFinishedPulling="2026-01-31 10:30:49.383969054 +0000 UTC m=+5393.877331496" observedRunningTime="2026-01-31 10:30:49.734337481 +0000 UTC m=+5394.227699923" watchObservedRunningTime="2026-01-31 10:30:49.743126041 +0000 UTC m=+5394.236488483" Jan 31 10:30:50 crc kubenswrapper[4830]: I0131 10:30:50.273870 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-g5pvp" podUID="35d308f6-fcf3-4b01-b26e-5c1848d6ee7d" containerName="registry-server" probeResult="failure" output=< Jan 31 10:30:50 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:30:50 crc kubenswrapper[4830]: > Jan 31 10:30:50 crc kubenswrapper[4830]: I0131 10:30:50.432126 4830 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-56876" podUID="2626e876-9148-4165-a735-a5a1733c014d" containerName="registry-server" probeResult="failure" output=< Jan 31 10:30:50 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:30:50 crc kubenswrapper[4830]: > Jan 31 10:30:52 crc kubenswrapper[4830]: I0131 10:30:52.729066 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-jwvm4" podUID="14550547-ce63-48cc-800e-b74235d0daa1" containerName="registry-server" probeResult="failure" output=< Jan 31 10:30:52 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:30:52 crc kubenswrapper[4830]: > Jan 31 10:30:52 crc kubenswrapper[4830]: I0131 10:30:52.902134 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fcmv2" podUID="c361702a-d6db-4925-809d-f08c6dd88a7d" containerName="registry-server" probeResult="failure" output=< Jan 31 10:30:52 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:30:52 crc kubenswrapper[4830]: > Jan 31 10:30:57 crc kubenswrapper[4830]: I0131 10:30:57.378767 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-l8ckt" Jan 31 10:30:59 crc kubenswrapper[4830]: I0131 10:30:59.309178 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g5pvp" Jan 31 10:30:59 crc kubenswrapper[4830]: I0131 10:30:59.370268 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g5pvp" Jan 31 10:31:00 crc kubenswrapper[4830]: I0131 10:31:00.434792 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-56876" podUID="2626e876-9148-4165-a735-a5a1733c014d" containerName="registry-server" probeResult="failure" output=< Jan 31 10:31:00 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:31:00 crc kubenswrapper[4830]: > Jan 31 10:31:00 crc kubenswrapper[4830]: I0131 10:31:00.479775 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 10:31:01 crc kubenswrapper[4830]: I0131 10:31:01.715790 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jwvm4" Jan 31 10:31:01 crc kubenswrapper[4830]: I0131 10:31:01.764395 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jwvm4" Jan 31 10:31:01 crc kubenswrapper[4830]: I0131 10:31:01.902119 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fcmv2" Jan 31 10:31:01 crc kubenswrapper[4830]: I0131 10:31:01.966464 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fcmv2" Jan 31 10:31:09 crc kubenswrapper[4830]: I0131 10:31:09.142203 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n662q"] Jan 31 10:31:09 crc kubenswrapper[4830]: I0131 10:31:09.194812 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n662q"] Jan 31 10:31:09 crc kubenswrapper[4830]: I0131 10:31:09.194972 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n662q" Jan 31 10:31:09 crc kubenswrapper[4830]: I0131 10:31:09.277007 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c03ad834-8641-41fb-b29f-6f2b9e895501-catalog-content\") pod \"community-operators-n662q\" (UID: \"c03ad834-8641-41fb-b29f-6f2b9e895501\") " pod="openshift-marketplace/community-operators-n662q" Jan 31 10:31:09 crc kubenswrapper[4830]: I0131 10:31:09.277083 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfg82\" (UniqueName: \"kubernetes.io/projected/c03ad834-8641-41fb-b29f-6f2b9e895501-kube-api-access-kfg82\") pod \"community-operators-n662q\" (UID: \"c03ad834-8641-41fb-b29f-6f2b9e895501\") " pod="openshift-marketplace/community-operators-n662q" Jan 31 10:31:09 crc kubenswrapper[4830]: I0131 10:31:09.277271 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c03ad834-8641-41fb-b29f-6f2b9e895501-utilities\") pod \"community-operators-n662q\" (UID: \"c03ad834-8641-41fb-b29f-6f2b9e895501\") " pod="openshift-marketplace/community-operators-n662q" Jan 31 10:31:09 crc kubenswrapper[4830]: I0131 10:31:09.380274 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c03ad834-8641-41fb-b29f-6f2b9e895501-catalog-content\") pod \"community-operators-n662q\" (UID: \"c03ad834-8641-41fb-b29f-6f2b9e895501\") " pod="openshift-marketplace/community-operators-n662q" Jan 31 10:31:09 crc kubenswrapper[4830]: I0131 10:31:09.381206 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfg82\" (UniqueName: \"kubernetes.io/projected/c03ad834-8641-41fb-b29f-6f2b9e895501-kube-api-access-kfg82\") pod \"community-operators-n662q\" (UID: \"c03ad834-8641-41fb-b29f-6f2b9e895501\") " pod="openshift-marketplace/community-operators-n662q" Jan 31 10:31:09 crc kubenswrapper[4830]: I0131 10:31:09.382190 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c03ad834-8641-41fb-b29f-6f2b9e895501-utilities\") pod \"community-operators-n662q\" (UID: \"c03ad834-8641-41fb-b29f-6f2b9e895501\") " pod="openshift-marketplace/community-operators-n662q" Jan 31 10:31:09 crc kubenswrapper[4830]: I0131 10:31:09.383481 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c03ad834-8641-41fb-b29f-6f2b9e895501-catalog-content\") pod \"community-operators-n662q\" (UID: \"c03ad834-8641-41fb-b29f-6f2b9e895501\") " pod="openshift-marketplace/community-operators-n662q" Jan 31 10:31:09 crc kubenswrapper[4830]: I0131 10:31:09.384156 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c03ad834-8641-41fb-b29f-6f2b9e895501-utilities\") pod \"community-operators-n662q\" (UID: \"c03ad834-8641-41fb-b29f-6f2b9e895501\") " pod="openshift-marketplace/community-operators-n662q" Jan 31 10:31:09 crc kubenswrapper[4830]: I0131 10:31:09.403243 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfg82\" (UniqueName: \"kubernetes.io/projected/c03ad834-8641-41fb-b29f-6f2b9e895501-kube-api-access-kfg82\") pod 
\"community-operators-n662q\" (UID: \"c03ad834-8641-41fb-b29f-6f2b9e895501\") " pod="openshift-marketplace/community-operators-n662q" Jan 31 10:31:09 crc kubenswrapper[4830]: I0131 10:31:09.539590 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n662q" Jan 31 10:31:10 crc kubenswrapper[4830]: I0131 10:31:10.431682 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-56876" podUID="2626e876-9148-4165-a735-a5a1733c014d" containerName="registry-server" probeResult="failure" output=< Jan 31 10:31:10 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:31:10 crc kubenswrapper[4830]: > Jan 31 10:31:10 crc kubenswrapper[4830]: I0131 10:31:10.988835 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n662q"] Jan 31 10:31:11 crc kubenswrapper[4830]: I0131 10:31:11.995810 4830 generic.go:334] "Generic (PLEG): container finished" podID="c03ad834-8641-41fb-b29f-6f2b9e895501" containerID="31f25008099fb8b018dc5bd6809c387ff28bfec2a2db031cbae19d430280f28e" exitCode=0 Jan 31 10:31:11 crc kubenswrapper[4830]: I0131 10:31:11.995906 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n662q" event={"ID":"c03ad834-8641-41fb-b29f-6f2b9e895501","Type":"ContainerDied","Data":"31f25008099fb8b018dc5bd6809c387ff28bfec2a2db031cbae19d430280f28e"} Jan 31 10:31:11 crc kubenswrapper[4830]: I0131 10:31:11.996181 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n662q" event={"ID":"c03ad834-8641-41fb-b29f-6f2b9e895501","Type":"ContainerStarted","Data":"7304a474402ef2d0db56c33e27eea98254560ca7a151e18ea0e16267bb0e3a18"} Jan 31 10:31:13 crc kubenswrapper[4830]: I0131 10:31:13.012671 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n662q" event={"ID":"c03ad834-8641-41fb-b29f-6f2b9e895501","Type":"ContainerStarted","Data":"9cd0db92d6bafb5cc2c095531c2415c472faf015be916321d145e28f8c506aae"} Jan 31 10:31:14 crc kubenswrapper[4830]: I0131 10:31:14.353271 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 10:31:14 crc kubenswrapper[4830]: I0131 10:31:14.353335 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 10:31:18 crc kubenswrapper[4830]: I0131 10:31:18.094392 4830 generic.go:334] "Generic (PLEG): container finished" podID="c03ad834-8641-41fb-b29f-6f2b9e895501" containerID="9cd0db92d6bafb5cc2c095531c2415c472faf015be916321d145e28f8c506aae" exitCode=0 Jan 31 10:31:18 crc kubenswrapper[4830]: I0131 10:31:18.094480 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n662q" event={"ID":"c03ad834-8641-41fb-b29f-6f2b9e895501","Type":"ContainerDied","Data":"9cd0db92d6bafb5cc2c095531c2415c472faf015be916321d145e28f8c506aae"} Jan 31 10:31:19 crc kubenswrapper[4830]: I0131 10:31:19.206977 4830 scope.go:117] 
"RemoveContainer" containerID="74f8b9685969693e5971df89538f336d0957a5b3af1f58cf4cefb74a71fa3b33" Jan 31 10:31:20 crc kubenswrapper[4830]: I0131 10:31:20.119375 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n662q" event={"ID":"c03ad834-8641-41fb-b29f-6f2b9e895501","Type":"ContainerStarted","Data":"021de52ce1afd0080b0214a3795e04cd863fc8c9ee2d008cc764652901471df2"} Jan 31 10:31:20 crc kubenswrapper[4830]: I0131 10:31:20.153499 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n662q" podStartSLOduration=4.383486235 podStartE2EDuration="11.153475837s" podCreationTimestamp="2026-01-31 10:31:09 +0000 UTC" firstStartedPulling="2026-01-31 10:31:11.999237046 +0000 UTC m=+5416.492599508" lastFinishedPulling="2026-01-31 10:31:18.769226668 +0000 UTC m=+5423.262589110" observedRunningTime="2026-01-31 10:31:20.140593031 +0000 UTC m=+5424.633955473" watchObservedRunningTime="2026-01-31 10:31:20.153475837 +0000 UTC m=+5424.646838279" Jan 31 10:31:20 crc kubenswrapper[4830]: I0131 10:31:20.432344 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-56876" podUID="2626e876-9148-4165-a735-a5a1733c014d" containerName="registry-server" probeResult="failure" output=< Jan 31 10:31:20 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:31:20 crc kubenswrapper[4830]: > Jan 31 10:31:29 crc kubenswrapper[4830]: I0131 10:31:29.443043 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-56876" Jan 31 10:31:29 crc kubenswrapper[4830]: I0131 10:31:29.518771 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-56876" Jan 31 10:31:29 crc kubenswrapper[4830]: I0131 10:31:29.542835 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n662q" Jan 31 10:31:29 crc kubenswrapper[4830]: I0131 10:31:29.542891 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n662q" Jan 31 10:31:30 crc kubenswrapper[4830]: I0131 10:31:30.608065 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n662q" podUID="c03ad834-8641-41fb-b29f-6f2b9e895501" containerName="registry-server" probeResult="failure" output=< Jan 31 10:31:30 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:31:30 crc kubenswrapper[4830]: > Jan 31 10:31:38 crc kubenswrapper[4830]: I0131 10:31:38.678074 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7b67z/must-gather-fcflx"] Jan 31 10:31:38 crc kubenswrapper[4830]: I0131 10:31:38.680401 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7b67z/must-gather-fcflx" Jan 31 10:31:38 crc kubenswrapper[4830]: I0131 10:31:38.682397 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-7b67z"/"default-dockercfg-8jpgt" Jan 31 10:31:38 crc kubenswrapper[4830]: I0131 10:31:38.686399 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7b67z"/"openshift-service-ca.crt" Jan 31 10:31:38 crc kubenswrapper[4830]: I0131 10:31:38.686413 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7b67z"/"kube-root-ca.crt" Jan 31 10:31:38 crc kubenswrapper[4830]: I0131 10:31:38.722566 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7b67z/must-gather-fcflx"] Jan 31 10:31:38 crc kubenswrapper[4830]: I0131 10:31:38.880141 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/71ef938b-5a48-4f89-af62-86a680856139-must-gather-output\") pod \"must-gather-fcflx\" (UID: \"71ef938b-5a48-4f89-af62-86a680856139\") " pod="openshift-must-gather-7b67z/must-gather-fcflx" Jan 31 10:31:38 crc kubenswrapper[4830]: I0131 10:31:38.881213 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thxz4\" (UniqueName: \"kubernetes.io/projected/71ef938b-5a48-4f89-af62-86a680856139-kube-api-access-thxz4\") pod \"must-gather-fcflx\" (UID: \"71ef938b-5a48-4f89-af62-86a680856139\") " pod="openshift-must-gather-7b67z/must-gather-fcflx" Jan 31 10:31:38 crc kubenswrapper[4830]: I0131 10:31:38.983193 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/71ef938b-5a48-4f89-af62-86a680856139-must-gather-output\") pod \"must-gather-fcflx\" (UID: \"71ef938b-5a48-4f89-af62-86a680856139\") " pod="openshift-must-gather-7b67z/must-gather-fcflx" Jan 31 10:31:38 crc kubenswrapper[4830]: I0131 10:31:38.983270 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thxz4\" (UniqueName: \"kubernetes.io/projected/71ef938b-5a48-4f89-af62-86a680856139-kube-api-access-thxz4\") pod \"must-gather-fcflx\" (UID: \"71ef938b-5a48-4f89-af62-86a680856139\") " pod="openshift-must-gather-7b67z/must-gather-fcflx" Jan 31 10:31:38 crc kubenswrapper[4830]: I0131 10:31:38.984935 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/71ef938b-5a48-4f89-af62-86a680856139-must-gather-output\") pod \"must-gather-fcflx\" (UID: \"71ef938b-5a48-4f89-af62-86a680856139\") " pod="openshift-must-gather-7b67z/must-gather-fcflx" Jan 31 10:31:39 crc kubenswrapper[4830]: I0131 10:31:39.013630 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thxz4\" (UniqueName: \"kubernetes.io/projected/71ef938b-5a48-4f89-af62-86a680856139-kube-api-access-thxz4\") pod \"must-gather-fcflx\" (UID: \"71ef938b-5a48-4f89-af62-86a680856139\") " pod="openshift-must-gather-7b67z/must-gather-fcflx" Jan 31 10:31:39 crc kubenswrapper[4830]: I0131 10:31:39.309194 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7b67z/must-gather-fcflx" Jan 31 10:31:39 crc kubenswrapper[4830]: I0131 10:31:39.592113 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n662q" Jan 31 10:31:39 crc kubenswrapper[4830]: I0131 10:31:39.652530 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n662q" Jan 31 10:31:40 crc kubenswrapper[4830]: I0131 10:31:40.237439 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7b67z/must-gather-fcflx"] Jan 31 10:31:40 crc kubenswrapper[4830]: I0131 10:31:40.363682 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7b67z/must-gather-fcflx" event={"ID":"71ef938b-5a48-4f89-af62-86a680856139","Type":"ContainerStarted","Data":"4c5ff536d85f1ddc515aaa59825095c4730c3368d048ee8878e8158773b4fbc7"} Jan 31 10:31:43 crc kubenswrapper[4830]: I0131 10:31:43.125968 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n662q"] Jan 31 10:31:43 crc kubenswrapper[4830]: I0131 10:31:43.131096 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n662q" podUID="c03ad834-8641-41fb-b29f-6f2b9e895501" containerName="registry-server" containerID="cri-o://021de52ce1afd0080b0214a3795e04cd863fc8c9ee2d008cc764652901471df2" gracePeriod=2 Jan 31 10:31:43 crc kubenswrapper[4830]: I0131 10:31:43.405578 4830 generic.go:334] "Generic (PLEG): container finished" podID="c03ad834-8641-41fb-b29f-6f2b9e895501" containerID="021de52ce1afd0080b0214a3795e04cd863fc8c9ee2d008cc764652901471df2" exitCode=0 Jan 31 10:31:43 crc kubenswrapper[4830]: I0131 10:31:43.405599 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n662q" event={"ID":"c03ad834-8641-41fb-b29f-6f2b9e895501","Type":"ContainerDied","Data":"021de52ce1afd0080b0214a3795e04cd863fc8c9ee2d008cc764652901471df2"} Jan 31 10:31:44 crc kubenswrapper[4830]: I0131 10:31:44.353455 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 10:31:44 crc kubenswrapper[4830]: I0131 10:31:44.353510 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 10:31:44 crc kubenswrapper[4830]: I0131 10:31:44.353547 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 10:31:44 crc kubenswrapper[4830]: I0131 10:31:44.355015 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7966079b95ee8b0c6a0eeec05fdab8c0893a01751591c9e2a9fe770dbf810c5f"} pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 10:31:44 crc kubenswrapper[4830]: I0131 10:31:44.355071 4830 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" containerID="cri-o://7966079b95ee8b0c6a0eeec05fdab8c0893a01751591c9e2a9fe770dbf810c5f" gracePeriod=600 Jan 31 10:31:45 crc kubenswrapper[4830]: I0131 10:31:45.113646 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n662q" Jan 31 10:31:45 crc kubenswrapper[4830]: I0131 10:31:45.228894 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c03ad834-8641-41fb-b29f-6f2b9e895501-catalog-content\") pod \"c03ad834-8641-41fb-b29f-6f2b9e895501\" (UID: \"c03ad834-8641-41fb-b29f-6f2b9e895501\") " Jan 31 10:31:45 crc kubenswrapper[4830]: I0131 10:31:45.229079 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfg82\" (UniqueName: \"kubernetes.io/projected/c03ad834-8641-41fb-b29f-6f2b9e895501-kube-api-access-kfg82\") pod \"c03ad834-8641-41fb-b29f-6f2b9e895501\" (UID: \"c03ad834-8641-41fb-b29f-6f2b9e895501\") " Jan 31 10:31:45 crc kubenswrapper[4830]: I0131 10:31:45.229278 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c03ad834-8641-41fb-b29f-6f2b9e895501-utilities\") pod \"c03ad834-8641-41fb-b29f-6f2b9e895501\" (UID: \"c03ad834-8641-41fb-b29f-6f2b9e895501\") " Jan 31 10:31:45 crc kubenswrapper[4830]: I0131 10:31:45.241474 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c03ad834-8641-41fb-b29f-6f2b9e895501-utilities" (OuterVolumeSpecName: "utilities") pod "c03ad834-8641-41fb-b29f-6f2b9e895501" (UID: "c03ad834-8641-41fb-b29f-6f2b9e895501"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:31:45 crc kubenswrapper[4830]: I0131 10:31:45.273473 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ad834-8641-41fb-b29f-6f2b9e895501-kube-api-access-kfg82" (OuterVolumeSpecName: "kube-api-access-kfg82") pod "c03ad834-8641-41fb-b29f-6f2b9e895501" (UID: "c03ad834-8641-41fb-b29f-6f2b9e895501"). InnerVolumeSpecName "kube-api-access-kfg82". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 10:31:45 crc kubenswrapper[4830]: I0131 10:31:45.332230 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfg82\" (UniqueName: \"kubernetes.io/projected/c03ad834-8641-41fb-b29f-6f2b9e895501-kube-api-access-kfg82\") on node \"crc\" DevicePath \"\"" Jan 31 10:31:45 crc kubenswrapper[4830]: I0131 10:31:45.332579 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c03ad834-8641-41fb-b29f-6f2b9e895501-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 10:31:45 crc kubenswrapper[4830]: I0131 10:31:45.335864 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c03ad834-8641-41fb-b29f-6f2b9e895501-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c03ad834-8641-41fb-b29f-6f2b9e895501" (UID: "c03ad834-8641-41fb-b29f-6f2b9e895501"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:31:45 crc kubenswrapper[4830]: I0131 10:31:45.430419 4830 generic.go:334] "Generic (PLEG): container finished" podID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerID="7966079b95ee8b0c6a0eeec05fdab8c0893a01751591c9e2a9fe770dbf810c5f" exitCode=0 Jan 31 10:31:45 crc kubenswrapper[4830]: I0131 10:31:45.430491 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerDied","Data":"7966079b95ee8b0c6a0eeec05fdab8c0893a01751591c9e2a9fe770dbf810c5f"} Jan 31 10:31:45 crc kubenswrapper[4830]: I0131 10:31:45.430547 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerStarted","Data":"3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb"} Jan 31 10:31:45 crc kubenswrapper[4830]: I0131 10:31:45.430575 4830 scope.go:117] "RemoveContainer" containerID="336fbcde4bc39ccadaddbb2c8835d20ab80032b9696d8d9d030c1910fa930c14" Jan 31 10:31:45 crc kubenswrapper[4830]: I0131 10:31:45.435427 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c03ad834-8641-41fb-b29f-6f2b9e895501-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 10:31:45 crc kubenswrapper[4830]: I0131 10:31:45.435811 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n662q" event={"ID":"c03ad834-8641-41fb-b29f-6f2b9e895501","Type":"ContainerDied","Data":"7304a474402ef2d0db56c33e27eea98254560ca7a151e18ea0e16267bb0e3a18"} Jan 31 10:31:45 crc kubenswrapper[4830]: I0131 10:31:45.436239 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n662q" Jan 31 10:31:45 crc kubenswrapper[4830]: I0131 10:31:45.485492 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n662q"] Jan 31 10:31:45 crc kubenswrapper[4830]: I0131 10:31:45.495254 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n662q"] Jan 31 10:31:45 crc kubenswrapper[4830]: I0131 10:31:45.630928 4830 scope.go:117] "RemoveContainer" containerID="021de52ce1afd0080b0214a3795e04cd863fc8c9ee2d008cc764652901471df2" Jan 31 10:31:45 crc kubenswrapper[4830]: I0131 10:31:45.706600 4830 scope.go:117] "RemoveContainer" containerID="9cd0db92d6bafb5cc2c095531c2415c472faf015be916321d145e28f8c506aae" Jan 31 10:31:45 crc kubenswrapper[4830]: I0131 10:31:45.740897 4830 scope.go:117] "RemoveContainer" containerID="31f25008099fb8b018dc5bd6809c387ff28bfec2a2db031cbae19d430280f28e" Jan 31 10:31:46 crc kubenswrapper[4830]: I0131 10:31:46.278947 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ad834-8641-41fb-b29f-6f2b9e895501" path="/var/lib/kubelet/pods/c03ad834-8641-41fb-b29f-6f2b9e895501/volumes" Jan 31 10:31:46 crc kubenswrapper[4830]: I0131 10:31:46.448473 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7b67z/must-gather-fcflx" event={"ID":"71ef938b-5a48-4f89-af62-86a680856139","Type":"ContainerStarted","Data":"6273ca33f8a1fb1e8d19812845dc6358d83ca0e20300badc81a3dc8578790499"} Jan 31 10:31:46 crc kubenswrapper[4830]: I0131 10:31:46.448615 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7b67z/must-gather-fcflx" event={"ID":"71ef938b-5a48-4f89-af62-86a680856139","Type":"ContainerStarted","Data":"cb17e5846c66d872c817ced4fee10babf0b444ab2dbed64e4f46d44710127835"} Jan 31 10:31:46 crc kubenswrapper[4830]: I0131 10:31:46.469958 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7b67z/must-gather-fcflx" podStartSLOduration=4.013666029 podStartE2EDuration="8.46993361s" podCreationTimestamp="2026-01-31 10:31:38 +0000 UTC" firstStartedPulling="2026-01-31 10:31:40.252917132 +0000 UTC m=+5444.746279574" lastFinishedPulling="2026-01-31 10:31:44.709184713 +0000 UTC m=+5449.202547155" observedRunningTime="2026-01-31 10:31:46.464811484 +0000 UTC m=+5450.958173946" watchObservedRunningTime="2026-01-31 10:31:46.46993361 +0000 UTC m=+5450.963296062" Jan 31 10:31:49 crc kubenswrapper[4830]: I0131 10:31:49.416845 4830 trace.go:236] Trace[1396049671]: "Calculate volume metrics of catalog-content for pod openshift-marketplace/community-operators-fcmv2" (31-Jan-2026 10:31:48.169) (total time: 1243ms): Jan 31 10:31:49 crc kubenswrapper[4830]: Trace[1396049671]: [1.243794656s] [1.243794656s] END Jan 31 10:31:53 crc kubenswrapper[4830]: I0131 10:31:53.276955 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7b67z/crc-debug-tmrvt"] Jan 31 10:31:53 crc kubenswrapper[4830]: E0131 10:31:53.278123 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c03ad834-8641-41fb-b29f-6f2b9e895501" containerName="extract-utilities" Jan 31 10:31:53 crc kubenswrapper[4830]: I0131 10:31:53.280481 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c03ad834-8641-41fb-b29f-6f2b9e895501" containerName="extract-utilities" Jan 31 10:31:53 crc kubenswrapper[4830]: E0131 10:31:53.280539 4830 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c03ad834-8641-41fb-b29f-6f2b9e895501" containerName="extract-content" Jan 31 10:31:53 crc kubenswrapper[4830]: I0131 10:31:53.280551 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c03ad834-8641-41fb-b29f-6f2b9e895501" containerName="extract-content" Jan 31 10:31:53 crc kubenswrapper[4830]: E0131 10:31:53.280597 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c03ad834-8641-41fb-b29f-6f2b9e895501" containerName="registry-server" Jan 31 10:31:53 crc kubenswrapper[4830]: I0131 10:31:53.280607 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c03ad834-8641-41fb-b29f-6f2b9e895501" containerName="registry-server" Jan 31 10:31:53 crc kubenswrapper[4830]: I0131 10:31:53.285979 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="c03ad834-8641-41fb-b29f-6f2b9e895501" containerName="registry-server" Jan 31 10:31:53 crc kubenswrapper[4830]: I0131 10:31:53.287653 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7b67z/crc-debug-tmrvt" Jan 31 10:31:53 crc kubenswrapper[4830]: I0131 10:31:53.345096 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/beab1865-9319-40e8-9056-a0c867c7d04d-host\") pod \"crc-debug-tmrvt\" (UID: \"beab1865-9319-40e8-9056-a0c867c7d04d\") " pod="openshift-must-gather-7b67z/crc-debug-tmrvt" Jan 31 10:31:53 crc kubenswrapper[4830]: I0131 10:31:53.345371 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb9pz\" (UniqueName: \"kubernetes.io/projected/beab1865-9319-40e8-9056-a0c867c7d04d-kube-api-access-kb9pz\") pod \"crc-debug-tmrvt\" (UID: \"beab1865-9319-40e8-9056-a0c867c7d04d\") " pod="openshift-must-gather-7b67z/crc-debug-tmrvt" Jan 31 10:31:53 crc kubenswrapper[4830]: I0131 10:31:53.447695 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/beab1865-9319-40e8-9056-a0c867c7d04d-host\") pod \"crc-debug-tmrvt\" (UID: \"beab1865-9319-40e8-9056-a0c867c7d04d\") " pod="openshift-must-gather-7b67z/crc-debug-tmrvt" Jan 31 10:31:53 crc kubenswrapper[4830]: I0131 10:31:53.447886 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb9pz\" (UniqueName: \"kubernetes.io/projected/beab1865-9319-40e8-9056-a0c867c7d04d-kube-api-access-kb9pz\") pod \"crc-debug-tmrvt\" (UID: \"beab1865-9319-40e8-9056-a0c867c7d04d\") " pod="openshift-must-gather-7b67z/crc-debug-tmrvt" Jan 31 10:31:53 crc kubenswrapper[4830]: I0131 10:31:53.449427 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/beab1865-9319-40e8-9056-a0c867c7d04d-host\") pod \"crc-debug-tmrvt\" (UID: \"beab1865-9319-40e8-9056-a0c867c7d04d\") " pod="openshift-must-gather-7b67z/crc-debug-tmrvt" Jan 31 10:31:53 crc kubenswrapper[4830]: I0131 10:31:53.486853 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb9pz\" (UniqueName: \"kubernetes.io/projected/beab1865-9319-40e8-9056-a0c867c7d04d-kube-api-access-kb9pz\") pod \"crc-debug-tmrvt\" (UID: \"beab1865-9319-40e8-9056-a0c867c7d04d\") " pod="openshift-must-gather-7b67z/crc-debug-tmrvt" Jan 31 10:31:53 crc kubenswrapper[4830]: I0131 10:31:53.611899 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7b67z/crc-debug-tmrvt" Jan 31 10:31:54 crc kubenswrapper[4830]: I0131 10:31:54.568258 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7b67z/crc-debug-tmrvt" event={"ID":"beab1865-9319-40e8-9056-a0c867c7d04d","Type":"ContainerStarted","Data":"b11625d8bd58e389f3f3dc3a7fea043609cf738f7ad77b2a36e8bc822850ae7c"} Jan 31 10:32:10 crc kubenswrapper[4830]: E0131 10:32:10.096660 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Jan 31 10:32:10 crc kubenswrapper[4830]: E0131 10:32:10.102535 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kb9pz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-tmrvt_openshift-must-gather-7b67z(beab1865-9319-40e8-9056-a0c867c7d04d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 10:32:10 crc kubenswrapper[4830]: E0131 10:32:10.103797 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-7b67z/crc-debug-tmrvt" podUID="beab1865-9319-40e8-9056-a0c867c7d04d" Jan 31 10:32:10 crc 
kubenswrapper[4830]: E0131 10:32:10.845065 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-7b67z/crc-debug-tmrvt" podUID="beab1865-9319-40e8-9056-a0c867c7d04d" Jan 31 10:32:18 crc kubenswrapper[4830]: I0131 10:32:18.946373 4830 generic.go:334] "Generic (PLEG): container finished" podID="45903f73-e8ae-4e54-b650-f0090e9436b3" containerID="c4da751ed7e78efc6b02a950d82b969bca3c58873a46feefa2b13814f5949365" exitCode=0 Jan 31 10:32:18 crc kubenswrapper[4830]: I0131 10:32:18.946795 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6" event={"ID":"45903f73-e8ae-4e54-b650-f0090e9436b3","Type":"ContainerDied","Data":"c4da751ed7e78efc6b02a950d82b969bca3c58873a46feefa2b13814f5949365"} Jan 31 10:32:19 crc kubenswrapper[4830]: I0131 10:32:19.961570 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6" event={"ID":"45903f73-e8ae-4e54-b650-f0090e9436b3","Type":"ContainerStarted","Data":"bd88dfec23acff13a05788eae231d9bfae40c98f34a8ecdbfa117b3c745e4333"} Jan 31 10:32:22 crc kubenswrapper[4830]: I0131 10:32:22.256261 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 10:32:24 crc kubenswrapper[4830]: I0131 10:32:24.011445 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7b67z/crc-debug-tmrvt" event={"ID":"beab1865-9319-40e8-9056-a0c867c7d04d","Type":"ContainerStarted","Data":"8ec17bdd09bf12fe8c6ed358dd7051b71f1c1b75afbdccd1c05806726c8588c4"} Jan 31 10:32:24 crc kubenswrapper[4830]: I0131 10:32:24.052859 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7b67z/crc-debug-tmrvt" podStartSLOduration=2.038024753 podStartE2EDuration="31.05283842s" podCreationTimestamp="2026-01-31 10:31:53 +0000 UTC" firstStartedPulling="2026-01-31 10:31:53.656629235 +0000 UTC m=+5458.149991677" lastFinishedPulling="2026-01-31 10:32:22.671442902 +0000 UTC m=+5487.164805344" observedRunningTime="2026-01-31 10:32:24.048311521 +0000 UTC m=+5488.541673963" watchObservedRunningTime="2026-01-31 10:32:24.05283842 +0000 UTC m=+5488.546200862" Jan 31 10:32:27 crc kubenswrapper[4830]: I0131 10:32:27.115231 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6" Jan 31 10:32:27 crc kubenswrapper[4830]: I0131 10:32:27.115928 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6" Jan 31 10:32:47 crc kubenswrapper[4830]: I0131 10:32:47.121602 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6" Jan 31 10:32:47 crc kubenswrapper[4830]: I0131 10:32:47.127540 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-6cdc866fc6-9thf6" Jan 31 10:33:18 crc kubenswrapper[4830]: I0131 10:33:18.715274 4830 generic.go:334] "Generic (PLEG): container finished" podID="beab1865-9319-40e8-9056-a0c867c7d04d" containerID="8ec17bdd09bf12fe8c6ed358dd7051b71f1c1b75afbdccd1c05806726c8588c4" exitCode=0 Jan 31 10:33:18 crc kubenswrapper[4830]: I0131 10:33:18.715367 
4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7b67z/crc-debug-tmrvt" event={"ID":"beab1865-9319-40e8-9056-a0c867c7d04d","Type":"ContainerDied","Data":"8ec17bdd09bf12fe8c6ed358dd7051b71f1c1b75afbdccd1c05806726c8588c4"} Jan 31 10:33:19 crc kubenswrapper[4830]: I0131 10:33:19.660209 4830 scope.go:117] "RemoveContainer" containerID="e4ffc309f61011d1bbb1dbe0fe22f7c82717ee384f3ac8052210580ef79f1a9c" Jan 31 10:33:19 crc kubenswrapper[4830]: I0131 10:33:19.719984 4830 scope.go:117] "RemoveContainer" containerID="63542ec685aed86818edace9246d8f01d9b7192be28f0a3ff86ffa8f4460d4d5" Jan 31 10:33:19 crc kubenswrapper[4830]: I0131 10:33:19.825532 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7b67z/crc-debug-tmrvt" Jan 31 10:33:19 crc kubenswrapper[4830]: I0131 10:33:19.832451 4830 scope.go:117] "RemoveContainer" containerID="8bceb12af9febada80ab835c715de3ed794492bbcd051062ade47e00f19d63c0" Jan 31 10:33:19 crc kubenswrapper[4830]: I0131 10:33:19.870919 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7b67z/crc-debug-tmrvt"] Jan 31 10:33:19 crc kubenswrapper[4830]: I0131 10:33:19.879877 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7b67z/crc-debug-tmrvt"] Jan 31 10:33:19 crc kubenswrapper[4830]: I0131 10:33:19.920360 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/beab1865-9319-40e8-9056-a0c867c7d04d-host\") pod \"beab1865-9319-40e8-9056-a0c867c7d04d\" (UID: \"beab1865-9319-40e8-9056-a0c867c7d04d\") " Jan 31 10:33:19 crc kubenswrapper[4830]: I0131 10:33:19.920474 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beab1865-9319-40e8-9056-a0c867c7d04d-host" (OuterVolumeSpecName: "host") pod "beab1865-9319-40e8-9056-a0c867c7d04d" (UID: "beab1865-9319-40e8-9056-a0c867c7d04d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 10:33:19 crc kubenswrapper[4830]: I0131 10:33:19.920988 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb9pz\" (UniqueName: \"kubernetes.io/projected/beab1865-9319-40e8-9056-a0c867c7d04d-kube-api-access-kb9pz\") pod \"beab1865-9319-40e8-9056-a0c867c7d04d\" (UID: \"beab1865-9319-40e8-9056-a0c867c7d04d\") " Jan 31 10:33:19 crc kubenswrapper[4830]: I0131 10:33:19.921547 4830 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/beab1865-9319-40e8-9056-a0c867c7d04d-host\") on node \"crc\" DevicePath \"\"" Jan 31 10:33:19 crc kubenswrapper[4830]: I0131 10:33:19.927901 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beab1865-9319-40e8-9056-a0c867c7d04d-kube-api-access-kb9pz" (OuterVolumeSpecName: "kube-api-access-kb9pz") pod "beab1865-9319-40e8-9056-a0c867c7d04d" (UID: "beab1865-9319-40e8-9056-a0c867c7d04d"). InnerVolumeSpecName "kube-api-access-kb9pz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 10:33:20 crc kubenswrapper[4830]: I0131 10:33:20.023573 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb9pz\" (UniqueName: \"kubernetes.io/projected/beab1865-9319-40e8-9056-a0c867c7d04d-kube-api-access-kb9pz\") on node \"crc\" DevicePath \"\"" Jan 31 10:33:20 crc kubenswrapper[4830]: I0131 10:33:20.269521 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beab1865-9319-40e8-9056-a0c867c7d04d" path="/var/lib/kubelet/pods/beab1865-9319-40e8-9056-a0c867c7d04d/volumes" Jan 31 10:33:20 crc kubenswrapper[4830]: I0131 10:33:20.743388 4830 scope.go:117] "RemoveContainer" containerID="8ec17bdd09bf12fe8c6ed358dd7051b71f1c1b75afbdccd1c05806726c8588c4" Jan 31 10:33:20 crc kubenswrapper[4830]: I0131 10:33:20.744511 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7b67z/crc-debug-tmrvt" Jan 31 10:33:21 crc kubenswrapper[4830]: I0131 10:33:21.044214 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7b67z/crc-debug-fwd2l"] Jan 31 10:33:21 crc kubenswrapper[4830]: E0131 10:33:21.044735 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beab1865-9319-40e8-9056-a0c867c7d04d" containerName="container-00" Jan 31 10:33:21 crc kubenswrapper[4830]: I0131 10:33:21.044772 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="beab1865-9319-40e8-9056-a0c867c7d04d" containerName="container-00" Jan 31 10:33:21 crc kubenswrapper[4830]: I0131 10:33:21.045003 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="beab1865-9319-40e8-9056-a0c867c7d04d" containerName="container-00" Jan 31 10:33:21 crc kubenswrapper[4830]: I0131 10:33:21.045765 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7b67z/crc-debug-fwd2l" Jan 31 10:33:21 crc kubenswrapper[4830]: I0131 10:33:21.147784 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnz5j\" (UniqueName: \"kubernetes.io/projected/f86294ab-29ff-42ff-a1f8-e362eec88b79-kube-api-access-xnz5j\") pod \"crc-debug-fwd2l\" (UID: \"f86294ab-29ff-42ff-a1f8-e362eec88b79\") " pod="openshift-must-gather-7b67z/crc-debug-fwd2l" Jan 31 10:33:21 crc kubenswrapper[4830]: I0131 10:33:21.147944 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f86294ab-29ff-42ff-a1f8-e362eec88b79-host\") pod \"crc-debug-fwd2l\" (UID: \"f86294ab-29ff-42ff-a1f8-e362eec88b79\") " pod="openshift-must-gather-7b67z/crc-debug-fwd2l" Jan 31 10:33:21 crc kubenswrapper[4830]: I0131 10:33:21.249037 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnz5j\" (UniqueName: \"kubernetes.io/projected/f86294ab-29ff-42ff-a1f8-e362eec88b79-kube-api-access-xnz5j\") pod \"crc-debug-fwd2l\" (UID: \"f86294ab-29ff-42ff-a1f8-e362eec88b79\") " pod="openshift-must-gather-7b67z/crc-debug-fwd2l" Jan 31 10:33:21 crc kubenswrapper[4830]: I0131 10:33:21.249410 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f86294ab-29ff-42ff-a1f8-e362eec88b79-host\") pod \"crc-debug-fwd2l\" (UID: \"f86294ab-29ff-42ff-a1f8-e362eec88b79\") " pod="openshift-must-gather-7b67z/crc-debug-fwd2l" Jan 31 10:33:21 crc kubenswrapper[4830]: I0131 10:33:21.249484 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f86294ab-29ff-42ff-a1f8-e362eec88b79-host\") pod \"crc-debug-fwd2l\" (UID: \"f86294ab-29ff-42ff-a1f8-e362eec88b79\") " pod="openshift-must-gather-7b67z/crc-debug-fwd2l" Jan 31 10:33:21 crc kubenswrapper[4830]: I0131 10:33:21.271059 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnz5j\" (UniqueName: \"kubernetes.io/projected/f86294ab-29ff-42ff-a1f8-e362eec88b79-kube-api-access-xnz5j\") pod \"crc-debug-fwd2l\" (UID: \"f86294ab-29ff-42ff-a1f8-e362eec88b79\") " pod="openshift-must-gather-7b67z/crc-debug-fwd2l" Jan 31 10:33:21 crc kubenswrapper[4830]: I0131 10:33:21.366158 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7b67z/crc-debug-fwd2l" Jan 31 10:33:21 crc kubenswrapper[4830]: I0131 10:33:21.756310 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7b67z/crc-debug-fwd2l" event={"ID":"f86294ab-29ff-42ff-a1f8-e362eec88b79","Type":"ContainerStarted","Data":"7e1842dfc6a2d4ed042b69ad0b31cd48806e08c2f18ebe68a31631015b63da42"} Jan 31 10:33:21 crc kubenswrapper[4830]: I0131 10:33:21.756577 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7b67z/crc-debug-fwd2l" event={"ID":"f86294ab-29ff-42ff-a1f8-e362eec88b79","Type":"ContainerStarted","Data":"bcafe78beda55c90d27c6e6f2a7c9a5030c5561fe6a977268d3ae0705d03c51e"} Jan 31 10:33:21 crc kubenswrapper[4830]: I0131 10:33:21.777991 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7b67z/crc-debug-fwd2l" podStartSLOduration=0.777944929 podStartE2EDuration="777.944929ms" podCreationTimestamp="2026-01-31 10:33:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 10:33:21.770746954 +0000 UTC m=+5546.264109396" watchObservedRunningTime="2026-01-31 10:33:21.777944929 +0000 UTC m=+5546.271307371" Jan 31 10:33:22 crc kubenswrapper[4830]: I0131 10:33:22.770245 4830 generic.go:334] "Generic (PLEG): container finished" podID="f86294ab-29ff-42ff-a1f8-e362eec88b79" containerID="7e1842dfc6a2d4ed042b69ad0b31cd48806e08c2f18ebe68a31631015b63da42" exitCode=0 Jan 31 10:33:22 crc kubenswrapper[4830]: I0131 10:33:22.770335 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7b67z/crc-debug-fwd2l" event={"ID":"f86294ab-29ff-42ff-a1f8-e362eec88b79","Type":"ContainerDied","Data":"7e1842dfc6a2d4ed042b69ad0b31cd48806e08c2f18ebe68a31631015b63da42"} Jan 31 10:33:23 crc kubenswrapper[4830]: I0131 10:33:23.911283 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7b67z/crc-debug-fwd2l" Jan 31 10:33:23 crc kubenswrapper[4830]: I0131 10:33:23.952204 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7b67z/crc-debug-fwd2l"] Jan 31 10:33:23 crc kubenswrapper[4830]: I0131 10:33:23.961960 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7b67z/crc-debug-fwd2l"] Jan 31 10:33:24 crc kubenswrapper[4830]: I0131 10:33:24.114693 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnz5j\" (UniqueName: \"kubernetes.io/projected/f86294ab-29ff-42ff-a1f8-e362eec88b79-kube-api-access-xnz5j\") pod \"f86294ab-29ff-42ff-a1f8-e362eec88b79\" (UID: \"f86294ab-29ff-42ff-a1f8-e362eec88b79\") " Jan 31 10:33:24 crc kubenswrapper[4830]: I0131 10:33:24.115079 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f86294ab-29ff-42ff-a1f8-e362eec88b79-host\") pod \"f86294ab-29ff-42ff-a1f8-e362eec88b79\" (UID: \"f86294ab-29ff-42ff-a1f8-e362eec88b79\") " Jan 31 10:33:24 crc kubenswrapper[4830]: I0131 10:33:24.115230 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f86294ab-29ff-42ff-a1f8-e362eec88b79-host" (OuterVolumeSpecName: "host") pod "f86294ab-29ff-42ff-a1f8-e362eec88b79" (UID: "f86294ab-29ff-42ff-a1f8-e362eec88b79"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 10:33:24 crc kubenswrapper[4830]: I0131 10:33:24.115709 4830 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f86294ab-29ff-42ff-a1f8-e362eec88b79-host\") on node \"crc\" DevicePath \"\"" Jan 31 10:33:24 crc kubenswrapper[4830]: I0131 10:33:24.122662 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f86294ab-29ff-42ff-a1f8-e362eec88b79-kube-api-access-xnz5j" (OuterVolumeSpecName: "kube-api-access-xnz5j") pod "f86294ab-29ff-42ff-a1f8-e362eec88b79" (UID: "f86294ab-29ff-42ff-a1f8-e362eec88b79"). InnerVolumeSpecName "kube-api-access-xnz5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 10:33:24 crc kubenswrapper[4830]: I0131 10:33:24.216852 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnz5j\" (UniqueName: \"kubernetes.io/projected/f86294ab-29ff-42ff-a1f8-e362eec88b79-kube-api-access-xnz5j\") on node \"crc\" DevicePath \"\"" Jan 31 10:33:24 crc kubenswrapper[4830]: I0131 10:33:24.273008 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f86294ab-29ff-42ff-a1f8-e362eec88b79" path="/var/lib/kubelet/pods/f86294ab-29ff-42ff-a1f8-e362eec88b79/volumes" Jan 31 10:33:24 crc kubenswrapper[4830]: I0131 10:33:24.798701 4830 scope.go:117] "RemoveContainer" containerID="7e1842dfc6a2d4ed042b69ad0b31cd48806e08c2f18ebe68a31631015b63da42" Jan 31 10:33:24 crc kubenswrapper[4830]: I0131 10:33:24.798778 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7b67z/crc-debug-fwd2l" Jan 31 10:33:25 crc kubenswrapper[4830]: I0131 10:33:25.132301 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7b67z/crc-debug-mzmt2"] Jan 31 10:33:25 crc kubenswrapper[4830]: E0131 10:33:25.132882 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f86294ab-29ff-42ff-a1f8-e362eec88b79" containerName="container-00" Jan 31 10:33:25 crc kubenswrapper[4830]: I0131 10:33:25.132897 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f86294ab-29ff-42ff-a1f8-e362eec88b79" containerName="container-00" Jan 31 10:33:25 crc kubenswrapper[4830]: I0131 10:33:25.133095 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f86294ab-29ff-42ff-a1f8-e362eec88b79" containerName="container-00" Jan 31 10:33:25 crc kubenswrapper[4830]: I0131 10:33:25.134044 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7b67z/crc-debug-mzmt2" Jan 31 10:33:25 crc kubenswrapper[4830]: I0131 10:33:25.135884 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ef78c6cc-b893-40f1-a0f2-6a8032bd318d-host\") pod \"crc-debug-mzmt2\" (UID: \"ef78c6cc-b893-40f1-a0f2-6a8032bd318d\") " pod="openshift-must-gather-7b67z/crc-debug-mzmt2" Jan 31 10:33:25 crc kubenswrapper[4830]: I0131 10:33:25.136095 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcmm8\" (UniqueName: \"kubernetes.io/projected/ef78c6cc-b893-40f1-a0f2-6a8032bd318d-kube-api-access-rcmm8\") pod \"crc-debug-mzmt2\" (UID: \"ef78c6cc-b893-40f1-a0f2-6a8032bd318d\") " pod="openshift-must-gather-7b67z/crc-debug-mzmt2" Jan 31 10:33:25 crc kubenswrapper[4830]: I0131 10:33:25.239240 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ef78c6cc-b893-40f1-a0f2-6a8032bd318d-host\") pod \"crc-debug-mzmt2\" (UID: \"ef78c6cc-b893-40f1-a0f2-6a8032bd318d\") " pod="openshift-must-gather-7b67z/crc-debug-mzmt2" Jan 31 10:33:25 crc kubenswrapper[4830]: I0131 10:33:25.239417 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ef78c6cc-b893-40f1-a0f2-6a8032bd318d-host\") pod \"crc-debug-mzmt2\" (UID: \"ef78c6cc-b893-40f1-a0f2-6a8032bd318d\") " pod="openshift-must-gather-7b67z/crc-debug-mzmt2" Jan 31 10:33:25 crc kubenswrapper[4830]: I0131 10:33:25.239866 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcmm8\" (UniqueName: \"kubernetes.io/projected/ef78c6cc-b893-40f1-a0f2-6a8032bd318d-kube-api-access-rcmm8\") pod \"crc-debug-mzmt2\" (UID: \"ef78c6cc-b893-40f1-a0f2-6a8032bd318d\") " pod="openshift-must-gather-7b67z/crc-debug-mzmt2" Jan 31 10:33:25 crc kubenswrapper[4830]: I0131 10:33:25.264801 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcmm8\" (UniqueName: \"kubernetes.io/projected/ef78c6cc-b893-40f1-a0f2-6a8032bd318d-kube-api-access-rcmm8\") pod \"crc-debug-mzmt2\" (UID: \"ef78c6cc-b893-40f1-a0f2-6a8032bd318d\") " pod="openshift-must-gather-7b67z/crc-debug-mzmt2" Jan 31 10:33:25 crc kubenswrapper[4830]: I0131 10:33:25.455807 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7b67z/crc-debug-mzmt2" Jan 31 10:33:25 crc kubenswrapper[4830]: I0131 10:33:25.811527 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7b67z/crc-debug-mzmt2" event={"ID":"ef78c6cc-b893-40f1-a0f2-6a8032bd318d","Type":"ContainerStarted","Data":"96e30d614eb19449b5821d11d6d37eed7b9e9b36a5cf2520d91971e2b3f28eef"} Jan 31 10:33:26 crc kubenswrapper[4830]: I0131 10:33:26.831955 4830 generic.go:334] "Generic (PLEG): container finished" podID="ef78c6cc-b893-40f1-a0f2-6a8032bd318d" containerID="70a7a96c27f8e9fad1bcf066c2c9753651bd1095721a3c2ec6a5e29c803cfb89" exitCode=0 Jan 31 10:33:26 crc kubenswrapper[4830]: I0131 10:33:26.832007 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7b67z/crc-debug-mzmt2" event={"ID":"ef78c6cc-b893-40f1-a0f2-6a8032bd318d","Type":"ContainerDied","Data":"70a7a96c27f8e9fad1bcf066c2c9753651bd1095721a3c2ec6a5e29c803cfb89"} Jan 31 10:33:26 crc kubenswrapper[4830]: I0131 10:33:26.878431 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7b67z/crc-debug-mzmt2"] Jan 31 10:33:26 crc kubenswrapper[4830]: I0131 10:33:26.893353 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7b67z/crc-debug-mzmt2"] Jan 31 10:33:28 crc kubenswrapper[4830]: I0131 10:33:28.010327 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7b67z/crc-debug-mzmt2" Jan 31 10:33:28 crc kubenswrapper[4830]: I0131 10:33:28.122892 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcmm8\" (UniqueName: \"kubernetes.io/projected/ef78c6cc-b893-40f1-a0f2-6a8032bd318d-kube-api-access-rcmm8\") pod \"ef78c6cc-b893-40f1-a0f2-6a8032bd318d\" (UID: \"ef78c6cc-b893-40f1-a0f2-6a8032bd318d\") " Jan 31 10:33:28 crc kubenswrapper[4830]: I0131 10:33:28.122938 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ef78c6cc-b893-40f1-a0f2-6a8032bd318d-host\") pod \"ef78c6cc-b893-40f1-a0f2-6a8032bd318d\" (UID: \"ef78c6cc-b893-40f1-a0f2-6a8032bd318d\") " Jan 31 10:33:28 crc kubenswrapper[4830]: I0131 10:33:28.123082 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef78c6cc-b893-40f1-a0f2-6a8032bd318d-host" (OuterVolumeSpecName: "host") pod "ef78c6cc-b893-40f1-a0f2-6a8032bd318d" (UID: "ef78c6cc-b893-40f1-a0f2-6a8032bd318d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 10:33:28 crc kubenswrapper[4830]: I0131 10:33:28.123990 4830 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ef78c6cc-b893-40f1-a0f2-6a8032bd318d-host\") on node \"crc\" DevicePath \"\"" Jan 31 10:33:28 crc kubenswrapper[4830]: I0131 10:33:28.129694 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef78c6cc-b893-40f1-a0f2-6a8032bd318d-kube-api-access-rcmm8" (OuterVolumeSpecName: "kube-api-access-rcmm8") pod "ef78c6cc-b893-40f1-a0f2-6a8032bd318d" (UID: "ef78c6cc-b893-40f1-a0f2-6a8032bd318d"). InnerVolumeSpecName "kube-api-access-rcmm8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 10:33:28 crc kubenswrapper[4830]: I0131 10:33:28.227183 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcmm8\" (UniqueName: \"kubernetes.io/projected/ef78c6cc-b893-40f1-a0f2-6a8032bd318d-kube-api-access-rcmm8\") on node \"crc\" DevicePath \"\"" Jan 31 10:33:28 crc kubenswrapper[4830]: I0131 10:33:28.264415 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef78c6cc-b893-40f1-a0f2-6a8032bd318d" path="/var/lib/kubelet/pods/ef78c6cc-b893-40f1-a0f2-6a8032bd318d/volumes" Jan 31 10:33:28 crc kubenswrapper[4830]: I0131 10:33:28.870801 4830 scope.go:117] "RemoveContainer" containerID="70a7a96c27f8e9fad1bcf066c2c9753651bd1095721a3c2ec6a5e29c803cfb89" Jan 31 10:33:28 crc kubenswrapper[4830]: I0131 10:33:28.870898 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7b67z/crc-debug-mzmt2" Jan 31 10:33:56 crc kubenswrapper[4830]: I0131 10:33:56.767538 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_11142d9d-4725-4a33-b10e-8fc21e30c6a3/aodh-api/0.log" Jan 31 10:33:56 crc kubenswrapper[4830]: I0131 10:33:56.984166 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_11142d9d-4725-4a33-b10e-8fc21e30c6a3/aodh-notifier/0.log" Jan 31 10:33:57 crc kubenswrapper[4830]: I0131 10:33:57.031158 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_11142d9d-4725-4a33-b10e-8fc21e30c6a3/aodh-listener/0.log" Jan 31 10:33:57 crc kubenswrapper[4830]: I0131 10:33:57.048374 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_11142d9d-4725-4a33-b10e-8fc21e30c6a3/aodh-evaluator/0.log" Jan 31 10:33:57 crc kubenswrapper[4830]: I0131 10:33:57.210901 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-bd45896b-5lsfl_68bf6013-de5a-401f-868a-79325ed5ab24/barbican-api/0.log" Jan 31 10:33:57 crc kubenswrapper[4830]: I0131 10:33:57.260103 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-bd45896b-5lsfl_68bf6013-de5a-401f-868a-79325ed5ab24/barbican-api-log/0.log" Jan 31 10:33:57 crc kubenswrapper[4830]: I0131 10:33:57.312397 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-845499c66-m62t7_4d9d32f8-4cbd-41db-b7a8-041cdbb90b29/barbican-keystone-listener/0.log" Jan 31 10:33:57 crc kubenswrapper[4830]: I0131 10:33:57.503625 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-845499c66-m62t7_4d9d32f8-4cbd-41db-b7a8-041cdbb90b29/barbican-keystone-listener-log/0.log" Jan 31 10:33:57 crc kubenswrapper[4830]: I0131 10:33:57.553677 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-566d86fcf5-mxs88_6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b/barbican-worker-log/0.log" Jan 31 10:33:57 crc kubenswrapper[4830]: I0131 10:33:57.582762 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-566d86fcf5-mxs88_6c1a8d52-a1f9-4faf-bceb-fdf75da19a4b/barbican-worker/0.log" Jan 31 10:33:57 crc kubenswrapper[4830]: I0131 10:33:57.753561 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-vs6t2_45dd1e1a-bac5-460f-9c7e-df3f8e11aa52/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 10:33:57 crc kubenswrapper[4830]: I0131 10:33:57.888145 4830 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ceilometer-0_f2ea7efa-c50b-4208-a9df-2c3fc454762b/ceilometer-central-agent/1.log" Jan 31 10:33:58 crc kubenswrapper[4830]: I0131 10:33:58.020377 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f2ea7efa-c50b-4208-a9df-2c3fc454762b/ceilometer-notification-agent/0.log" Jan 31 10:33:58 crc kubenswrapper[4830]: I0131 10:33:58.097529 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f2ea7efa-c50b-4208-a9df-2c3fc454762b/proxy-httpd/0.log" Jan 31 10:33:58 crc kubenswrapper[4830]: I0131 10:33:58.165511 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f2ea7efa-c50b-4208-a9df-2c3fc454762b/ceilometer-central-agent/0.log" Jan 31 10:33:58 crc kubenswrapper[4830]: I0131 10:33:58.172101 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f2ea7efa-c50b-4208-a9df-2c3fc454762b/sg-core/0.log" Jan 31 10:33:58 crc kubenswrapper[4830]: I0131 10:33:58.386340 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_945c030b-2a43-431b-b898-d3a28b4e3821/cinder-api-log/0.log" Jan 31 10:33:58 crc kubenswrapper[4830]: I0131 10:33:58.398452 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_945c030b-2a43-431b-b898-d3a28b4e3821/cinder-api/0.log" Jan 31 10:33:58 crc kubenswrapper[4830]: I0131 10:33:58.568895 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c45f6608-4c27-4322-b60a-3362294e1ab8/cinder-scheduler/1.log" Jan 31 10:33:58 crc kubenswrapper[4830]: I0131 10:33:58.623656 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c45f6608-4c27-4322-b60a-3362294e1ab8/cinder-scheduler/0.log" Jan 31 10:33:58 crc kubenswrapper[4830]: I0131 10:33:58.658809 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c45f6608-4c27-4322-b60a-3362294e1ab8/probe/0.log" Jan 31 10:33:58 crc kubenswrapper[4830]: I0131 10:33:58.782106 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-4wcht_dbfcb990-512f-4840-b83b-32279cec5a26/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 10:33:58 crc kubenswrapper[4830]: I0131 10:33:58.940118 4830 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podef78c6cc-b893-40f1-a0f2-6a8032bd318d"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podef78c6cc-b893-40f1-a0f2-6a8032bd318d] : Timed out while waiting for systemd to remove kubepods-besteffort-podef78c6cc_b893_40f1_a0f2_6a8032bd318d.slice" Jan 31 10:33:58 crc kubenswrapper[4830]: I0131 10:33:58.976067 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-p5bb9_93ba1174-bbf6-485c-bd6a-5f44b9f96116/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 10:33:59 crc kubenswrapper[4830]: I0131 10:33:59.022804 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-bb85b8995-fsxj6_009bab2e-2d97-42e2-aa01-2a5e9d4c74c2/init/0.log" Jan 31 10:33:59 crc kubenswrapper[4830]: I0131 10:33:59.193046 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-bb85b8995-fsxj6_009bab2e-2d97-42e2-aa01-2a5e9d4c74c2/init/0.log" Jan 31 10:33:59 crc kubenswrapper[4830]: I0131 10:33:59.264345 4830 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-bb85b8995-fsxj6_009bab2e-2d97-42e2-aa01-2a5e9d4c74c2/dnsmasq-dns/0.log" Jan 31 10:33:59 crc kubenswrapper[4830]: I0131 10:33:59.333772 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-9jg6m_88f0db8e-690d-4b60-8eb5-473a1ab51029/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 10:33:59 crc kubenswrapper[4830]: I0131 10:33:59.498680 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_0e63470a-95b6-4653-b917-ed1f8ff66466/glance-log/0.log" Jan 31 10:33:59 crc kubenswrapper[4830]: I0131 10:33:59.559026 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_0e63470a-95b6-4653-b917-ed1f8ff66466/glance-httpd/0.log" Jan 31 10:33:59 crc kubenswrapper[4830]: I0131 10:33:59.659281 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_65303534-fa3e-4008-9ea1-95cd77e752c9/glance-httpd/0.log" Jan 31 10:33:59 crc kubenswrapper[4830]: I0131 10:33:59.796458 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_65303534-fa3e-4008-9ea1-95cd77e752c9/glance-log/0.log" Jan 31 10:34:00 crc kubenswrapper[4830]: I0131 10:34:00.365882 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-88757d59b-r55jf_3d4efcc1-d98d-466c-a7ee-6a6aa3766681/heat-engine/0.log" Jan 31 10:34:00 crc kubenswrapper[4830]: I0131 10:34:00.474989 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-5677f68f94-9mmb8_99dbef57-35a0-4840-a293-fefe87379a4b/heat-api/0.log" Jan 31 10:34:00 crc kubenswrapper[4830]: I0131 10:34:00.735059 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-7qkc2_dbdc4551-3d56-4feb-b897-89d6d0367388/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 10:34:01 crc kubenswrapper[4830]: I0131 10:34:01.081895 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29497561-mfqc8_f472dd66-301e-4ce7-8279-6cec24c432c7/keystone-cron/0.log" Jan 31 10:34:01 crc kubenswrapper[4830]: I0131 10:34:01.093803 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-vxplq_52cc6156-9fe9-433a-a363-8aa0197a9bac/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 10:34:01 crc kubenswrapper[4830]: I0131 10:34:01.105964 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-546fb56cb7-54z2g_bcd98bf8-a064-4c62-9847-37dd7939889b/heat-cfnapi/0.log" Jan 31 10:34:01 crc kubenswrapper[4830]: I0131 10:34:01.328261 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_adf0d571-b5dc-4d7c-9e8d-8813354a5128/kube-state-metrics/1.log" Jan 31 10:34:01 crc kubenswrapper[4830]: I0131 10:34:01.357174 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_adf0d571-b5dc-4d7c-9e8d-8813354a5128/kube-state-metrics/0.log" Jan 31 10:34:01 crc kubenswrapper[4830]: I0131 10:34:01.687496 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-57d8f8c487-sqqph_230488d2-6bec-4165-8ff4-4854cc6d53f6/keystone-api/0.log" Jan 31 10:34:02 crc kubenswrapper[4830]: I0131 10:34:02.070358 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-94xgt_464743a2-b75e-49de-9628-6c12d7c7f8b7/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 10:34:02 crc kubenswrapper[4830]: I0131 10:34:02.091595 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-5c2kd_4edbd94d-6175-4ec1-831f-d68d8e272bd9/logging-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 10:34:02 crc kubenswrapper[4830]: I0131 10:34:02.342513 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_5f08189f-4613-4e22-b135-ef80b5bad065/mysqld-exporter/0.log" Jan 31 10:34:02 crc kubenswrapper[4830]: I0131 10:34:02.618787 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-cc7d8b455-4zmj7_d1262ef4-ec58-4db3-a66e-be826421d514/neutron-api/0.log" Jan 31 10:34:02 crc kubenswrapper[4830]: I0131 10:34:02.671289 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-bc986_41d8850d-86d0-4b11-ac11-7738b2359233/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 10:34:02 crc kubenswrapper[4830]: I0131 10:34:02.673066 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-cc7d8b455-4zmj7_d1262ef4-ec58-4db3-a66e-be826421d514/neutron-httpd/0.log" Jan 31 10:34:03 crc kubenswrapper[4830]: I0131 10:34:03.509686 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_b7078937-4ecb-4aab-afd1-e60252550def/nova-cell0-conductor-conductor/0.log" Jan 31 10:34:03 crc kubenswrapper[4830]: I0131 10:34:03.601813 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9b60a826-4072-45c8-91c8-469a728a68ae/nova-api-log/0.log" Jan 31 10:34:03 crc kubenswrapper[4830]: I0131 10:34:03.661161 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_e3543533-b215-4345-b520-286551717692/nova-cell1-conductor-conductor/0.log" Jan 31 10:34:03 crc kubenswrapper[4830]: I0131 10:34:03.985155 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9b60a826-4072-45c8-91c8-469a728a68ae/nova-api-api/0.log" Jan 31 10:34:04 crc kubenswrapper[4830]: I0131 10:34:04.563247 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-xpd8k_8081b2b1-7847-4223-a583-0f0251f2ef52/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 10:34:04 crc kubenswrapper[4830]: I0131 10:34:04.641187 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_d1290216-3656-4402-94a5-44d1fde53083/nova-cell1-novncproxy-novncproxy/0.log" Jan 31 10:34:05 crc kubenswrapper[4830]: I0131 10:34:05.019894 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_efaf79e1-d68e-4987-a73f-42a782fb9f6a/nova-metadata-log/0.log" Jan 31 10:34:05 crc kubenswrapper[4830]: I0131 10:34:05.295809 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_65751981-c5c6-41a5-bf04-3ff6bee55188/nova-scheduler-scheduler/0.log" Jan 31 10:34:05 crc kubenswrapper[4830]: I0131 10:34:05.296511 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f37f41b4-3b56-45f9-a368-0f772bcf3002/mysql-bootstrap/0.log" Jan 31 10:34:05 crc kubenswrapper[4830]: I0131 10:34:05.486931 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_f37f41b4-3b56-45f9-a368-0f772bcf3002/mysql-bootstrap/0.log" Jan 31 10:34:05 crc kubenswrapper[4830]: I0131 10:34:05.533012 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f37f41b4-3b56-45f9-a368-0f772bcf3002/galera/0.log" Jan 31 10:34:05 crc kubenswrapper[4830]: I0131 10:34:05.547520 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f37f41b4-3b56-45f9-a368-0f772bcf3002/galera/1.log" Jan 31 10:34:05 crc kubenswrapper[4830]: I0131 10:34:05.764292 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2ca5d2f1-673e-4173-848a-8d32d33b8bcc/mysql-bootstrap/0.log" Jan 31 10:34:05 crc kubenswrapper[4830]: I0131 10:34:05.964266 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2ca5d2f1-673e-4173-848a-8d32d33b8bcc/mysql-bootstrap/0.log" Jan 31 10:34:06 crc kubenswrapper[4830]: I0131 10:34:06.016960 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2ca5d2f1-673e-4173-848a-8d32d33b8bcc/galera/0.log" Jan 31 10:34:06 crc kubenswrapper[4830]: I0131 10:34:06.080269 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2ca5d2f1-673e-4173-848a-8d32d33b8bcc/galera/1.log" Jan 31 10:34:06 crc kubenswrapper[4830]: I0131 10:34:06.238579 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_4ed170d0-8e88-40c3-a2b4-9908fc87a3db/openstackclient/0.log" Jan 31 10:34:06 crc kubenswrapper[4830]: I0131 10:34:06.429650 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-dqnl9_c7f2be11-cbc3-426b-8d36-55d2bec20af6/openstack-network-exporter/0.log" Jan 31 10:34:06 crc kubenswrapper[4830]: I0131 10:34:06.619125 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gk8dv_e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1/ovsdb-server-init/0.log" Jan 31 10:34:06 crc kubenswrapper[4830]: I0131 10:34:06.817300 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gk8dv_e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1/ovs-vswitchd/0.log" Jan 31 10:34:06 crc kubenswrapper[4830]: I0131 10:34:06.842986 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gk8dv_e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1/ovsdb-server/0.log" Jan 31 10:34:06 crc kubenswrapper[4830]: I0131 10:34:06.847157 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_efaf79e1-d68e-4987-a73f-42a782fb9f6a/nova-metadata-metadata/0.log" Jan 31 10:34:06 crc kubenswrapper[4830]: I0131 10:34:06.890839 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gk8dv_e3aa8ed2-c434-4c0c-9a0e-8fe7ce3cc5d1/ovsdb-server-init/0.log" Jan 31 10:34:07 crc kubenswrapper[4830]: I0131 10:34:07.446952 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ps27t_dcebcaa8-8cb1-4a94-a15a-0ef39c9bee73/ovn-controller/0.log" Jan 31 10:34:07 crc kubenswrapper[4830]: I0131 10:34:07.524843 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-tvl2h_3b46573d-c2d5-4fe7-9bef-4a5718c0ffe1/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 10:34:07 crc kubenswrapper[4830]: I0131 10:34:07.606185 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-northd-0_26868249-8749-44ba-9f03-e4691815285d/openstack-network-exporter/0.log" Jan 31 10:34:07 crc kubenswrapper[4830]: I0131 10:34:07.694877 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_26868249-8749-44ba-9f03-e4691815285d/ovn-northd/0.log" Jan 31 10:34:07 crc kubenswrapper[4830]: I0131 10:34:07.846420 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6f46adde-a4fc-42fc-aa3b-de8154dbc99c/openstack-network-exporter/0.log" Jan 31 10:34:07 crc kubenswrapper[4830]: I0131 10:34:07.903479 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6f46adde-a4fc-42fc-aa3b-de8154dbc99c/ovsdbserver-nb/0.log" Jan 31 10:34:08 crc kubenswrapper[4830]: I0131 10:34:08.164144 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e47f665d-2a2a-464a-b6a3-e255f1440eda/openstack-network-exporter/0.log" Jan 31 10:34:08 crc kubenswrapper[4830]: I0131 10:34:08.171737 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e47f665d-2a2a-464a-b6a3-e255f1440eda/ovsdbserver-sb/0.log" Jan 31 10:34:08 crc kubenswrapper[4830]: I0131 10:34:08.369590 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-99b5d6b8d-v6s9l_75d4710e-57ca-46dd-921f-3c215c3ee94c/placement-api/0.log" Jan 31 10:34:08 crc kubenswrapper[4830]: I0131 10:34:08.568278 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_7b3b4d1e-8963-469f-abe7-204392275c48/init-config-reloader/0.log" Jan 31 10:34:08 crc kubenswrapper[4830]: I0131 10:34:08.682083 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-99b5d6b8d-v6s9l_75d4710e-57ca-46dd-921f-3c215c3ee94c/placement-log/0.log" Jan 31 10:34:08 crc kubenswrapper[4830]: I0131 10:34:08.814950 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_7b3b4d1e-8963-469f-abe7-204392275c48/init-config-reloader/0.log" Jan 31 10:34:08 crc kubenswrapper[4830]: I0131 10:34:08.842915 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_7b3b4d1e-8963-469f-abe7-204392275c48/prometheus/0.log" Jan 31 10:34:08 crc kubenswrapper[4830]: I0131 10:34:08.876321 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_7b3b4d1e-8963-469f-abe7-204392275c48/config-reloader/0.log" Jan 31 10:34:08 crc kubenswrapper[4830]: I0131 10:34:08.953500 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_7b3b4d1e-8963-469f-abe7-204392275c48/thanos-sidecar/0.log" Jan 31 10:34:09 crc kubenswrapper[4830]: I0131 10:34:09.076554 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a5a14eb0-7ed3-44fd-a1e2-f8d582a70062/setup-container/0.log" Jan 31 10:34:09 crc kubenswrapper[4830]: I0131 10:34:09.414236 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a5a14eb0-7ed3-44fd-a1e2-f8d582a70062/setup-container/0.log" Jan 31 10:34:09 crc kubenswrapper[4830]: I0131 10:34:09.482249 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_759f3f02-a9de-4e01-97f9-a97424c592a6/setup-container/0.log" Jan 31 10:34:09 crc kubenswrapper[4830]: I0131 10:34:09.500527 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a5a14eb0-7ed3-44fd-a1e2-f8d582a70062/rabbitmq/0.log" Jan 31 10:34:09 crc kubenswrapper[4830]: I0131 10:34:09.824000 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_f60eed79-badf-4909-869b-edbfdfb774ac/setup-container/0.log" Jan 31 10:34:09 crc kubenswrapper[4830]: I0131 10:34:09.837261 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_759f3f02-a9de-4e01-97f9-a97424c592a6/setup-container/0.log" Jan 31 10:34:09 crc kubenswrapper[4830]: I0131 10:34:09.885666 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_759f3f02-a9de-4e01-97f9-a97424c592a6/rabbitmq/0.log" Jan 31 10:34:10 crc kubenswrapper[4830]: I0131 10:34:10.103796 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_f60eed79-badf-4909-869b-edbfdfb774ac/setup-container/0.log" Jan 31 10:34:10 crc kubenswrapper[4830]: I0131 10:34:10.230474 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_f60eed79-badf-4909-869b-edbfdfb774ac/rabbitmq/0.log" Jan 31 10:34:10 crc kubenswrapper[4830]: I0131 10:34:10.282713 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_8e40a106-74cd-45ea-a936-c34daaf9ce6e/setup-container/0.log" Jan 31 10:34:10 crc kubenswrapper[4830]: I0131 10:34:10.699807 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_8e40a106-74cd-45ea-a936-c34daaf9ce6e/setup-container/0.log" Jan 31 10:34:10 crc kubenswrapper[4830]: I0131 10:34:10.782348 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-5wfdc_ded31260-653f-4e1c-8840-c06cfa56a070/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 10:34:10 crc kubenswrapper[4830]: I0131 10:34:10.870750 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_8e40a106-74cd-45ea-a936-c34daaf9ce6e/rabbitmq/0.log" Jan 31 10:34:11 crc kubenswrapper[4830]: I0131 10:34:11.011295 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-z7ztp_795ae09a-4f64-42d2-ad54-45bf5b5f8954/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 10:34:11 crc kubenswrapper[4830]: I0131 10:34:11.236431 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-n4lwf_b8c10133-0080-4638-a514-b1d8c87873e4/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 10:34:11 crc kubenswrapper[4830]: I0131 10:34:11.511924 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-s9kdh_85767787-3aed-4aaf-a30b-a02b9aebadf7/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 10:34:11 crc kubenswrapper[4830]: I0131 10:34:11.599608 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-976m8_80af0309-f30b-4a92-9457-0f9c982807c0/ssh-known-hosts-edpm-deployment/0.log" Jan 31 10:34:12 crc kubenswrapper[4830]: I0131 10:34:12.362833 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-f44b7d679-6khcx_f99258ad-5714-491f-bdad-d7196ed9833a/proxy-server/0.log" Jan 31 10:34:12 crc kubenswrapper[4830]: I0131 10:34:12.568783 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-proxy-f44b7d679-6khcx_f99258ad-5714-491f-bdad-d7196ed9833a/proxy-httpd/0.log" Jan 31 10:34:12 crc kubenswrapper[4830]: I0131 10:34:12.651243 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1023f27a-9c1d-4818-a3f5-94946296ae46/account-auditor/0.log" Jan 31 10:34:12 crc kubenswrapper[4830]: I0131 10:34:12.677200 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-4qmzq_c888f2ed-bb7b-4ee1-a17d-2b656f9464b6/swift-ring-rebalance/0.log" Jan 31 10:34:13 crc kubenswrapper[4830]: I0131 10:34:13.215903 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1023f27a-9c1d-4818-a3f5-94946296ae46/account-reaper/0.log" Jan 31 10:34:13 crc kubenswrapper[4830]: I0131 10:34:13.217795 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1023f27a-9c1d-4818-a3f5-94946296ae46/account-server/0.log" Jan 31 10:34:13 crc kubenswrapper[4830]: I0131 10:34:13.271511 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1023f27a-9c1d-4818-a3f5-94946296ae46/account-replicator/0.log" Jan 31 10:34:13 crc kubenswrapper[4830]: I0131 10:34:13.297910 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1023f27a-9c1d-4818-a3f5-94946296ae46/container-auditor/0.log" Jan 31 10:34:13 crc kubenswrapper[4830]: I0131 10:34:13.508016 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1023f27a-9c1d-4818-a3f5-94946296ae46/container-server/0.log" Jan 31 10:34:13 crc kubenswrapper[4830]: I0131 10:34:13.525630 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1023f27a-9c1d-4818-a3f5-94946296ae46/container-replicator/0.log" Jan 31 10:34:13 crc kubenswrapper[4830]: I0131 10:34:13.571298 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1023f27a-9c1d-4818-a3f5-94946296ae46/container-updater/0.log" Jan 31 10:34:13 crc kubenswrapper[4830]: I0131 10:34:13.578373 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1023f27a-9c1d-4818-a3f5-94946296ae46/object-auditor/0.log" Jan 31 10:34:14 crc kubenswrapper[4830]: I0131 10:34:14.201292 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1023f27a-9c1d-4818-a3f5-94946296ae46/object-expirer/0.log" Jan 31 10:34:14 crc kubenswrapper[4830]: I0131 10:34:14.223152 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1023f27a-9c1d-4818-a3f5-94946296ae46/object-server/0.log" Jan 31 10:34:14 crc kubenswrapper[4830]: I0131 10:34:14.280451 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1023f27a-9c1d-4818-a3f5-94946296ae46/object-replicator/0.log" Jan 31 10:34:14 crc kubenswrapper[4830]: I0131 10:34:14.352537 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 10:34:14 crc kubenswrapper[4830]: I0131 10:34:14.353469 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 10:34:14 crc kubenswrapper[4830]: I0131 10:34:14.394712 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1023f27a-9c1d-4818-a3f5-94946296ae46/object-updater/0.log" Jan 31 10:34:14 crc kubenswrapper[4830]: I0131 10:34:14.500399 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1023f27a-9c1d-4818-a3f5-94946296ae46/rsync/0.log" Jan 31 10:34:14 crc kubenswrapper[4830]: I0131 10:34:14.502074 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1023f27a-9c1d-4818-a3f5-94946296ae46/swift-recon-cron/0.log" Jan 31 10:34:14 crc kubenswrapper[4830]: I0131 10:34:14.704822 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-v5kz8_501efae7-9326-4a6f-940a-32dc593da610/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 10:34:14 crc kubenswrapper[4830]: I0131 10:34:14.920540 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz4d_36db0fa7-717c-4785-942e-8c98a60f2350/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 10:34:15 crc kubenswrapper[4830]: I0131 10:34:15.145076 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_9a5cf76b-5737-425c-9add-4f45212ca5da/test-operator-logs-container/0.log" Jan 31 10:34:15 crc kubenswrapper[4830]: I0131 10:34:15.345690 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-b2dgl_402466f0-5362-40ba-830b-698e51883c01/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 10:34:15 crc kubenswrapper[4830]: I0131 10:34:15.494406 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_1fa42e50-1a05-499f-9396-a1e5dc1161f6/tempest-tests-tempest-tests-runner/0.log" Jan 31 10:34:28 crc kubenswrapper[4830]: I0131 10:34:28.678681 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_b3c26555-4046-499e-96c9-5a83b8322d8e/memcached/0.log" Jan 31 10:34:44 crc kubenswrapper[4830]: I0131 10:34:44.353675 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 10:34:44 crc kubenswrapper[4830]: I0131 10:34:44.354291 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 10:34:50 crc kubenswrapper[4830]: I0131 10:34:50.747878 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl_c3a69b5a-a2ea-4f45-aed3-524702c726d9/util/0.log" Jan 31 10:34:51 crc kubenswrapper[4830]: I0131 10:34:51.000046 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl_c3a69b5a-a2ea-4f45-aed3-524702c726d9/util/0.log" Jan 31 10:34:51 crc kubenswrapper[4830]: I0131 10:34:51.036898 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl_c3a69b5a-a2ea-4f45-aed3-524702c726d9/pull/0.log" Jan 31 10:34:51 crc kubenswrapper[4830]: I0131 10:34:51.042503 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl_c3a69b5a-a2ea-4f45-aed3-524702c726d9/pull/0.log" Jan 31 10:34:51 crc kubenswrapper[4830]: I0131 10:34:51.277833 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl_c3a69b5a-a2ea-4f45-aed3-524702c726d9/pull/0.log" Jan 31 10:34:51 crc kubenswrapper[4830]: I0131 10:34:51.279766 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl_c3a69b5a-a2ea-4f45-aed3-524702c726d9/util/0.log" Jan 31 10:34:51 crc kubenswrapper[4830]: I0131 10:34:51.309883 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9d2bca841cc06ee79e2fe7d96e1fa5d1c31cf855f577b319bd394a90a0g7mhl_c3a69b5a-a2ea-4f45-aed3-524702c726d9/extract/0.log" Jan 31 10:34:51 crc kubenswrapper[4830]: I0131 10:34:51.547650 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-cpwlp_47718a89-dc4c-4f5d-bb58-aec265aa68bf/manager/1.log" Jan 31 10:34:51 crc kubenswrapper[4830]: I0131 10:34:51.559492 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-kwwkw_1488b4ea-ba49-423e-a995-917dc9cbb9e2/manager/0.log" Jan 31 10:34:51 crc kubenswrapper[4830]: I0131 10:34:51.665072 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-cpwlp_47718a89-dc4c-4f5d-bb58-aec265aa68bf/manager/0.log" Jan 31 10:34:51 crc kubenswrapper[4830]: I0131 10:34:51.869282 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-d8xvw_3f5623d3-168a-4bca-9154-ecb4c81b5b3b/manager/1.log" Jan 31 10:34:51 crc kubenswrapper[4830]: I0131 10:34:51.880759 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-d8xvw_3f5623d3-168a-4bca-9154-ecb4c81b5b3b/manager/0.log" Jan 31 10:34:52 crc kubenswrapper[4830]: I0131 10:34:52.155376 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-hcpk8_17f5c61d-5997-482b-961a-0339cfe6c15c/manager/0.log" Jan 31 10:34:52 crc kubenswrapper[4830]: I0131 10:34:52.222529 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-8wnqw_dafe4db4-4a74-4cb2-8e7f-496cfa1a1c5e/manager/0.log" Jan 31 10:34:52 crc kubenswrapper[4830]: I0131 10:34:52.365323 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-d9xtg_4d28fd37-b97c-447a-9165-d90d11fd4698/manager/1.log" Jan 31 10:34:52 crc kubenswrapper[4830]: I0131 10:34:52.383108 4830 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-d9xtg_4d28fd37-b97c-447a-9165-d90d11fd4698/manager/0.log" Jan 31 10:34:52 crc kubenswrapper[4830]: I0131 10:34:52.838454 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-slc6p_bd972fba-0692-45af-b28c-db4929fe150a/manager/1.log" Jan 31 10:34:52 crc kubenswrapper[4830]: I0131 10:34:52.966174 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-slc6p_bd972fba-0692-45af-b28c-db4929fe150a/manager/0.log" Jan 31 10:34:53 crc kubenswrapper[4830]: I0131 10:34:53.009506 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-vvv24_0b519925-01de-4cf0-8ff8-0f97137dd3d9/manager/0.log" Jan 31 10:34:53 crc kubenswrapper[4830]: I0131 10:34:53.751532 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-kgrns_758269b2-16c6-4f5a-8f9f-875659eede84/manager/1.log" Jan 31 10:34:53 crc kubenswrapper[4830]: I0131 10:34:53.825243 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-kgrns_758269b2-16c6-4f5a-8f9f-875659eede84/manager/0.log" Jan 31 10:34:54 crc kubenswrapper[4830]: I0131 10:34:54.073219 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-4tqzd_1891b74f-fe71-4020-98a3-5796e2a67ea2/manager/1.log" Jan 31 10:34:54 crc kubenswrapper[4830]: I0131 10:34:54.106679 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-4tqzd_1891b74f-fe71-4020-98a3-5796e2a67ea2/manager/0.log" Jan 31 10:34:54 crc kubenswrapper[4830]: I0131 10:34:54.131780 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-sbhfn_0e056a0c-ee06-43aa-bf36-35f202f76b17/manager/0.log" Jan 31 10:34:54 crc kubenswrapper[4830]: I0131 10:34:54.461040 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-sjf7r_617226b5-2b2c-4f6c-902d-9784c8a283de/manager/0.log" Jan 31 10:34:54 crc kubenswrapper[4830]: I0131 10:34:54.501902 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-rkvx7_e681f66d-3695-4b59-9ef1-6f9bbf007ed2/manager/0.log" Jan 31 10:34:54 crc kubenswrapper[4830]: I0131 10:34:54.757312 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-ld2fb_f101dda8-ba4c-42c2-a8e3-9a5e53c2ec8a/manager/1.log" Jan 31 10:34:54 crc kubenswrapper[4830]: I0131 10:34:54.892781 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-ld2fb_f101dda8-ba4c-42c2-a8e3-9a5e53c2ec8a/manager/0.log" Jan 31 10:34:54 crc kubenswrapper[4830]: I0131 10:34:54.961046 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm_250c9f1b-d78c-488e-b28e-6c2b783edd9b/manager/1.log" Jan 31 10:34:54 crc kubenswrapper[4830]: I0131 10:34:54.983681 4830 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4drvxhm_250c9f1b-d78c-488e-b28e-6c2b783edd9b/manager/0.log" Jan 31 10:34:55 crc kubenswrapper[4830]: I0131 10:34:55.675779 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-54dc59fd95-sv8r9_2a183ae3-dc4b-4f75-a9ca-4832bd5faf06/operator/1.log" Jan 31 10:34:55 crc kubenswrapper[4830]: I0131 10:34:55.846634 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-54dc59fd95-sv8r9_2a183ae3-dc4b-4f75-a9ca-4832bd5faf06/operator/0.log" Jan 31 10:34:55 crc kubenswrapper[4830]: I0131 10:34:55.986933 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nc25d_b0b831b3-e535-4264-b46c-c93f7edd51d2/registry-server/1.log" Jan 31 10:34:56 crc kubenswrapper[4830]: I0131 10:34:56.037165 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nc25d_b0b831b3-e535-4264-b46c-c93f7edd51d2/registry-server/0.log" Jan 31 10:34:56 crc kubenswrapper[4830]: I0131 10:34:56.320438 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-2l42c_388d9bc4-698e-4dea-8029-aa32433cf734/manager/1.log" Jan 31 10:34:56 crc kubenswrapper[4830]: I0131 10:34:56.361580 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-gbjts_7ff06918-8b3c-48cb-bd11-1254b9bbc276/manager/0.log" Jan 31 10:34:56 crc kubenswrapper[4830]: I0131 10:34:56.532427 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-2l42c_388d9bc4-698e-4dea-8029-aa32433cf734/manager/0.log" Jan 31 10:34:56 crc kubenswrapper[4830]: I0131 10:34:56.641251 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-slhpt_abf5a919-4697-4468-b9e4-8a4617e3a5ca/operator/1.log" Jan 31 10:34:56 crc kubenswrapper[4830]: I0131 10:34:56.713072 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-slhpt_abf5a919-4697-4468-b9e4-8a4617e3a5ca/operator/0.log" Jan 31 10:34:56 crc kubenswrapper[4830]: I0131 10:34:56.856759 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-gktql_21448bf1-0318-4469-baff-d35cf905337b/manager/0.log" Jan 31 10:34:57 crc kubenswrapper[4830]: I0131 10:34:57.107336 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-czm79_68f255f0-5951-47f2-979e-af80607453e8/manager/1.log" Jan 31 10:34:57 crc kubenswrapper[4830]: I0131 10:34:57.426043 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-57fbdcd888-cp9fj_2365408f-7d7a-482c-87c0-0452fa330e4e/manager/0.log" Jan 31 10:34:57 crc kubenswrapper[4830]: I0131 10:34:57.492137 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-55f549db95-67sj5_ce245704-5b88-4544-ae21-bcb30ff5d0d0/manager/0.log" Jan 31 10:34:57 crc kubenswrapper[4830]: I0131 10:34:57.526188 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-czm79_68f255f0-5951-47f2-979e-af80607453e8/manager/0.log" Jan 31 10:34:57 crc kubenswrapper[4830]: I0131 10:34:57.558918 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-62c8t_d4a8ef63-6ba0-4bb4-93b5-dc9fc1134bb5/manager/0.log" Jan 31 10:35:14 crc kubenswrapper[4830]: I0131 10:35:14.353222 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 10:35:14 crc kubenswrapper[4830]: I0131 10:35:14.353775 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 10:35:14 crc kubenswrapper[4830]: I0131 10:35:14.353815 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 10:35:14 crc kubenswrapper[4830]: I0131 10:35:14.354464 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb"} pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 10:35:14 crc kubenswrapper[4830]: I0131 10:35:14.355136 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" containerID="cri-o://3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" gracePeriod=600 Jan 31 10:35:14 crc kubenswrapper[4830]: E0131 10:35:14.523443 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:35:15 crc kubenswrapper[4830]: I0131 10:35:15.107247 4830 generic.go:334] "Generic (PLEG): container finished" podID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" exitCode=0 Jan 31 10:35:15 crc kubenswrapper[4830]: I0131 10:35:15.107326 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerDied","Data":"3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb"} Jan 31 10:35:15 crc kubenswrapper[4830]: I0131 10:35:15.107840 4830 scope.go:117] "RemoveContainer" containerID="7966079b95ee8b0c6a0eeec05fdab8c0893a01751591c9e2a9fe770dbf810c5f" Jan 31 10:35:15 crc kubenswrapper[4830]: I0131 10:35:15.108745 4830 scope.go:117] "RemoveContainer" 
containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" Jan 31 10:35:15 crc kubenswrapper[4830]: E0131 10:35:15.109132 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:35:21 crc kubenswrapper[4830]: I0131 10:35:21.534412 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qtqdv_ce03ae75-703f-4d6a-b98a-e866689b08e3/control-plane-machine-set-operator/0.log" Jan 31 10:35:21 crc kubenswrapper[4830]: I0131 10:35:21.747833 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-8nn2k_bc99ac19-2796-495d-82d4-6eda76879f40/machine-api-operator/0.log" Jan 31 10:35:21 crc kubenswrapper[4830]: I0131 10:35:21.801608 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-8nn2k_bc99ac19-2796-495d-82d4-6eda76879f40/kube-rbac-proxy/0.log" Jan 31 10:35:26 crc kubenswrapper[4830]: I0131 10:35:26.277750 4830 scope.go:117] "RemoveContainer" containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" Jan 31 10:35:26 crc kubenswrapper[4830]: E0131 10:35:26.278755 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:35:36 crc kubenswrapper[4830]: I0131 10:35:36.419673 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-45w4k_25c15123-ed27-483d-8a40-7241f614a210/cert-manager-controller/0.log" Jan 31 10:35:36 crc kubenswrapper[4830]: I0131 10:35:36.585182 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-bqklj_c75f40f1-4c71-458a-906c-af1914c240de/cert-manager-cainjector/0.log" Jan 31 10:35:36 crc kubenswrapper[4830]: I0131 10:35:36.653132 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-22grv_eb0ab04d-4e0a-4a84-965a-2c0513d6d79a/cert-manager-webhook/0.log" Jan 31 10:35:41 crc kubenswrapper[4830]: I0131 10:35:41.252573 4830 scope.go:117] "RemoveContainer" containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" Jan 31 10:35:41 crc kubenswrapper[4830]: E0131 10:35:41.253744 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:35:52 crc kubenswrapper[4830]: I0131 10:35:52.740685 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-t4b58_0782dc69-7ca6-4a3c-898b-a928694c4810/nmstate-console-plugin/0.log" Jan 31 10:35:52 crc kubenswrapper[4830]: I0131 10:35:52.955568 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-9wzdf_09ac1675-c6eb-453a-83a5-94f0a04c9665/nmstate-handler/0.log" Jan 31 10:35:53 crc kubenswrapper[4830]: I0131 10:35:53.028045 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ppr4q_80b52808-7bda-4187-86e4-356413c4ff68/kube-rbac-proxy/0.log" Jan 31 10:35:53 crc kubenswrapper[4830]: I0131 10:35:53.082071 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ppr4q_80b52808-7bda-4187-86e4-356413c4ff68/nmstate-metrics/0.log" Jan 31 10:35:53 crc kubenswrapper[4830]: I0131 10:35:53.212054 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-bb7hz_2c310223-ad74-4147-9aac-1b60f4938062/nmstate-operator/0.log" Jan 31 10:35:53 crc kubenswrapper[4830]: I0131 10:35:53.305240 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-hw8mv_a580c5e1-30c2-40b1-993d-c375cc99e2f2/nmstate-webhook/0.log" Jan 31 10:35:54 crc kubenswrapper[4830]: I0131 10:35:54.251918 4830 scope.go:117] "RemoveContainer" containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" Jan 31 10:35:54 crc kubenswrapper[4830]: E0131 10:35:54.253461 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:36:06 crc kubenswrapper[4830]: I0131 10:36:06.262005 4830 scope.go:117] "RemoveContainer" containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" Jan 31 10:36:06 crc kubenswrapper[4830]: E0131 10:36:06.265134 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:36:06 crc kubenswrapper[4830]: I0131 10:36:06.376587 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-688c9bff97-t8jpp_ce3329e2-9eca-4a04-bf1d-0578e12beaa5/kube-rbac-proxy/0.log" Jan 31 10:36:06 crc kubenswrapper[4830]: I0131 10:36:06.405971 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-688c9bff97-t8jpp_ce3329e2-9eca-4a04-bf1d-0578e12beaa5/manager/1.log" Jan 31 10:36:06 crc kubenswrapper[4830]: I0131 10:36:06.595845 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-688c9bff97-t8jpp_ce3329e2-9eca-4a04-bf1d-0578e12beaa5/manager/0.log" Jan 31 10:36:20 crc kubenswrapper[4830]: I0131 10:36:20.256597 4830 scope.go:117] "RemoveContainer" 
containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" Jan 31 10:36:20 crc kubenswrapper[4830]: E0131 10:36:20.260057 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:36:21 crc kubenswrapper[4830]: I0131 10:36:21.481344 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-222kl_3addecb4-84c5-4b88-b751-b6a26db362be/prometheus-operator/0.log" Jan 31 10:36:21 crc kubenswrapper[4830]: I0131 10:36:21.484500 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5d644b584c-bnxt4_419c93d3-0d80-4fbf-91cd-c88303e038e5/prometheus-operator-admission-webhook/0.log" Jan 31 10:36:21 crc kubenswrapper[4830]: I0131 10:36:21.676012 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5d644b584c-jfsw8_2b9a494b-8847-4bf7-820e-2739aa96a464/prometheus-operator-admission-webhook/0.log" Jan 31 10:36:21 crc kubenswrapper[4830]: I0131 10:36:21.744447 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-l59nt_1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48/operator/1.log" Jan 31 10:36:21 crc kubenswrapper[4830]: I0131 10:36:21.918465 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-l59nt_1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48/operator/0.log" Jan 31 10:36:21 crc kubenswrapper[4830]: I0131 10:36:21.961094 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-swjf6_51e241ad-2d92-41fb-a218-1a14cd40534d/observability-ui-dashboards/0.log" Jan 31 10:36:22 crc kubenswrapper[4830]: I0131 10:36:22.096382 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-wtdqw_0af185f3-0cfa-4299-8eee-0e523d87504c/perses-operator/0.log" Jan 31 10:36:31 crc kubenswrapper[4830]: I0131 10:36:31.251529 4830 scope.go:117] "RemoveContainer" containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" Jan 31 10:36:31 crc kubenswrapper[4830]: E0131 10:36:31.252515 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:36:37 crc kubenswrapper[4830]: I0131 10:36:37.616607 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-79cf69ddc8-qdl6z_e293840d-c6e3-4d1d-a859-c656d68171fe/cluster-logging-operator/0.log" Jan 31 10:36:37 crc kubenswrapper[4830]: I0131 10:36:37.733241 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-mfmq7_eebf46b0-2ea9-47eb-963c-911a9f3e3f1b/collector/0.log" Jan 31 10:36:37 crc 
kubenswrapper[4830]: I0131 10:36:37.800662 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_70d5f51c-1a87-45fb-8822-7aa0997fceb1/loki-compactor/0.log" Jan 31 10:36:37 crc kubenswrapper[4830]: I0131 10:36:37.931358 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5f678c8dd6-vm6jc_e5b91203-480c-424e-877a-5f2f437d1ada/loki-distributor/0.log" Jan 31 10:36:38 crc kubenswrapper[4830]: I0131 10:36:38.042202 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-74c87577db-fjtpt_867e058e-8774-4ff8-af99-a8f35ac530ce/gateway/0.log" Jan 31 10:36:38 crc kubenswrapper[4830]: I0131 10:36:38.066484 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-74c87577db-fjtpt_867e058e-8774-4ff8-af99-a8f35ac530ce/opa/0.log" Jan 31 10:36:38 crc kubenswrapper[4830]: I0131 10:36:38.244792 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-74c87577db-hwvhd_fd432483-7467-4c9d-a13e-8ee908a8ed2b/gateway/0.log" Jan 31 10:36:38 crc kubenswrapper[4830]: I0131 10:36:38.245714 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-74c87577db-hwvhd_fd432483-7467-4c9d-a13e-8ee908a8ed2b/opa/0.log" Jan 31 10:36:38 crc kubenswrapper[4830]: I0131 10:36:38.357792 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_efadb8be-37d4-4e2b-9df2-3d1301ae81a8/loki-index-gateway/0.log" Jan 31 10:36:38 crc kubenswrapper[4830]: I0131 10:36:38.510115 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_07a77a4a-344b-45bb-8488-a536a94185b1/loki-ingester/0.log" Jan 31 10:36:38 crc kubenswrapper[4830]: I0131 10:36:38.554608 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76788598db-f89hf_8aa52b7a-444c-4f07-9c3a-c2223e966e34/loki-querier/0.log" Jan 31 10:36:38 crc kubenswrapper[4830]: I0131 10:36:38.716336 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-69d9546745-8k7rn_6a2f00bb-9954-46d0-901b-3d9a82939850/loki-query-frontend/0.log" Jan 31 10:36:46 crc kubenswrapper[4830]: I0131 10:36:46.261322 4830 scope.go:117] "RemoveContainer" containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" Jan 31 10:36:46 crc kubenswrapper[4830]: E0131 10:36:46.262326 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:36:55 crc kubenswrapper[4830]: I0131 10:36:55.126778 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lhbbn_2683cf74-2506-4496-b132-4c274291727b/kube-rbac-proxy/0.log" Jan 31 10:36:55 crc kubenswrapper[4830]: I0131 10:36:55.312323 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lhbbn_2683cf74-2506-4496-b132-4c274291727b/controller/0.log" Jan 31 10:36:55 crc kubenswrapper[4830]: I0131 10:36:55.383659 4830 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4v2n6_d0107b00-a78b-432b-afc6-a9ccc1b3bf5b/cp-frr-files/0.log" Jan 31 10:36:55 crc kubenswrapper[4830]: I0131 10:36:55.513866 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4v2n6_d0107b00-a78b-432b-afc6-a9ccc1b3bf5b/cp-frr-files/0.log" Jan 31 10:36:56 crc kubenswrapper[4830]: I0131 10:36:56.277368 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4v2n6_d0107b00-a78b-432b-afc6-a9ccc1b3bf5b/cp-reloader/0.log" Jan 31 10:36:56 crc kubenswrapper[4830]: I0131 10:36:56.301623 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4v2n6_d0107b00-a78b-432b-afc6-a9ccc1b3bf5b/cp-reloader/0.log" Jan 31 10:36:56 crc kubenswrapper[4830]: I0131 10:36:56.302917 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4v2n6_d0107b00-a78b-432b-afc6-a9ccc1b3bf5b/cp-metrics/0.log" Jan 31 10:36:56 crc kubenswrapper[4830]: I0131 10:36:56.455230 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4v2n6_d0107b00-a78b-432b-afc6-a9ccc1b3bf5b/cp-reloader/0.log" Jan 31 10:36:56 crc kubenswrapper[4830]: I0131 10:36:56.499568 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4v2n6_d0107b00-a78b-432b-afc6-a9ccc1b3bf5b/cp-frr-files/0.log" Jan 31 10:36:56 crc kubenswrapper[4830]: I0131 10:36:56.521950 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4v2n6_d0107b00-a78b-432b-afc6-a9ccc1b3bf5b/cp-metrics/0.log" Jan 31 10:36:56 crc kubenswrapper[4830]: I0131 10:36:56.549832 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4v2n6_d0107b00-a78b-432b-afc6-a9ccc1b3bf5b/cp-metrics/0.log" Jan 31 10:36:56 crc kubenswrapper[4830]: I0131 10:36:56.678069 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4v2n6_d0107b00-a78b-432b-afc6-a9ccc1b3bf5b/cp-reloader/0.log" Jan 31 10:36:56 crc kubenswrapper[4830]: I0131 10:36:56.678077 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4v2n6_d0107b00-a78b-432b-afc6-a9ccc1b3bf5b/cp-frr-files/0.log" Jan 31 10:36:56 crc kubenswrapper[4830]: I0131 10:36:56.717844 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4v2n6_d0107b00-a78b-432b-afc6-a9ccc1b3bf5b/cp-metrics/0.log" Jan 31 10:36:56 crc kubenswrapper[4830]: I0131 10:36:56.722292 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4v2n6_d0107b00-a78b-432b-afc6-a9ccc1b3bf5b/controller/1.log" Jan 31 10:36:56 crc kubenswrapper[4830]: I0131 10:36:56.873416 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4v2n6_d0107b00-a78b-432b-afc6-a9ccc1b3bf5b/controller/0.log" Jan 31 10:36:56 crc kubenswrapper[4830]: I0131 10:36:56.907867 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4v2n6_d0107b00-a78b-432b-afc6-a9ccc1b3bf5b/frr-metrics/0.log" Jan 31 10:36:57 crc kubenswrapper[4830]: I0131 10:36:57.036654 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4v2n6_d0107b00-a78b-432b-afc6-a9ccc1b3bf5b/frr/1.log" Jan 31 10:36:57 crc kubenswrapper[4830]: I0131 10:36:57.116921 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4v2n6_d0107b00-a78b-432b-afc6-a9ccc1b3bf5b/kube-rbac-proxy/0.log" Jan 31 10:36:57 crc 
kubenswrapper[4830]: I0131 10:36:57.137180 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4v2n6_d0107b00-a78b-432b-afc6-a9ccc1b3bf5b/kube-rbac-proxy-frr/0.log" Jan 31 10:36:57 crc kubenswrapper[4830]: I0131 10:36:57.298357 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4v2n6_d0107b00-a78b-432b-afc6-a9ccc1b3bf5b/reloader/0.log" Jan 31 10:36:57 crc kubenswrapper[4830]: I0131 10:36:57.405838 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-zwj92_3951c2f7-8a23-4d78-9a26-1b89399bdb4e/frr-k8s-webhook-server/1.log" Jan 31 10:36:57 crc kubenswrapper[4830]: I0131 10:36:57.566102 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-zwj92_3951c2f7-8a23-4d78-9a26-1b89399bdb4e/frr-k8s-webhook-server/0.log" Jan 31 10:36:57 crc kubenswrapper[4830]: I0131 10:36:57.711962 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-74fbb6df4-hrt7k_1145e85a-d436-40c8-baef-ceb53625e06b/manager/1.log" Jan 31 10:36:57 crc kubenswrapper[4830]: I0131 10:36:57.793893 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-74fbb6df4-hrt7k_1145e85a-d436-40c8-baef-ceb53625e06b/manager/0.log" Jan 31 10:36:58 crc kubenswrapper[4830]: I0131 10:36:58.110179 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-55459579-xtkmd_328e9260-46e9-41a9-a42c-891fe870a5d1/webhook-server/0.log" Jan 31 10:36:58 crc kubenswrapper[4830]: I0131 10:36:58.277500 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-x7g8x_1d713893-e8db-40ba-872c-e9d1650a56d0/kube-rbac-proxy/0.log" Jan 31 10:36:58 crc kubenswrapper[4830]: I0131 10:36:58.587319 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-x7g8x_1d713893-e8db-40ba-872c-e9d1650a56d0/speaker/1.log" Jan 31 10:36:58 crc kubenswrapper[4830]: I0131 10:36:58.917365 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4v2n6_d0107b00-a78b-432b-afc6-a9ccc1b3bf5b/frr/0.log" Jan 31 10:36:58 crc kubenswrapper[4830]: I0131 10:36:58.988647 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-x7g8x_1d713893-e8db-40ba-872c-e9d1650a56d0/speaker/0.log" Jan 31 10:36:59 crc kubenswrapper[4830]: I0131 10:36:59.251563 4830 scope.go:117] "RemoveContainer" containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" Jan 31 10:36:59 crc kubenswrapper[4830]: E0131 10:36:59.252219 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:37:11 crc kubenswrapper[4830]: I0131 10:37:11.252131 4830 scope.go:117] "RemoveContainer" containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" Jan 31 10:37:11 crc kubenswrapper[4830]: E0131 10:37:11.252957 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:37:14 crc kubenswrapper[4830]: I0131 10:37:14.800205 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc_2aa663f0-7298-40eb-a298-3173bffe5362/util/0.log" Jan 31 10:37:15 crc kubenswrapper[4830]: I0131 10:37:15.046116 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc_2aa663f0-7298-40eb-a298-3173bffe5362/util/0.log" Jan 31 10:37:15 crc kubenswrapper[4830]: I0131 10:37:15.058167 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc_2aa663f0-7298-40eb-a298-3173bffe5362/pull/0.log" Jan 31 10:37:15 crc kubenswrapper[4830]: I0131 10:37:15.071579 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc_2aa663f0-7298-40eb-a298-3173bffe5362/pull/0.log" Jan 31 10:37:16 crc kubenswrapper[4830]: I0131 10:37:16.196384 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc_2aa663f0-7298-40eb-a298-3173bffe5362/util/0.log" Jan 31 10:37:16 crc kubenswrapper[4830]: I0131 10:37:16.197918 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc_2aa663f0-7298-40eb-a298-3173bffe5362/extract/0.log" Jan 31 10:37:16 crc kubenswrapper[4830]: I0131 10:37:16.217319 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2sbgbc_2aa663f0-7298-40eb-a298-3173bffe5362/pull/0.log" Jan 31 10:37:16 crc kubenswrapper[4830]: I0131 10:37:16.483648 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln_6821ef6a-9d75-42b0-8d20-1ebbbabd7896/util/0.log" Jan 31 10:37:16 crc kubenswrapper[4830]: I0131 10:37:16.617685 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln_6821ef6a-9d75-42b0-8d20-1ebbbabd7896/pull/0.log" Jan 31 10:37:16 crc kubenswrapper[4830]: I0131 10:37:16.621978 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln_6821ef6a-9d75-42b0-8d20-1ebbbabd7896/util/0.log" Jan 31 10:37:16 crc kubenswrapper[4830]: I0131 10:37:16.682360 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln_6821ef6a-9d75-42b0-8d20-1ebbbabd7896/pull/0.log" Jan 31 10:37:16 crc kubenswrapper[4830]: I0131 10:37:16.875770 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln_6821ef6a-9d75-42b0-8d20-1ebbbabd7896/pull/0.log" Jan 31 10:37:16 crc kubenswrapper[4830]: I0131 10:37:16.890231 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln_6821ef6a-9d75-42b0-8d20-1ebbbabd7896/extract/0.log" Jan 31 10:37:16 crc kubenswrapper[4830]: I0131 10:37:16.936155 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4plln_6821ef6a-9d75-42b0-8d20-1ebbbabd7896/util/0.log" Jan 31 10:37:17 crc kubenswrapper[4830]: I0131 10:37:17.105338 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7_e9a3487d-9bd3-40fe-9096-22c0a7afb0ec/util/0.log" Jan 31 10:37:17 crc kubenswrapper[4830]: I0131 10:37:17.322985 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7_e9a3487d-9bd3-40fe-9096-22c0a7afb0ec/pull/0.log" Jan 31 10:37:17 crc kubenswrapper[4830]: I0131 10:37:17.332646 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7_e9a3487d-9bd3-40fe-9096-22c0a7afb0ec/util/0.log" Jan 31 10:37:17 crc kubenswrapper[4830]: I0131 10:37:17.352137 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7_e9a3487d-9bd3-40fe-9096-22c0a7afb0ec/pull/0.log" Jan 31 10:37:17 crc kubenswrapper[4830]: I0131 10:37:17.575037 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7_e9a3487d-9bd3-40fe-9096-22c0a7afb0ec/pull/0.log" Jan 31 10:37:17 crc kubenswrapper[4830]: I0131 10:37:17.631083 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7_e9a3487d-9bd3-40fe-9096-22c0a7afb0ec/util/0.log" Jan 31 10:37:17 crc kubenswrapper[4830]: I0131 10:37:17.656862 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bgpsr7_e9a3487d-9bd3-40fe-9096-22c0a7afb0ec/extract/0.log" Jan 31 10:37:17 crc kubenswrapper[4830]: I0131 10:37:17.830452 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl_2e45971e-893f-4389-b33c-688089a3f7ec/util/0.log" Jan 31 10:37:18 crc kubenswrapper[4830]: I0131 10:37:18.696357 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl_2e45971e-893f-4389-b33c-688089a3f7ec/pull/0.log" Jan 31 10:37:18 crc kubenswrapper[4830]: I0131 10:37:18.735029 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl_2e45971e-893f-4389-b33c-688089a3f7ec/util/0.log" Jan 31 10:37:18 crc kubenswrapper[4830]: I0131 10:37:18.760124 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl_2e45971e-893f-4389-b33c-688089a3f7ec/pull/0.log" Jan 31 10:37:19 crc kubenswrapper[4830]: I0131 10:37:19.015522 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl_2e45971e-893f-4389-b33c-688089a3f7ec/util/0.log" Jan 31 10:37:19 crc kubenswrapper[4830]: I0131 10:37:19.034611 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl_2e45971e-893f-4389-b33c-688089a3f7ec/pull/0.log" Jan 31 10:37:19 crc kubenswrapper[4830]: I0131 10:37:19.069159 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136q2sl_2e45971e-893f-4389-b33c-688089a3f7ec/extract/0.log" Jan 31 10:37:19 crc kubenswrapper[4830]: I0131 10:37:19.103006 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5_2a5ef80a-1adb-44b7-92a8-91e7a020a693/util/0.log" Jan 31 10:37:19 crc kubenswrapper[4830]: I0131 10:37:19.333906 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5_2a5ef80a-1adb-44b7-92a8-91e7a020a693/util/0.log" Jan 31 10:37:19 crc kubenswrapper[4830]: I0131 10:37:19.337068 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5_2a5ef80a-1adb-44b7-92a8-91e7a020a693/pull/0.log" Jan 31 10:37:19 crc kubenswrapper[4830]: I0131 10:37:19.443492 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5_2a5ef80a-1adb-44b7-92a8-91e7a020a693/pull/0.log" Jan 31 10:37:19 crc kubenswrapper[4830]: I0131 10:37:19.629103 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5_2a5ef80a-1adb-44b7-92a8-91e7a020a693/extract/0.log" Jan 31 10:37:19 crc kubenswrapper[4830]: I0131 10:37:19.647096 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5_2a5ef80a-1adb-44b7-92a8-91e7a020a693/util/0.log" Jan 31 10:37:19 crc kubenswrapper[4830]: I0131 10:37:19.698945 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tt9f5_2a5ef80a-1adb-44b7-92a8-91e7a020a693/pull/0.log" Jan 31 10:37:19 crc kubenswrapper[4830]: I0131 10:37:19.732495 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jwvm4_14550547-ce63-48cc-800e-b74235d0daa1/extract-utilities/0.log" Jan 31 10:37:19 crc kubenswrapper[4830]: I0131 10:37:19.943289 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jwvm4_14550547-ce63-48cc-800e-b74235d0daa1/extract-content/0.log" Jan 31 10:37:19 crc kubenswrapper[4830]: I0131 10:37:19.958571 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jwvm4_14550547-ce63-48cc-800e-b74235d0daa1/extract-utilities/0.log" Jan 31 10:37:19 crc kubenswrapper[4830]: I0131 10:37:19.963553 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jwvm4_14550547-ce63-48cc-800e-b74235d0daa1/extract-content/0.log" Jan 31 10:37:20 crc kubenswrapper[4830]: I0131 10:37:20.146595 4830 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jwvm4_14550547-ce63-48cc-800e-b74235d0daa1/extract-utilities/0.log" Jan 31 10:37:20 crc kubenswrapper[4830]: I0131 10:37:20.187690 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jwvm4_14550547-ce63-48cc-800e-b74235d0daa1/extract-content/0.log" Jan 31 10:37:20 crc kubenswrapper[4830]: I0131 10:37:20.358653 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jwvm4_14550547-ce63-48cc-800e-b74235d0daa1/registry-server/1.log" Jan 31 10:37:20 crc kubenswrapper[4830]: I0131 10:37:20.401013 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fcmv2_c361702a-d6db-4925-809d-f08c6dd88a7d/extract-utilities/0.log" Jan 31 10:37:20 crc kubenswrapper[4830]: I0131 10:37:20.673509 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fcmv2_c361702a-d6db-4925-809d-f08c6dd88a7d/extract-content/0.log" Jan 31 10:37:20 crc kubenswrapper[4830]: I0131 10:37:20.692905 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fcmv2_c361702a-d6db-4925-809d-f08c6dd88a7d/extract-utilities/0.log" Jan 31 10:37:20 crc kubenswrapper[4830]: I0131 10:37:20.729440 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fcmv2_c361702a-d6db-4925-809d-f08c6dd88a7d/extract-content/0.log" Jan 31 10:37:20 crc kubenswrapper[4830]: I0131 10:37:20.930561 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fcmv2_c361702a-d6db-4925-809d-f08c6dd88a7d/extract-content/0.log" Jan 31 10:37:20 crc kubenswrapper[4830]: I0131 10:37:20.952135 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fcmv2_c361702a-d6db-4925-809d-f08c6dd88a7d/extract-utilities/0.log" Jan 31 10:37:21 crc kubenswrapper[4830]: I0131 10:37:21.078617 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jwvm4_14550547-ce63-48cc-800e-b74235d0daa1/registry-server/0.log" Jan 31 10:37:21 crc kubenswrapper[4830]: I0131 10:37:21.156906 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-58x6p_b6c3d452-2742-4f91-9857-5f5e0b50f348/marketplace-operator/1.log" Jan 31 10:37:21 crc kubenswrapper[4830]: I0131 10:37:21.454750 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-58x6p_b6c3d452-2742-4f91-9857-5f5e0b50f348/marketplace-operator/0.log" Jan 31 10:37:21 crc kubenswrapper[4830]: I0131 10:37:21.506866 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fcmv2_c361702a-d6db-4925-809d-f08c6dd88a7d/registry-server/1.log" Jan 31 10:37:21 crc kubenswrapper[4830]: I0131 10:37:21.528222 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g5pvp_35d308f6-fcf3-4b01-b26e-5c1848d6ee7d/extract-utilities/0.log" Jan 31 10:37:21 crc kubenswrapper[4830]: I0131 10:37:21.761004 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g5pvp_35d308f6-fcf3-4b01-b26e-5c1848d6ee7d/extract-content/0.log" Jan 31 10:37:21 crc kubenswrapper[4830]: I0131 10:37:21.784067 4830 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g5pvp_35d308f6-fcf3-4b01-b26e-5c1848d6ee7d/extract-content/0.log" Jan 31 10:37:21 crc kubenswrapper[4830]: I0131 10:37:21.817968 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g5pvp_35d308f6-fcf3-4b01-b26e-5c1848d6ee7d/extract-utilities/0.log" Jan 31 10:37:22 crc kubenswrapper[4830]: I0131 10:37:22.037891 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g5pvp_35d308f6-fcf3-4b01-b26e-5c1848d6ee7d/extract-content/0.log" Jan 31 10:37:22 crc kubenswrapper[4830]: I0131 10:37:22.046108 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g5pvp_35d308f6-fcf3-4b01-b26e-5c1848d6ee7d/extract-utilities/0.log" Jan 31 10:37:22 crc kubenswrapper[4830]: I0131 10:37:22.064022 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fcmv2_c361702a-d6db-4925-809d-f08c6dd88a7d/registry-server/0.log" Jan 31 10:37:22 crc kubenswrapper[4830]: I0131 10:37:22.103898 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g5pvp_35d308f6-fcf3-4b01-b26e-5c1848d6ee7d/registry-server/1.log" Jan 31 10:37:22 crc kubenswrapper[4830]: I0131 10:37:22.285612 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g5pvp_35d308f6-fcf3-4b01-b26e-5c1848d6ee7d/registry-server/0.log" Jan 31 10:37:22 crc kubenswrapper[4830]: I0131 10:37:22.334776 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-56876_2626e876-9148-4165-a735-a5a1733c014d/extract-utilities/0.log" Jan 31 10:37:22 crc kubenswrapper[4830]: I0131 10:37:22.459598 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-56876_2626e876-9148-4165-a735-a5a1733c014d/extract-utilities/0.log" Jan 31 10:37:22 crc kubenswrapper[4830]: I0131 10:37:22.516538 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-56876_2626e876-9148-4165-a735-a5a1733c014d/extract-content/0.log" Jan 31 10:37:22 crc kubenswrapper[4830]: I0131 10:37:22.530849 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-56876_2626e876-9148-4165-a735-a5a1733c014d/extract-content/0.log" Jan 31 10:37:22 crc kubenswrapper[4830]: I0131 10:37:22.722517 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-56876_2626e876-9148-4165-a735-a5a1733c014d/extract-content/0.log" Jan 31 10:37:22 crc kubenswrapper[4830]: I0131 10:37:22.775365 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-56876_2626e876-9148-4165-a735-a5a1733c014d/extract-utilities/0.log" Jan 31 10:37:22 crc kubenswrapper[4830]: I0131 10:37:22.988955 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-56876_2626e876-9148-4165-a735-a5a1733c014d/registry-server/1.log" Jan 31 10:37:23 crc kubenswrapper[4830]: I0131 10:37:23.459683 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-56876_2626e876-9148-4165-a735-a5a1733c014d/registry-server/0.log" Jan 31 10:37:24 crc kubenswrapper[4830]: I0131 10:37:24.251914 4830 scope.go:117] "RemoveContainer" 
containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" Jan 31 10:37:24 crc kubenswrapper[4830]: E0131 10:37:24.252621 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:37:36 crc kubenswrapper[4830]: I0131 10:37:36.266287 4830 scope.go:117] "RemoveContainer" containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" Jan 31 10:37:36 crc kubenswrapper[4830]: E0131 10:37:36.267404 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:37:36 crc kubenswrapper[4830]: I0131 10:37:36.882493 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5d644b584c-bnxt4_419c93d3-0d80-4fbf-91cd-c88303e038e5/prometheus-operator-admission-webhook/0.log" Jan 31 10:37:36 crc kubenswrapper[4830]: I0131 10:37:36.894453 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5d644b584c-jfsw8_2b9a494b-8847-4bf7-820e-2739aa96a464/prometheus-operator-admission-webhook/0.log" Jan 31 10:37:36 crc kubenswrapper[4830]: I0131 10:37:36.947210 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-222kl_3addecb4-84c5-4b88-b751-b6a26db362be/prometheus-operator/0.log" Jan 31 10:37:37 crc kubenswrapper[4830]: I0131 10:37:37.101965 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-l59nt_1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48/operator/0.log" Jan 31 10:37:37 crc kubenswrapper[4830]: I0131 10:37:37.164254 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-l59nt_1ebf3f9f-75ef-4cfd-a7f7-d5fb556aeb48/operator/1.log" Jan 31 10:37:37 crc kubenswrapper[4830]: I0131 10:37:37.173225 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-swjf6_51e241ad-2d92-41fb-a218-1a14cd40534d/observability-ui-dashboards/0.log" Jan 31 10:37:37 crc kubenswrapper[4830]: I0131 10:37:37.196413 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-wtdqw_0af185f3-0cfa-4299-8eee-0e523d87504c/perses-operator/0.log" Jan 31 10:37:50 crc kubenswrapper[4830]: I0131 10:37:50.251201 4830 scope.go:117] "RemoveContainer" containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" Jan 31 10:37:50 crc kubenswrapper[4830]: E0131 10:37:50.252047 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:37:52 crc kubenswrapper[4830]: I0131 10:37:52.633830 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-688c9bff97-t8jpp_ce3329e2-9eca-4a04-bf1d-0578e12beaa5/manager/1.log" Jan 31 10:37:52 crc kubenswrapper[4830]: I0131 10:37:52.714206 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-688c9bff97-t8jpp_ce3329e2-9eca-4a04-bf1d-0578e12beaa5/kube-rbac-proxy/0.log" Jan 31 10:37:52 crc kubenswrapper[4830]: I0131 10:37:52.732501 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-688c9bff97-t8jpp_ce3329e2-9eca-4a04-bf1d-0578e12beaa5/manager/0.log" Jan 31 10:38:03 crc kubenswrapper[4830]: I0131 10:38:03.251827 4830 scope.go:117] "RemoveContainer" containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" Jan 31 10:38:03 crc kubenswrapper[4830]: E0131 10:38:03.252906 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:38:08 crc kubenswrapper[4830]: I0131 10:38:08.765523 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="2ca5d2f1-673e-4173-848a-8d32d33b8bcc" containerName="galera" probeResult="failure" output="command timed out" Jan 31 10:38:16 crc kubenswrapper[4830]: I0131 10:38:16.262546 4830 scope.go:117] "RemoveContainer" containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" Jan 31 10:38:16 crc kubenswrapper[4830]: E0131 10:38:16.263416 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:38:30 crc kubenswrapper[4830]: I0131 10:38:30.251331 4830 scope.go:117] "RemoveContainer" containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" Jan 31 10:38:30 crc kubenswrapper[4830]: E0131 10:38:30.252139 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:38:44 crc kubenswrapper[4830]: I0131 10:38:44.251917 4830 scope.go:117] "RemoveContainer" containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" Jan 31 10:38:44 crc kubenswrapper[4830]: E0131 10:38:44.253612 4830 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:38:56 crc kubenswrapper[4830]: I0131 10:38:56.263715 4830 scope.go:117] "RemoveContainer" containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" Jan 31 10:38:56 crc kubenswrapper[4830]: E0131 10:38:56.265011 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:39:11 crc kubenswrapper[4830]: I0131 10:39:11.252392 4830 scope.go:117] "RemoveContainer" containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" Jan 31 10:39:11 crc kubenswrapper[4830]: E0131 10:39:11.254830 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:39:17 crc kubenswrapper[4830]: I0131 10:39:17.796902 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9zdm5"] Jan 31 10:39:17 crc kubenswrapper[4830]: E0131 10:39:17.798322 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef78c6cc-b893-40f1-a0f2-6a8032bd318d" containerName="container-00" Jan 31 10:39:17 crc kubenswrapper[4830]: I0131 10:39:17.798339 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef78c6cc-b893-40f1-a0f2-6a8032bd318d" containerName="container-00" Jan 31 10:39:17 crc kubenswrapper[4830]: I0131 10:39:17.798624 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef78c6cc-b893-40f1-a0f2-6a8032bd318d" containerName="container-00" Jan 31 10:39:17 crc kubenswrapper[4830]: I0131 10:39:17.804429 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9zdm5" Jan 31 10:39:17 crc kubenswrapper[4830]: I0131 10:39:17.919537 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a-utilities\") pod \"redhat-marketplace-9zdm5\" (UID: \"cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a\") " pod="openshift-marketplace/redhat-marketplace-9zdm5" Jan 31 10:39:17 crc kubenswrapper[4830]: I0131 10:39:17.920273 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdgwl\" (UniqueName: \"kubernetes.io/projected/cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a-kube-api-access-zdgwl\") pod \"redhat-marketplace-9zdm5\" (UID: \"cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a\") " pod="openshift-marketplace/redhat-marketplace-9zdm5" Jan 31 10:39:17 crc kubenswrapper[4830]: I0131 10:39:17.920352 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a-catalog-content\") pod \"redhat-marketplace-9zdm5\" (UID: \"cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a\") " pod="openshift-marketplace/redhat-marketplace-9zdm5" Jan 31 10:39:18 crc kubenswrapper[4830]: I0131 10:39:18.022186 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdgwl\" (UniqueName: \"kubernetes.io/projected/cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a-kube-api-access-zdgwl\") pod \"redhat-marketplace-9zdm5\" (UID: \"cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a\") " pod="openshift-marketplace/redhat-marketplace-9zdm5" Jan 31 10:39:18 crc kubenswrapper[4830]: I0131 10:39:18.022242 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a-catalog-content\") pod \"redhat-marketplace-9zdm5\" (UID: \"cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a\") " pod="openshift-marketplace/redhat-marketplace-9zdm5" Jan 31 10:39:18 crc kubenswrapper[4830]: I0131 10:39:18.022294 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a-utilities\") pod \"redhat-marketplace-9zdm5\" (UID: \"cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a\") " pod="openshift-marketplace/redhat-marketplace-9zdm5" Jan 31 10:39:18 crc kubenswrapper[4830]: I0131 10:39:18.022842 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a-utilities\") pod \"redhat-marketplace-9zdm5\" (UID: \"cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a\") " pod="openshift-marketplace/redhat-marketplace-9zdm5" Jan 31 10:39:18 crc kubenswrapper[4830]: I0131 10:39:18.033452 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a-catalog-content\") pod \"redhat-marketplace-9zdm5\" (UID: \"cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a\") " pod="openshift-marketplace/redhat-marketplace-9zdm5" Jan 31 10:39:18 crc kubenswrapper[4830]: I0131 10:39:18.076111 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdgwl\" (UniqueName: \"kubernetes.io/projected/cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a-kube-api-access-zdgwl\") pod 
\"redhat-marketplace-9zdm5\" (UID: \"cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a\") " pod="openshift-marketplace/redhat-marketplace-9zdm5" Jan 31 10:39:18 crc kubenswrapper[4830]: I0131 10:39:18.184108 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zdm5"] Jan 31 10:39:18 crc kubenswrapper[4830]: I0131 10:39:18.277290 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9zdm5" Jan 31 10:39:22 crc kubenswrapper[4830]: I0131 10:39:22.251420 4830 scope.go:117] "RemoveContainer" containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" Jan 31 10:39:22 crc kubenswrapper[4830]: E0131 10:39:22.252271 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:39:22 crc kubenswrapper[4830]: I0131 10:39:22.917432 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zdm5"] Jan 31 10:39:22 crc kubenswrapper[4830]: W0131 10:39:22.976777 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd3034a6_7d20_4047_9f4e_a7f4cc8bbf6a.slice/crio-b8d5d3f465b58ad14552c1c8fbd33c21812b6b5da1af1f44e3e6a02c21fea3ba WatchSource:0}: Error finding container b8d5d3f465b58ad14552c1c8fbd33c21812b6b5da1af1f44e3e6a02c21fea3ba: Status 404 returned error can't find the container with id b8d5d3f465b58ad14552c1c8fbd33c21812b6b5da1af1f44e3e6a02c21fea3ba Jan 31 10:39:23 crc kubenswrapper[4830]: I0131 10:39:23.287245 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zdm5" event={"ID":"cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a","Type":"ContainerStarted","Data":"b8d5d3f465b58ad14552c1c8fbd33c21812b6b5da1af1f44e3e6a02c21fea3ba"} Jan 31 10:39:24 crc kubenswrapper[4830]: I0131 10:39:24.299488 4830 generic.go:334] "Generic (PLEG): container finished" podID="cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a" containerID="d339b3dd8d5528a639fba202ef4191c28f45eb1b5fd24f3e043f7b85f22de9b5" exitCode=0 Jan 31 10:39:24 crc kubenswrapper[4830]: I0131 10:39:24.299649 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zdm5" event={"ID":"cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a","Type":"ContainerDied","Data":"d339b3dd8d5528a639fba202ef4191c28f45eb1b5fd24f3e043f7b85f22de9b5"} Jan 31 10:39:24 crc kubenswrapper[4830]: I0131 10:39:24.333109 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 10:39:26 crc kubenswrapper[4830]: I0131 10:39:26.322526 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zdm5" event={"ID":"cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a","Type":"ContainerStarted","Data":"e4e4e37247a007e0f3f6eeadf0d7bf10239e2b99a7501f04b32331f68e7b2887"} Jan 31 10:39:28 crc kubenswrapper[4830]: I0131 10:39:28.345583 4830 generic.go:334] "Generic (PLEG): container finished" podID="cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a" containerID="e4e4e37247a007e0f3f6eeadf0d7bf10239e2b99a7501f04b32331f68e7b2887" exitCode=0 Jan 31 10:39:28 crc 
kubenswrapper[4830]: I0131 10:39:28.345665 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zdm5" event={"ID":"cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a","Type":"ContainerDied","Data":"e4e4e37247a007e0f3f6eeadf0d7bf10239e2b99a7501f04b32331f68e7b2887"} Jan 31 10:39:29 crc kubenswrapper[4830]: I0131 10:39:29.359673 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zdm5" event={"ID":"cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a","Type":"ContainerStarted","Data":"a324cbc8ae46ca306203cae7ce87c3d8d66403233f6f1b3e2b566c1be3cbbb81"} Jan 31 10:39:29 crc kubenswrapper[4830]: I0131 10:39:29.392957 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9zdm5" podStartSLOduration=7.924358833 podStartE2EDuration="12.392938244s" podCreationTimestamp="2026-01-31 10:39:17 +0000 UTC" firstStartedPulling="2026-01-31 10:39:24.301571265 +0000 UTC m=+5908.794933707" lastFinishedPulling="2026-01-31 10:39:28.770150676 +0000 UTC m=+5913.263513118" observedRunningTime="2026-01-31 10:39:29.38293824 +0000 UTC m=+5913.876300702" watchObservedRunningTime="2026-01-31 10:39:29.392938244 +0000 UTC m=+5913.886300686" Jan 31 10:39:30 crc kubenswrapper[4830]: I0131 10:39:30.694181 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v9bsq"] Jan 31 10:39:30 crc kubenswrapper[4830]: I0131 10:39:30.699272 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v9bsq" Jan 31 10:39:30 crc kubenswrapper[4830]: I0131 10:39:30.708592 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v9bsq"] Jan 31 10:39:30 crc kubenswrapper[4830]: I0131 10:39:30.827431 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19790a54-5dbd-43f0-8ab8-733de0afedbc-utilities\") pod \"redhat-operators-v9bsq\" (UID: \"19790a54-5dbd-43f0-8ab8-733de0afedbc\") " pod="openshift-marketplace/redhat-operators-v9bsq" Jan 31 10:39:30 crc kubenswrapper[4830]: I0131 10:39:30.827811 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7mj8\" (UniqueName: \"kubernetes.io/projected/19790a54-5dbd-43f0-8ab8-733de0afedbc-kube-api-access-w7mj8\") pod \"redhat-operators-v9bsq\" (UID: \"19790a54-5dbd-43f0-8ab8-733de0afedbc\") " pod="openshift-marketplace/redhat-operators-v9bsq" Jan 31 10:39:30 crc kubenswrapper[4830]: I0131 10:39:30.827870 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19790a54-5dbd-43f0-8ab8-733de0afedbc-catalog-content\") pod \"redhat-operators-v9bsq\" (UID: \"19790a54-5dbd-43f0-8ab8-733de0afedbc\") " pod="openshift-marketplace/redhat-operators-v9bsq" Jan 31 10:39:30 crc kubenswrapper[4830]: I0131 10:39:30.930636 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19790a54-5dbd-43f0-8ab8-733de0afedbc-utilities\") pod \"redhat-operators-v9bsq\" (UID: \"19790a54-5dbd-43f0-8ab8-733de0afedbc\") " pod="openshift-marketplace/redhat-operators-v9bsq" Jan 31 10:39:30 crc kubenswrapper[4830]: I0131 10:39:30.930748 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-w7mj8\" (UniqueName: \"kubernetes.io/projected/19790a54-5dbd-43f0-8ab8-733de0afedbc-kube-api-access-w7mj8\") pod \"redhat-operators-v9bsq\" (UID: \"19790a54-5dbd-43f0-8ab8-733de0afedbc\") " pod="openshift-marketplace/redhat-operators-v9bsq" Jan 31 10:39:30 crc kubenswrapper[4830]: I0131 10:39:30.930804 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19790a54-5dbd-43f0-8ab8-733de0afedbc-catalog-content\") pod \"redhat-operators-v9bsq\" (UID: \"19790a54-5dbd-43f0-8ab8-733de0afedbc\") " pod="openshift-marketplace/redhat-operators-v9bsq" Jan 31 10:39:30 crc kubenswrapper[4830]: I0131 10:39:30.931228 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19790a54-5dbd-43f0-8ab8-733de0afedbc-utilities\") pod \"redhat-operators-v9bsq\" (UID: \"19790a54-5dbd-43f0-8ab8-733de0afedbc\") " pod="openshift-marketplace/redhat-operators-v9bsq" Jan 31 10:39:30 crc kubenswrapper[4830]: I0131 10:39:30.931321 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19790a54-5dbd-43f0-8ab8-733de0afedbc-catalog-content\") pod \"redhat-operators-v9bsq\" (UID: \"19790a54-5dbd-43f0-8ab8-733de0afedbc\") " pod="openshift-marketplace/redhat-operators-v9bsq" Jan 31 10:39:30 crc kubenswrapper[4830]: I0131 10:39:30.952453 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7mj8\" (UniqueName: \"kubernetes.io/projected/19790a54-5dbd-43f0-8ab8-733de0afedbc-kube-api-access-w7mj8\") pod \"redhat-operators-v9bsq\" (UID: \"19790a54-5dbd-43f0-8ab8-733de0afedbc\") " pod="openshift-marketplace/redhat-operators-v9bsq" Jan 31 10:39:31 crc kubenswrapper[4830]: I0131 10:39:31.029769 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v9bsq" Jan 31 10:39:32 crc kubenswrapper[4830]: I0131 10:39:32.524137 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v9bsq"] Jan 31 10:39:33 crc kubenswrapper[4830]: I0131 10:39:33.252759 4830 scope.go:117] "RemoveContainer" containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" Jan 31 10:39:33 crc kubenswrapper[4830]: E0131 10:39:33.253576 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:39:33 crc kubenswrapper[4830]: I0131 10:39:33.423078 4830 generic.go:334] "Generic (PLEG): container finished" podID="19790a54-5dbd-43f0-8ab8-733de0afedbc" containerID="215ccd892b51d556c1b0b7d1a3a77235f645e99c1b4e716dcc88f4981d9586cc" exitCode=0 Jan 31 10:39:33 crc kubenswrapper[4830]: I0131 10:39:33.423131 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9bsq" event={"ID":"19790a54-5dbd-43f0-8ab8-733de0afedbc","Type":"ContainerDied","Data":"215ccd892b51d556c1b0b7d1a3a77235f645e99c1b4e716dcc88f4981d9586cc"} Jan 31 10:39:33 crc kubenswrapper[4830]: I0131 10:39:33.423162 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9bsq" event={"ID":"19790a54-5dbd-43f0-8ab8-733de0afedbc","Type":"ContainerStarted","Data":"695e1cc3b29e1b8730b350fc19ced9b6ed5e389b606e64c54004af86fd019be9"} Jan 31 10:39:34 crc kubenswrapper[4830]: I0131 10:39:34.436783 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9bsq" event={"ID":"19790a54-5dbd-43f0-8ab8-733de0afedbc","Type":"ContainerStarted","Data":"f8bf91981e127efe9e217fc00d709c615b83cd4c80a376669045ba0ed165c563"} Jan 31 10:39:38 crc kubenswrapper[4830]: I0131 10:39:38.278066 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9zdm5" Jan 31 10:39:38 crc kubenswrapper[4830]: I0131 10:39:38.278423 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9zdm5" Jan 31 10:39:38 crc kubenswrapper[4830]: I0131 10:39:38.342182 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9zdm5" Jan 31 10:39:38 crc kubenswrapper[4830]: I0131 10:39:38.530546 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9zdm5" Jan 31 10:39:38 crc kubenswrapper[4830]: I0131 10:39:38.584836 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zdm5"] Jan 31 10:39:40 crc kubenswrapper[4830]: I0131 10:39:40.502640 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9zdm5" podUID="cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a" containerName="registry-server" containerID="cri-o://a324cbc8ae46ca306203cae7ce87c3d8d66403233f6f1b3e2b566c1be3cbbb81" gracePeriod=2 Jan 31 10:39:41 crc kubenswrapper[4830]: I0131 10:39:41.254894 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9zdm5" Jan 31 10:39:41 crc kubenswrapper[4830]: I0131 10:39:41.411845 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdgwl\" (UniqueName: \"kubernetes.io/projected/cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a-kube-api-access-zdgwl\") pod \"cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a\" (UID: \"cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a\") " Jan 31 10:39:41 crc kubenswrapper[4830]: I0131 10:39:41.412012 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a-utilities\") pod \"cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a\" (UID: \"cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a\") " Jan 31 10:39:41 crc kubenswrapper[4830]: I0131 10:39:41.412110 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a-catalog-content\") pod \"cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a\" (UID: \"cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a\") " Jan 31 10:39:41 crc kubenswrapper[4830]: I0131 10:39:41.412923 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a-utilities" (OuterVolumeSpecName: "utilities") pod "cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a" (UID: "cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:39:41 crc kubenswrapper[4830]: I0131 10:39:41.433141 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a-kube-api-access-zdgwl" (OuterVolumeSpecName: "kube-api-access-zdgwl") pod "cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a" (UID: "cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a"). InnerVolumeSpecName "kube-api-access-zdgwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 10:39:41 crc kubenswrapper[4830]: I0131 10:39:41.449235 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a" (UID: "cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:39:41 crc kubenswrapper[4830]: I0131 10:39:41.514971 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdgwl\" (UniqueName: \"kubernetes.io/projected/cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a-kube-api-access-zdgwl\") on node \"crc\" DevicePath \"\"" Jan 31 10:39:41 crc kubenswrapper[4830]: I0131 10:39:41.515004 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 10:39:41 crc kubenswrapper[4830]: I0131 10:39:41.515013 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 10:39:41 crc kubenswrapper[4830]: I0131 10:39:41.516297 4830 generic.go:334] "Generic (PLEG): container finished" podID="cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a" containerID="a324cbc8ae46ca306203cae7ce87c3d8d66403233f6f1b3e2b566c1be3cbbb81" exitCode=0 Jan 31 10:39:41 crc kubenswrapper[4830]: I0131 10:39:41.516341 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zdm5" event={"ID":"cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a","Type":"ContainerDied","Data":"a324cbc8ae46ca306203cae7ce87c3d8d66403233f6f1b3e2b566c1be3cbbb81"} Jan 31 10:39:41 crc kubenswrapper[4830]: I0131 10:39:41.516372 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zdm5" event={"ID":"cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a","Type":"ContainerDied","Data":"b8d5d3f465b58ad14552c1c8fbd33c21812b6b5da1af1f44e3e6a02c21fea3ba"} Jan 31 10:39:41 crc kubenswrapper[4830]: I0131 10:39:41.516389 4830 scope.go:117] "RemoveContainer" containerID="a324cbc8ae46ca306203cae7ce87c3d8d66403233f6f1b3e2b566c1be3cbbb81" Jan 31 10:39:41 crc kubenswrapper[4830]: I0131 10:39:41.516457 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9zdm5" Jan 31 10:39:41 crc kubenswrapper[4830]: I0131 10:39:41.566227 4830 scope.go:117] "RemoveContainer" containerID="e4e4e37247a007e0f3f6eeadf0d7bf10239e2b99a7501f04b32331f68e7b2887" Jan 31 10:39:41 crc kubenswrapper[4830]: I0131 10:39:41.573682 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zdm5"] Jan 31 10:39:41 crc kubenswrapper[4830]: I0131 10:39:41.585452 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zdm5"] Jan 31 10:39:41 crc kubenswrapper[4830]: I0131 10:39:41.591428 4830 scope.go:117] "RemoveContainer" containerID="d339b3dd8d5528a639fba202ef4191c28f45eb1b5fd24f3e043f7b85f22de9b5" Jan 31 10:39:41 crc kubenswrapper[4830]: I0131 10:39:41.652462 4830 scope.go:117] "RemoveContainer" containerID="a324cbc8ae46ca306203cae7ce87c3d8d66403233f6f1b3e2b566c1be3cbbb81" Jan 31 10:39:41 crc kubenswrapper[4830]: E0131 10:39:41.670937 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a324cbc8ae46ca306203cae7ce87c3d8d66403233f6f1b3e2b566c1be3cbbb81\": container with ID starting with a324cbc8ae46ca306203cae7ce87c3d8d66403233f6f1b3e2b566c1be3cbbb81 not found: ID does not exist" containerID="a324cbc8ae46ca306203cae7ce87c3d8d66403233f6f1b3e2b566c1be3cbbb81" Jan 31 10:39:41 crc kubenswrapper[4830]: I0131 10:39:41.671606 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a324cbc8ae46ca306203cae7ce87c3d8d66403233f6f1b3e2b566c1be3cbbb81"} err="failed to get container status \"a324cbc8ae46ca306203cae7ce87c3d8d66403233f6f1b3e2b566c1be3cbbb81\": rpc error: code = NotFound desc = could not find container \"a324cbc8ae46ca306203cae7ce87c3d8d66403233f6f1b3e2b566c1be3cbbb81\": container with ID starting with a324cbc8ae46ca306203cae7ce87c3d8d66403233f6f1b3e2b566c1be3cbbb81 not found: ID does not exist" Jan 31 10:39:41 crc kubenswrapper[4830]: I0131 10:39:41.671739 4830 scope.go:117] "RemoveContainer" containerID="e4e4e37247a007e0f3f6eeadf0d7bf10239e2b99a7501f04b32331f68e7b2887" Jan 31 10:39:41 crc kubenswrapper[4830]: E0131 10:39:41.673022 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4e4e37247a007e0f3f6eeadf0d7bf10239e2b99a7501f04b32331f68e7b2887\": container with ID starting with e4e4e37247a007e0f3f6eeadf0d7bf10239e2b99a7501f04b32331f68e7b2887 not found: ID does not exist" containerID="e4e4e37247a007e0f3f6eeadf0d7bf10239e2b99a7501f04b32331f68e7b2887" Jan 31 10:39:41 crc kubenswrapper[4830]: I0131 10:39:41.673084 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4e4e37247a007e0f3f6eeadf0d7bf10239e2b99a7501f04b32331f68e7b2887"} err="failed to get container status \"e4e4e37247a007e0f3f6eeadf0d7bf10239e2b99a7501f04b32331f68e7b2887\": rpc error: code = NotFound desc = could not find container \"e4e4e37247a007e0f3f6eeadf0d7bf10239e2b99a7501f04b32331f68e7b2887\": container with ID starting with e4e4e37247a007e0f3f6eeadf0d7bf10239e2b99a7501f04b32331f68e7b2887 not found: ID does not exist" Jan 31 10:39:41 crc kubenswrapper[4830]: I0131 10:39:41.673117 4830 scope.go:117] "RemoveContainer" containerID="d339b3dd8d5528a639fba202ef4191c28f45eb1b5fd24f3e043f7b85f22de9b5" Jan 31 10:39:41 crc kubenswrapper[4830]: E0131 10:39:41.673578 4830 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d339b3dd8d5528a639fba202ef4191c28f45eb1b5fd24f3e043f7b85f22de9b5\": container with ID starting with d339b3dd8d5528a639fba202ef4191c28f45eb1b5fd24f3e043f7b85f22de9b5 not found: ID does not exist" containerID="d339b3dd8d5528a639fba202ef4191c28f45eb1b5fd24f3e043f7b85f22de9b5" Jan 31 10:39:41 crc kubenswrapper[4830]: I0131 10:39:41.673828 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d339b3dd8d5528a639fba202ef4191c28f45eb1b5fd24f3e043f7b85f22de9b5"} err="failed to get container status \"d339b3dd8d5528a639fba202ef4191c28f45eb1b5fd24f3e043f7b85f22de9b5\": rpc error: code = NotFound desc = could not find container \"d339b3dd8d5528a639fba202ef4191c28f45eb1b5fd24f3e043f7b85f22de9b5\": container with ID starting with d339b3dd8d5528a639fba202ef4191c28f45eb1b5fd24f3e043f7b85f22de9b5 not found: ID does not exist" Jan 31 10:39:42 crc kubenswrapper[4830]: I0131 10:39:42.264365 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a" path="/var/lib/kubelet/pods/cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a/volumes" Jan 31 10:39:42 crc kubenswrapper[4830]: I0131 10:39:42.531631 4830 generic.go:334] "Generic (PLEG): container finished" podID="19790a54-5dbd-43f0-8ab8-733de0afedbc" containerID="f8bf91981e127efe9e217fc00d709c615b83cd4c80a376669045ba0ed165c563" exitCode=0 Jan 31 10:39:42 crc kubenswrapper[4830]: I0131 10:39:42.531706 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9bsq" event={"ID":"19790a54-5dbd-43f0-8ab8-733de0afedbc","Type":"ContainerDied","Data":"f8bf91981e127efe9e217fc00d709c615b83cd4c80a376669045ba0ed165c563"} Jan 31 10:39:44 crc kubenswrapper[4830]: I0131 10:39:44.566547 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9bsq" event={"ID":"19790a54-5dbd-43f0-8ab8-733de0afedbc","Type":"ContainerStarted","Data":"2f9b73a67a21ca5bb8b17f99bcd60cc4e5629d7ebd8eaab3e3f929919aeb447f"} Jan 31 10:39:44 crc kubenswrapper[4830]: I0131 10:39:44.600151 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v9bsq" podStartSLOduration=4.689044892 podStartE2EDuration="14.600124811s" podCreationTimestamp="2026-01-31 10:39:30 +0000 UTC" firstStartedPulling="2026-01-31 10:39:33.427952403 +0000 UTC m=+5917.921314845" lastFinishedPulling="2026-01-31 10:39:43.339032322 +0000 UTC m=+5927.832394764" observedRunningTime="2026-01-31 10:39:44.590050524 +0000 UTC m=+5929.083412976" watchObservedRunningTime="2026-01-31 10:39:44.600124811 +0000 UTC m=+5929.093487253" Jan 31 10:39:48 crc kubenswrapper[4830]: I0131 10:39:48.258287 4830 scope.go:117] "RemoveContainer" containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" Jan 31 10:39:48 crc kubenswrapper[4830]: E0131 10:39:48.258897 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:39:51 crc kubenswrapper[4830]: I0131 10:39:51.030709 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-v9bsq" Jan 31 10:39:51 crc kubenswrapper[4830]: I0131 10:39:51.031830 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v9bsq" Jan 31 10:39:52 crc kubenswrapper[4830]: I0131 10:39:52.081288 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v9bsq" podUID="19790a54-5dbd-43f0-8ab8-733de0afedbc" containerName="registry-server" probeResult="failure" output=< Jan 31 10:39:52 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:39:52 crc kubenswrapper[4830]: > Jan 31 10:40:00 crc kubenswrapper[4830]: I0131 10:40:00.251885 4830 scope.go:117] "RemoveContainer" containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" Jan 31 10:40:00 crc kubenswrapper[4830]: E0131 10:40:00.252671 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:40:02 crc kubenswrapper[4830]: I0131 10:40:02.090021 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v9bsq" podUID="19790a54-5dbd-43f0-8ab8-733de0afedbc" containerName="registry-server" probeResult="failure" output=< Jan 31 10:40:02 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:40:02 crc kubenswrapper[4830]: > Jan 31 10:40:12 crc kubenswrapper[4830]: I0131 10:40:12.084408 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v9bsq" podUID="19790a54-5dbd-43f0-8ab8-733de0afedbc" containerName="registry-server" probeResult="failure" output=< Jan 31 10:40:12 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:40:12 crc kubenswrapper[4830]: > Jan 31 10:40:12 crc kubenswrapper[4830]: I0131 10:40:12.252006 4830 scope.go:117] "RemoveContainer" containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" Jan 31 10:40:12 crc kubenswrapper[4830]: E0131 10:40:12.252654 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gt7kd_openshift-machine-config-operator(158dbfda-9b0a-4809-9946-3c6ee2d082dc)\"" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" Jan 31 10:40:13 crc kubenswrapper[4830]: I0131 10:40:13.886015 4830 generic.go:334] "Generic (PLEG): container finished" podID="71ef938b-5a48-4f89-af62-86a680856139" containerID="cb17e5846c66d872c817ced4fee10babf0b444ab2dbed64e4f46d44710127835" exitCode=0 Jan 31 10:40:13 crc kubenswrapper[4830]: I0131 10:40:13.886554 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7b67z/must-gather-fcflx" event={"ID":"71ef938b-5a48-4f89-af62-86a680856139","Type":"ContainerDied","Data":"cb17e5846c66d872c817ced4fee10babf0b444ab2dbed64e4f46d44710127835"} Jan 31 10:40:13 crc kubenswrapper[4830]: I0131 10:40:13.887853 4830 scope.go:117] "RemoveContainer" 
containerID="cb17e5846c66d872c817ced4fee10babf0b444ab2dbed64e4f46d44710127835" Jan 31 10:40:14 crc kubenswrapper[4830]: I0131 10:40:14.344371 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7b67z_must-gather-fcflx_71ef938b-5a48-4f89-af62-86a680856139/gather/0.log" Jan 31 10:40:22 crc kubenswrapper[4830]: I0131 10:40:22.086795 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v9bsq" podUID="19790a54-5dbd-43f0-8ab8-733de0afedbc" containerName="registry-server" probeResult="failure" output=< Jan 31 10:40:22 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:40:22 crc kubenswrapper[4830]: > Jan 31 10:40:24 crc kubenswrapper[4830]: I0131 10:40:24.251993 4830 scope.go:117] "RemoveContainer" containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" Jan 31 10:40:25 crc kubenswrapper[4830]: I0131 10:40:25.011862 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerStarted","Data":"bb9aa35f3cbfb247dd90e386d734ccc6203ba2643fab7bbeba79cb65d07ea025"} Jan 31 10:40:32 crc kubenswrapper[4830]: I0131 10:40:32.083010 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v9bsq" podUID="19790a54-5dbd-43f0-8ab8-733de0afedbc" containerName="registry-server" probeResult="failure" output=< Jan 31 10:40:32 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:40:32 crc kubenswrapper[4830]: > Jan 31 10:40:35 crc kubenswrapper[4830]: I0131 10:40:35.123607 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7b67z/must-gather-fcflx"] Jan 31 10:40:35 crc kubenswrapper[4830]: I0131 10:40:35.124366 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-7b67z/must-gather-fcflx" podUID="71ef938b-5a48-4f89-af62-86a680856139" containerName="copy" containerID="cri-o://6273ca33f8a1fb1e8d19812845dc6358d83ca0e20300badc81a3dc8578790499" gracePeriod=2 Jan 31 10:40:35 crc kubenswrapper[4830]: I0131 10:40:35.142651 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7b67z/must-gather-fcflx"] Jan 31 10:40:35 crc kubenswrapper[4830]: I0131 10:40:35.748602 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7b67z_must-gather-fcflx_71ef938b-5a48-4f89-af62-86a680856139/copy/0.log" Jan 31 10:40:35 crc kubenswrapper[4830]: I0131 10:40:35.760913 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7b67z/must-gather-fcflx" Jan 31 10:40:35 crc kubenswrapper[4830]: I0131 10:40:35.861341 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/71ef938b-5a48-4f89-af62-86a680856139-must-gather-output\") pod \"71ef938b-5a48-4f89-af62-86a680856139\" (UID: \"71ef938b-5a48-4f89-af62-86a680856139\") " Jan 31 10:40:35 crc kubenswrapper[4830]: I0131 10:40:35.861554 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thxz4\" (UniqueName: \"kubernetes.io/projected/71ef938b-5a48-4f89-af62-86a680856139-kube-api-access-thxz4\") pod \"71ef938b-5a48-4f89-af62-86a680856139\" (UID: \"71ef938b-5a48-4f89-af62-86a680856139\") " Jan 31 10:40:35 crc kubenswrapper[4830]: I0131 10:40:35.930991 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71ef938b-5a48-4f89-af62-86a680856139-kube-api-access-thxz4" (OuterVolumeSpecName: "kube-api-access-thxz4") pod "71ef938b-5a48-4f89-af62-86a680856139" (UID: "71ef938b-5a48-4f89-af62-86a680856139"). InnerVolumeSpecName "kube-api-access-thxz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 10:40:35 crc kubenswrapper[4830]: I0131 10:40:35.964480 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thxz4\" (UniqueName: \"kubernetes.io/projected/71ef938b-5a48-4f89-af62-86a680856139-kube-api-access-thxz4\") on node \"crc\" DevicePath \"\"" Jan 31 10:40:36 crc kubenswrapper[4830]: I0131 10:40:36.147080 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7b67z_must-gather-fcflx_71ef938b-5a48-4f89-af62-86a680856139/copy/0.log" Jan 31 10:40:36 crc kubenswrapper[4830]: I0131 10:40:36.147845 4830 generic.go:334] "Generic (PLEG): container finished" podID="71ef938b-5a48-4f89-af62-86a680856139" containerID="6273ca33f8a1fb1e8d19812845dc6358d83ca0e20300badc81a3dc8578790499" exitCode=143 Jan 31 10:40:36 crc kubenswrapper[4830]: I0131 10:40:36.147892 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7b67z/must-gather-fcflx" Jan 31 10:40:36 crc kubenswrapper[4830]: I0131 10:40:36.147909 4830 scope.go:117] "RemoveContainer" containerID="6273ca33f8a1fb1e8d19812845dc6358d83ca0e20300badc81a3dc8578790499" Jan 31 10:40:36 crc kubenswrapper[4830]: I0131 10:40:36.178410 4830 scope.go:117] "RemoveContainer" containerID="cb17e5846c66d872c817ced4fee10babf0b444ab2dbed64e4f46d44710127835" Jan 31 10:40:36 crc kubenswrapper[4830]: I0131 10:40:36.315477 4830 scope.go:117] "RemoveContainer" containerID="6273ca33f8a1fb1e8d19812845dc6358d83ca0e20300badc81a3dc8578790499" Jan 31 10:40:36 crc kubenswrapper[4830]: E0131 10:40:36.315954 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6273ca33f8a1fb1e8d19812845dc6358d83ca0e20300badc81a3dc8578790499\": container with ID starting with 6273ca33f8a1fb1e8d19812845dc6358d83ca0e20300badc81a3dc8578790499 not found: ID does not exist" containerID="6273ca33f8a1fb1e8d19812845dc6358d83ca0e20300badc81a3dc8578790499" Jan 31 10:40:36 crc kubenswrapper[4830]: I0131 10:40:36.315990 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6273ca33f8a1fb1e8d19812845dc6358d83ca0e20300badc81a3dc8578790499"} err="failed to get container status \"6273ca33f8a1fb1e8d19812845dc6358d83ca0e20300badc81a3dc8578790499\": rpc error: code = NotFound desc = could not find container \"6273ca33f8a1fb1e8d19812845dc6358d83ca0e20300badc81a3dc8578790499\": container with ID starting with 6273ca33f8a1fb1e8d19812845dc6358d83ca0e20300badc81a3dc8578790499 not found: ID does not exist" Jan 31 10:40:36 crc kubenswrapper[4830]: I0131 10:40:36.316008 4830 scope.go:117] "RemoveContainer" containerID="cb17e5846c66d872c817ced4fee10babf0b444ab2dbed64e4f46d44710127835" Jan 31 10:40:36 crc kubenswrapper[4830]: E0131 10:40:36.316605 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb17e5846c66d872c817ced4fee10babf0b444ab2dbed64e4f46d44710127835\": container with ID starting with cb17e5846c66d872c817ced4fee10babf0b444ab2dbed64e4f46d44710127835 not found: ID does not exist" containerID="cb17e5846c66d872c817ced4fee10babf0b444ab2dbed64e4f46d44710127835" Jan 31 10:40:36 crc kubenswrapper[4830]: I0131 10:40:36.316631 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb17e5846c66d872c817ced4fee10babf0b444ab2dbed64e4f46d44710127835"} err="failed to get container status \"cb17e5846c66d872c817ced4fee10babf0b444ab2dbed64e4f46d44710127835\": rpc error: code = NotFound desc = could not find container \"cb17e5846c66d872c817ced4fee10babf0b444ab2dbed64e4f46d44710127835\": container with ID starting with cb17e5846c66d872c817ced4fee10babf0b444ab2dbed64e4f46d44710127835 not found: ID does not exist" Jan 31 10:40:36 crc kubenswrapper[4830]: I0131 10:40:36.496643 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71ef938b-5a48-4f89-af62-86a680856139-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "71ef938b-5a48-4f89-af62-86a680856139" (UID: "71ef938b-5a48-4f89-af62-86a680856139"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:40:36 crc kubenswrapper[4830]: I0131 10:40:36.577530 4830 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/71ef938b-5a48-4f89-af62-86a680856139-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 31 10:40:38 crc kubenswrapper[4830]: I0131 10:40:38.265542 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71ef938b-5a48-4f89-af62-86a680856139" path="/var/lib/kubelet/pods/71ef938b-5a48-4f89-af62-86a680856139/volumes" Jan 31 10:40:42 crc kubenswrapper[4830]: I0131 10:40:42.082940 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v9bsq" podUID="19790a54-5dbd-43f0-8ab8-733de0afedbc" containerName="registry-server" probeResult="failure" output=< Jan 31 10:40:42 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:40:42 crc kubenswrapper[4830]: > Jan 31 10:40:51 crc kubenswrapper[4830]: I0131 10:40:51.579490 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cgvpr"] Jan 31 10:40:51 crc kubenswrapper[4830]: E0131 10:40:51.580504 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ef938b-5a48-4f89-af62-86a680856139" containerName="copy" Jan 31 10:40:51 crc kubenswrapper[4830]: I0131 10:40:51.580518 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ef938b-5a48-4f89-af62-86a680856139" containerName="copy" Jan 31 10:40:51 crc kubenswrapper[4830]: E0131 10:40:51.580532 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a" containerName="extract-utilities" Jan 31 10:40:51 crc kubenswrapper[4830]: I0131 10:40:51.580538 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a" containerName="extract-utilities" Jan 31 10:40:51 crc kubenswrapper[4830]: E0131 10:40:51.580552 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a" containerName="extract-content" Jan 31 10:40:51 crc kubenswrapper[4830]: I0131 10:40:51.580558 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a" containerName="extract-content" Jan 31 10:40:51 crc kubenswrapper[4830]: E0131 10:40:51.580601 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a" containerName="registry-server" Jan 31 10:40:51 crc kubenswrapper[4830]: I0131 10:40:51.580609 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a" containerName="registry-server" Jan 31 10:40:51 crc kubenswrapper[4830]: E0131 10:40:51.580628 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ef938b-5a48-4f89-af62-86a680856139" containerName="gather" Jan 31 10:40:51 crc kubenswrapper[4830]: I0131 10:40:51.580635 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ef938b-5a48-4f89-af62-86a680856139" containerName="gather" Jan 31 10:40:51 crc kubenswrapper[4830]: I0131 10:40:51.581017 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="71ef938b-5a48-4f89-af62-86a680856139" containerName="copy" Jan 31 10:40:51 crc kubenswrapper[4830]: I0131 10:40:51.581032 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd3034a6-7d20-4047-9f4e-a7f4cc8bbf6a" containerName="registry-server" Jan 31 10:40:51 crc kubenswrapper[4830]: 
I0131 10:40:51.581050 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="71ef938b-5a48-4f89-af62-86a680856139" containerName="gather" Jan 31 10:40:51 crc kubenswrapper[4830]: I0131 10:40:51.583100 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cgvpr" Jan 31 10:40:51 crc kubenswrapper[4830]: I0131 10:40:51.601413 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cgvpr"] Jan 31 10:40:51 crc kubenswrapper[4830]: I0131 10:40:51.706493 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0e443c8-637b-41e7-a729-5961eaa071c4-catalog-content\") pod \"certified-operators-cgvpr\" (UID: \"c0e443c8-637b-41e7-a729-5961eaa071c4\") " pod="openshift-marketplace/certified-operators-cgvpr" Jan 31 10:40:51 crc kubenswrapper[4830]: I0131 10:40:51.706672 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0e443c8-637b-41e7-a729-5961eaa071c4-utilities\") pod \"certified-operators-cgvpr\" (UID: \"c0e443c8-637b-41e7-a729-5961eaa071c4\") " pod="openshift-marketplace/certified-operators-cgvpr" Jan 31 10:40:51 crc kubenswrapper[4830]: I0131 10:40:51.706717 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2b49\" (UniqueName: \"kubernetes.io/projected/c0e443c8-637b-41e7-a729-5961eaa071c4-kube-api-access-v2b49\") pod \"certified-operators-cgvpr\" (UID: \"c0e443c8-637b-41e7-a729-5961eaa071c4\") " pod="openshift-marketplace/certified-operators-cgvpr" Jan 31 10:40:51 crc kubenswrapper[4830]: I0131 10:40:51.808824 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0e443c8-637b-41e7-a729-5961eaa071c4-catalog-content\") pod \"certified-operators-cgvpr\" (UID: \"c0e443c8-637b-41e7-a729-5961eaa071c4\") " pod="openshift-marketplace/certified-operators-cgvpr" Jan 31 10:40:51 crc kubenswrapper[4830]: I0131 10:40:51.808964 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0e443c8-637b-41e7-a729-5961eaa071c4-utilities\") pod \"certified-operators-cgvpr\" (UID: \"c0e443c8-637b-41e7-a729-5961eaa071c4\") " pod="openshift-marketplace/certified-operators-cgvpr" Jan 31 10:40:51 crc kubenswrapper[4830]: I0131 10:40:51.809006 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2b49\" (UniqueName: \"kubernetes.io/projected/c0e443c8-637b-41e7-a729-5961eaa071c4-kube-api-access-v2b49\") pod \"certified-operators-cgvpr\" (UID: \"c0e443c8-637b-41e7-a729-5961eaa071c4\") " pod="openshift-marketplace/certified-operators-cgvpr" Jan 31 10:40:51 crc kubenswrapper[4830]: I0131 10:40:51.809374 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0e443c8-637b-41e7-a729-5961eaa071c4-catalog-content\") pod \"certified-operators-cgvpr\" (UID: \"c0e443c8-637b-41e7-a729-5961eaa071c4\") " pod="openshift-marketplace/certified-operators-cgvpr" Jan 31 10:40:51 crc kubenswrapper[4830]: I0131 10:40:51.809457 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c0e443c8-637b-41e7-a729-5961eaa071c4-utilities\") pod \"certified-operators-cgvpr\" (UID: \"c0e443c8-637b-41e7-a729-5961eaa071c4\") " pod="openshift-marketplace/certified-operators-cgvpr" Jan 31 10:40:51 crc kubenswrapper[4830]: I0131 10:40:51.902893 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2b49\" (UniqueName: \"kubernetes.io/projected/c0e443c8-637b-41e7-a729-5961eaa071c4-kube-api-access-v2b49\") pod \"certified-operators-cgvpr\" (UID: \"c0e443c8-637b-41e7-a729-5961eaa071c4\") " pod="openshift-marketplace/certified-operators-cgvpr" Jan 31 10:40:51 crc kubenswrapper[4830]: I0131 10:40:51.913405 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cgvpr" Jan 31 10:40:52 crc kubenswrapper[4830]: I0131 10:40:52.112520 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v9bsq" podUID="19790a54-5dbd-43f0-8ab8-733de0afedbc" containerName="registry-server" probeResult="failure" output=< Jan 31 10:40:52 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:40:52 crc kubenswrapper[4830]: > Jan 31 10:40:52 crc kubenswrapper[4830]: I0131 10:40:52.552593 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cgvpr"] Jan 31 10:40:53 crc kubenswrapper[4830]: I0131 10:40:53.350512 4830 generic.go:334] "Generic (PLEG): container finished" podID="c0e443c8-637b-41e7-a729-5961eaa071c4" containerID="23117677057bf3bbb8a064fd36740f21954b1e6250d77c3b0eaad065737f5cb5" exitCode=0 Jan 31 10:40:53 crc kubenswrapper[4830]: I0131 10:40:53.350557 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cgvpr" event={"ID":"c0e443c8-637b-41e7-a729-5961eaa071c4","Type":"ContainerDied","Data":"23117677057bf3bbb8a064fd36740f21954b1e6250d77c3b0eaad065737f5cb5"} Jan 31 10:40:53 crc kubenswrapper[4830]: I0131 10:40:53.350583 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cgvpr" event={"ID":"c0e443c8-637b-41e7-a729-5961eaa071c4","Type":"ContainerStarted","Data":"561fc57d04903d3ddd4a0f1fe9c8ea77eed50aa1318e685614b4483ba0bc755c"} Jan 31 10:40:55 crc kubenswrapper[4830]: I0131 10:40:55.373952 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cgvpr" event={"ID":"c0e443c8-637b-41e7-a729-5961eaa071c4","Type":"ContainerStarted","Data":"28fb01f171cc07cb8384dc76b40e1a5a8e1c2a874de86f77dc164fdd93a9d6ad"} Jan 31 10:40:59 crc kubenswrapper[4830]: I0131 10:40:59.430636 4830 generic.go:334] "Generic (PLEG): container finished" podID="c0e443c8-637b-41e7-a729-5961eaa071c4" containerID="28fb01f171cc07cb8384dc76b40e1a5a8e1c2a874de86f77dc164fdd93a9d6ad" exitCode=0 Jan 31 10:40:59 crc kubenswrapper[4830]: I0131 10:40:59.430744 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cgvpr" event={"ID":"c0e443c8-637b-41e7-a729-5961eaa071c4","Type":"ContainerDied","Data":"28fb01f171cc07cb8384dc76b40e1a5a8e1c2a874de86f77dc164fdd93a9d6ad"} Jan 31 10:41:01 crc kubenswrapper[4830]: I0131 10:41:01.456397 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cgvpr" event={"ID":"c0e443c8-637b-41e7-a729-5961eaa071c4","Type":"ContainerStarted","Data":"d19bd085786a7fe480f81784b826bb1d50bb5599d17ae3a61986ec98917c44e7"} Jan 31 10:41:01 crc 
kubenswrapper[4830]: I0131 10:41:01.493577 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cgvpr" podStartSLOduration=3.631599348 podStartE2EDuration="10.493549454s" podCreationTimestamp="2026-01-31 10:40:51 +0000 UTC" firstStartedPulling="2026-01-31 10:40:53.353405123 +0000 UTC m=+5997.846767565" lastFinishedPulling="2026-01-31 10:41:00.215355239 +0000 UTC m=+6004.708717671" observedRunningTime="2026-01-31 10:41:01.47827846 +0000 UTC m=+6005.971640902" watchObservedRunningTime="2026-01-31 10:41:01.493549454 +0000 UTC m=+6005.986911906" Jan 31 10:41:01 crc kubenswrapper[4830]: I0131 10:41:01.913737 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cgvpr" Jan 31 10:41:01 crc kubenswrapper[4830]: I0131 10:41:01.913780 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cgvpr" Jan 31 10:41:02 crc kubenswrapper[4830]: I0131 10:41:02.091533 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v9bsq" podUID="19790a54-5dbd-43f0-8ab8-733de0afedbc" containerName="registry-server" probeResult="failure" output=< Jan 31 10:41:02 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:41:02 crc kubenswrapper[4830]: > Jan 31 10:41:02 crc kubenswrapper[4830]: I0131 10:41:02.968026 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cgvpr" podUID="c0e443c8-637b-41e7-a729-5961eaa071c4" containerName="registry-server" probeResult="failure" output=< Jan 31 10:41:02 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:41:02 crc kubenswrapper[4830]: > Jan 31 10:41:11 crc kubenswrapper[4830]: I0131 10:41:11.093711 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v9bsq" Jan 31 10:41:11 crc kubenswrapper[4830]: I0131 10:41:11.154577 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v9bsq" Jan 31 10:41:11 crc kubenswrapper[4830]: I0131 10:41:11.333898 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v9bsq"] Jan 31 10:41:12 crc kubenswrapper[4830]: I0131 10:41:12.580040 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v9bsq" podUID="19790a54-5dbd-43f0-8ab8-733de0afedbc" containerName="registry-server" containerID="cri-o://2f9b73a67a21ca5bb8b17f99bcd60cc4e5629d7ebd8eaab3e3f929919aeb447f" gracePeriod=2 Jan 31 10:41:12 crc kubenswrapper[4830]: I0131 10:41:12.960365 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cgvpr" podUID="c0e443c8-637b-41e7-a729-5961eaa071c4" containerName="registry-server" probeResult="failure" output=< Jan 31 10:41:12 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:41:12 crc kubenswrapper[4830]: > Jan 31 10:41:13 crc kubenswrapper[4830]: I0131 10:41:13.326035 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v9bsq" Jan 31 10:41:13 crc kubenswrapper[4830]: I0131 10:41:13.429589 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19790a54-5dbd-43f0-8ab8-733de0afedbc-utilities\") pod \"19790a54-5dbd-43f0-8ab8-733de0afedbc\" (UID: \"19790a54-5dbd-43f0-8ab8-733de0afedbc\") " Jan 31 10:41:13 crc kubenswrapper[4830]: I0131 10:41:13.430243 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19790a54-5dbd-43f0-8ab8-733de0afedbc-catalog-content\") pod \"19790a54-5dbd-43f0-8ab8-733de0afedbc\" (UID: \"19790a54-5dbd-43f0-8ab8-733de0afedbc\") " Jan 31 10:41:13 crc kubenswrapper[4830]: I0131 10:41:13.430306 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19790a54-5dbd-43f0-8ab8-733de0afedbc-utilities" (OuterVolumeSpecName: "utilities") pod "19790a54-5dbd-43f0-8ab8-733de0afedbc" (UID: "19790a54-5dbd-43f0-8ab8-733de0afedbc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:41:13 crc kubenswrapper[4830]: I0131 10:41:13.430372 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7mj8\" (UniqueName: \"kubernetes.io/projected/19790a54-5dbd-43f0-8ab8-733de0afedbc-kube-api-access-w7mj8\") pod \"19790a54-5dbd-43f0-8ab8-733de0afedbc\" (UID: \"19790a54-5dbd-43f0-8ab8-733de0afedbc\") " Jan 31 10:41:13 crc kubenswrapper[4830]: I0131 10:41:13.431057 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19790a54-5dbd-43f0-8ab8-733de0afedbc-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 10:41:13 crc kubenswrapper[4830]: I0131 10:41:13.443098 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19790a54-5dbd-43f0-8ab8-733de0afedbc-kube-api-access-w7mj8" (OuterVolumeSpecName: "kube-api-access-w7mj8") pod "19790a54-5dbd-43f0-8ab8-733de0afedbc" (UID: "19790a54-5dbd-43f0-8ab8-733de0afedbc"). InnerVolumeSpecName "kube-api-access-w7mj8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 10:41:13 crc kubenswrapper[4830]: I0131 10:41:13.532794 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7mj8\" (UniqueName: \"kubernetes.io/projected/19790a54-5dbd-43f0-8ab8-733de0afedbc-kube-api-access-w7mj8\") on node \"crc\" DevicePath \"\"" Jan 31 10:41:13 crc kubenswrapper[4830]: I0131 10:41:13.593223 4830 generic.go:334] "Generic (PLEG): container finished" podID="19790a54-5dbd-43f0-8ab8-733de0afedbc" containerID="2f9b73a67a21ca5bb8b17f99bcd60cc4e5629d7ebd8eaab3e3f929919aeb447f" exitCode=0 Jan 31 10:41:13 crc kubenswrapper[4830]: I0131 10:41:13.593269 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9bsq" event={"ID":"19790a54-5dbd-43f0-8ab8-733de0afedbc","Type":"ContainerDied","Data":"2f9b73a67a21ca5bb8b17f99bcd60cc4e5629d7ebd8eaab3e3f929919aeb447f"} Jan 31 10:41:13 crc kubenswrapper[4830]: I0131 10:41:13.593301 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9bsq" event={"ID":"19790a54-5dbd-43f0-8ab8-733de0afedbc","Type":"ContainerDied","Data":"695e1cc3b29e1b8730b350fc19ced9b6ed5e389b606e64c54004af86fd019be9"} Jan 31 10:41:13 crc kubenswrapper[4830]: I0131 10:41:13.593327 4830 scope.go:117] "RemoveContainer" containerID="2f9b73a67a21ca5bb8b17f99bcd60cc4e5629d7ebd8eaab3e3f929919aeb447f" Jan 31 10:41:13 crc kubenswrapper[4830]: I0131 10:41:13.593347 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v9bsq" Jan 31 10:41:13 crc kubenswrapper[4830]: I0131 10:41:13.611154 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19790a54-5dbd-43f0-8ab8-733de0afedbc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19790a54-5dbd-43f0-8ab8-733de0afedbc" (UID: "19790a54-5dbd-43f0-8ab8-733de0afedbc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:41:13 crc kubenswrapper[4830]: I0131 10:41:13.616091 4830 scope.go:117] "RemoveContainer" containerID="f8bf91981e127efe9e217fc00d709c615b83cd4c80a376669045ba0ed165c563" Jan 31 10:41:13 crc kubenswrapper[4830]: I0131 10:41:13.636300 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19790a54-5dbd-43f0-8ab8-733de0afedbc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 10:41:13 crc kubenswrapper[4830]: I0131 10:41:13.652799 4830 scope.go:117] "RemoveContainer" containerID="215ccd892b51d556c1b0b7d1a3a77235f645e99c1b4e716dcc88f4981d9586cc" Jan 31 10:41:13 crc kubenswrapper[4830]: I0131 10:41:13.717272 4830 scope.go:117] "RemoveContainer" containerID="2f9b73a67a21ca5bb8b17f99bcd60cc4e5629d7ebd8eaab3e3f929919aeb447f" Jan 31 10:41:13 crc kubenswrapper[4830]: E0131 10:41:13.718133 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f9b73a67a21ca5bb8b17f99bcd60cc4e5629d7ebd8eaab3e3f929919aeb447f\": container with ID starting with 2f9b73a67a21ca5bb8b17f99bcd60cc4e5629d7ebd8eaab3e3f929919aeb447f not found: ID does not exist" containerID="2f9b73a67a21ca5bb8b17f99bcd60cc4e5629d7ebd8eaab3e3f929919aeb447f" Jan 31 10:41:13 crc kubenswrapper[4830]: I0131 10:41:13.718191 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f9b73a67a21ca5bb8b17f99bcd60cc4e5629d7ebd8eaab3e3f929919aeb447f"} err="failed to get container status \"2f9b73a67a21ca5bb8b17f99bcd60cc4e5629d7ebd8eaab3e3f929919aeb447f\": rpc error: code = NotFound desc = could not find container \"2f9b73a67a21ca5bb8b17f99bcd60cc4e5629d7ebd8eaab3e3f929919aeb447f\": container with ID starting with 2f9b73a67a21ca5bb8b17f99bcd60cc4e5629d7ebd8eaab3e3f929919aeb447f not found: ID does not exist" Jan 31 10:41:13 crc kubenswrapper[4830]: I0131 10:41:13.718213 4830 scope.go:117] "RemoveContainer" containerID="f8bf91981e127efe9e217fc00d709c615b83cd4c80a376669045ba0ed165c563" Jan 31 10:41:13 crc kubenswrapper[4830]: E0131 10:41:13.718957 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8bf91981e127efe9e217fc00d709c615b83cd4c80a376669045ba0ed165c563\": container with ID starting with f8bf91981e127efe9e217fc00d709c615b83cd4c80a376669045ba0ed165c563 not found: ID does not exist" containerID="f8bf91981e127efe9e217fc00d709c615b83cd4c80a376669045ba0ed165c563" Jan 31 10:41:13 crc kubenswrapper[4830]: I0131 10:41:13.719008 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8bf91981e127efe9e217fc00d709c615b83cd4c80a376669045ba0ed165c563"} err="failed to get container status \"f8bf91981e127efe9e217fc00d709c615b83cd4c80a376669045ba0ed165c563\": rpc error: code = NotFound desc = could not find container \"f8bf91981e127efe9e217fc00d709c615b83cd4c80a376669045ba0ed165c563\": container with ID starting with f8bf91981e127efe9e217fc00d709c615b83cd4c80a376669045ba0ed165c563 not found: ID does not exist" Jan 31 10:41:13 crc kubenswrapper[4830]: I0131 10:41:13.719041 4830 scope.go:117] "RemoveContainer" containerID="215ccd892b51d556c1b0b7d1a3a77235f645e99c1b4e716dcc88f4981d9586cc" Jan 31 10:41:13 crc kubenswrapper[4830]: E0131 10:41:13.719546 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"215ccd892b51d556c1b0b7d1a3a77235f645e99c1b4e716dcc88f4981d9586cc\": container with ID starting with 215ccd892b51d556c1b0b7d1a3a77235f645e99c1b4e716dcc88f4981d9586cc not found: ID does not exist" containerID="215ccd892b51d556c1b0b7d1a3a77235f645e99c1b4e716dcc88f4981d9586cc" Jan 31 10:41:13 crc kubenswrapper[4830]: I0131 10:41:13.719577 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"215ccd892b51d556c1b0b7d1a3a77235f645e99c1b4e716dcc88f4981d9586cc"} err="failed to get container status \"215ccd892b51d556c1b0b7d1a3a77235f645e99c1b4e716dcc88f4981d9586cc\": rpc error: code = NotFound desc = could not find container \"215ccd892b51d556c1b0b7d1a3a77235f645e99c1b4e716dcc88f4981d9586cc\": container with ID starting with 215ccd892b51d556c1b0b7d1a3a77235f645e99c1b4e716dcc88f4981d9586cc not found: ID does not exist" Jan 31 10:41:13 crc kubenswrapper[4830]: I0131 10:41:13.942862 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v9bsq"] Jan 31 10:41:13 crc kubenswrapper[4830]: I0131 10:41:13.956542 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v9bsq"] Jan 31 10:41:14 crc kubenswrapper[4830]: I0131 10:41:14.264182 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19790a54-5dbd-43f0-8ab8-733de0afedbc" path="/var/lib/kubelet/pods/19790a54-5dbd-43f0-8ab8-733de0afedbc/volumes" Jan 31 10:41:22 crc kubenswrapper[4830]: I0131 10:41:22.969899 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cgvpr" podUID="c0e443c8-637b-41e7-a729-5961eaa071c4" containerName="registry-server" probeResult="failure" output=< Jan 31 10:41:22 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Jan 31 10:41:22 crc kubenswrapper[4830]: > Jan 31 10:41:31 crc kubenswrapper[4830]: I0131 10:41:31.962589 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cgvpr" Jan 31 10:41:32 crc kubenswrapper[4830]: I0131 10:41:32.015032 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cgvpr" Jan 31 10:41:32 crc kubenswrapper[4830]: I0131 10:41:32.201128 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cgvpr"] Jan 31 10:41:33 crc kubenswrapper[4830]: I0131 10:41:33.838919 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cgvpr" podUID="c0e443c8-637b-41e7-a729-5961eaa071c4" containerName="registry-server" containerID="cri-o://d19bd085786a7fe480f81784b826bb1d50bb5599d17ae3a61986ec98917c44e7" gracePeriod=2 Jan 31 10:41:35 crc kubenswrapper[4830]: I0131 10:41:34.356120 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cgvpr" Jan 31 10:41:35 crc kubenswrapper[4830]: I0131 10:41:34.508975 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0e443c8-637b-41e7-a729-5961eaa071c4-utilities\") pod \"c0e443c8-637b-41e7-a729-5961eaa071c4\" (UID: \"c0e443c8-637b-41e7-a729-5961eaa071c4\") " Jan 31 10:41:35 crc kubenswrapper[4830]: I0131 10:41:34.509771 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0e443c8-637b-41e7-a729-5961eaa071c4-catalog-content\") pod \"c0e443c8-637b-41e7-a729-5961eaa071c4\" (UID: \"c0e443c8-637b-41e7-a729-5961eaa071c4\") " Jan 31 10:41:35 crc kubenswrapper[4830]: I0131 10:41:34.509844 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0e443c8-637b-41e7-a729-5961eaa071c4-utilities" (OuterVolumeSpecName: "utilities") pod "c0e443c8-637b-41e7-a729-5961eaa071c4" (UID: "c0e443c8-637b-41e7-a729-5961eaa071c4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:41:35 crc kubenswrapper[4830]: I0131 10:41:34.509878 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2b49\" (UniqueName: \"kubernetes.io/projected/c0e443c8-637b-41e7-a729-5961eaa071c4-kube-api-access-v2b49\") pod \"c0e443c8-637b-41e7-a729-5961eaa071c4\" (UID: \"c0e443c8-637b-41e7-a729-5961eaa071c4\") " Jan 31 10:41:35 crc kubenswrapper[4830]: I0131 10:41:34.510641 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0e443c8-637b-41e7-a729-5961eaa071c4-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 10:41:35 crc kubenswrapper[4830]: I0131 10:41:34.516514 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0e443c8-637b-41e7-a729-5961eaa071c4-kube-api-access-v2b49" (OuterVolumeSpecName: "kube-api-access-v2b49") pod "c0e443c8-637b-41e7-a729-5961eaa071c4" (UID: "c0e443c8-637b-41e7-a729-5961eaa071c4"). InnerVolumeSpecName "kube-api-access-v2b49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 10:41:35 crc kubenswrapper[4830]: I0131 10:41:34.572933 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0e443c8-637b-41e7-a729-5961eaa071c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0e443c8-637b-41e7-a729-5961eaa071c4" (UID: "c0e443c8-637b-41e7-a729-5961eaa071c4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:41:35 crc kubenswrapper[4830]: I0131 10:41:34.612946 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0e443c8-637b-41e7-a729-5961eaa071c4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 10:41:35 crc kubenswrapper[4830]: I0131 10:41:34.613229 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2b49\" (UniqueName: \"kubernetes.io/projected/c0e443c8-637b-41e7-a729-5961eaa071c4-kube-api-access-v2b49\") on node \"crc\" DevicePath \"\"" Jan 31 10:41:35 crc kubenswrapper[4830]: I0131 10:41:34.852117 4830 generic.go:334] "Generic (PLEG): container finished" podID="c0e443c8-637b-41e7-a729-5961eaa071c4" containerID="d19bd085786a7fe480f81784b826bb1d50bb5599d17ae3a61986ec98917c44e7" exitCode=0 Jan 31 10:41:35 crc kubenswrapper[4830]: I0131 10:41:34.852160 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cgvpr" event={"ID":"c0e443c8-637b-41e7-a729-5961eaa071c4","Type":"ContainerDied","Data":"d19bd085786a7fe480f81784b826bb1d50bb5599d17ae3a61986ec98917c44e7"} Jan 31 10:41:35 crc kubenswrapper[4830]: I0131 10:41:34.852190 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cgvpr" event={"ID":"c0e443c8-637b-41e7-a729-5961eaa071c4","Type":"ContainerDied","Data":"561fc57d04903d3ddd4a0f1fe9c8ea77eed50aa1318e685614b4483ba0bc755c"} Jan 31 10:41:35 crc kubenswrapper[4830]: I0131 10:41:34.852211 4830 scope.go:117] "RemoveContainer" containerID="d19bd085786a7fe480f81784b826bb1d50bb5599d17ae3a61986ec98917c44e7" Jan 31 10:41:35 crc kubenswrapper[4830]: I0131 10:41:34.852290 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cgvpr" Jan 31 10:41:35 crc kubenswrapper[4830]: I0131 10:41:34.891072 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cgvpr"] Jan 31 10:41:35 crc kubenswrapper[4830]: I0131 10:41:34.900533 4830 scope.go:117] "RemoveContainer" containerID="28fb01f171cc07cb8384dc76b40e1a5a8e1c2a874de86f77dc164fdd93a9d6ad" Jan 31 10:41:35 crc kubenswrapper[4830]: I0131 10:41:34.900979 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cgvpr"] Jan 31 10:41:35 crc kubenswrapper[4830]: I0131 10:41:34.929165 4830 scope.go:117] "RemoveContainer" containerID="23117677057bf3bbb8a064fd36740f21954b1e6250d77c3b0eaad065737f5cb5" Jan 31 10:41:35 crc kubenswrapper[4830]: I0131 10:41:34.985935 4830 scope.go:117] "RemoveContainer" containerID="d19bd085786a7fe480f81784b826bb1d50bb5599d17ae3a61986ec98917c44e7" Jan 31 10:41:35 crc kubenswrapper[4830]: E0131 10:41:34.987015 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d19bd085786a7fe480f81784b826bb1d50bb5599d17ae3a61986ec98917c44e7\": container with ID starting with d19bd085786a7fe480f81784b826bb1d50bb5599d17ae3a61986ec98917c44e7 not found: ID does not exist" containerID="d19bd085786a7fe480f81784b826bb1d50bb5599d17ae3a61986ec98917c44e7" Jan 31 10:41:35 crc kubenswrapper[4830]: I0131 10:41:34.987076 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d19bd085786a7fe480f81784b826bb1d50bb5599d17ae3a61986ec98917c44e7"} err="failed to get container status \"d19bd085786a7fe480f81784b826bb1d50bb5599d17ae3a61986ec98917c44e7\": rpc error: code = NotFound desc = could not find container \"d19bd085786a7fe480f81784b826bb1d50bb5599d17ae3a61986ec98917c44e7\": container with ID starting with d19bd085786a7fe480f81784b826bb1d50bb5599d17ae3a61986ec98917c44e7 not found: ID does not exist" Jan 31 10:41:35 crc kubenswrapper[4830]: I0131 10:41:34.987104 4830 scope.go:117] "RemoveContainer" containerID="28fb01f171cc07cb8384dc76b40e1a5a8e1c2a874de86f77dc164fdd93a9d6ad" Jan 31 10:41:35 crc kubenswrapper[4830]: E0131 10:41:34.987644 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28fb01f171cc07cb8384dc76b40e1a5a8e1c2a874de86f77dc164fdd93a9d6ad\": container with ID starting with 28fb01f171cc07cb8384dc76b40e1a5a8e1c2a874de86f77dc164fdd93a9d6ad not found: ID does not exist" containerID="28fb01f171cc07cb8384dc76b40e1a5a8e1c2a874de86f77dc164fdd93a9d6ad" Jan 31 10:41:35 crc kubenswrapper[4830]: I0131 10:41:34.987671 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28fb01f171cc07cb8384dc76b40e1a5a8e1c2a874de86f77dc164fdd93a9d6ad"} err="failed to get container status \"28fb01f171cc07cb8384dc76b40e1a5a8e1c2a874de86f77dc164fdd93a9d6ad\": rpc error: code = NotFound desc = could not find container \"28fb01f171cc07cb8384dc76b40e1a5a8e1c2a874de86f77dc164fdd93a9d6ad\": container with ID starting with 28fb01f171cc07cb8384dc76b40e1a5a8e1c2a874de86f77dc164fdd93a9d6ad not found: ID does not exist" Jan 31 10:41:35 crc kubenswrapper[4830]: I0131 10:41:34.987690 4830 scope.go:117] "RemoveContainer" containerID="23117677057bf3bbb8a064fd36740f21954b1e6250d77c3b0eaad065737f5cb5" Jan 31 10:41:35 crc kubenswrapper[4830]: E0131 10:41:34.988056 4830 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"23117677057bf3bbb8a064fd36740f21954b1e6250d77c3b0eaad065737f5cb5\": container with ID starting with 23117677057bf3bbb8a064fd36740f21954b1e6250d77c3b0eaad065737f5cb5 not found: ID does not exist" containerID="23117677057bf3bbb8a064fd36740f21954b1e6250d77c3b0eaad065737f5cb5" Jan 31 10:41:35 crc kubenswrapper[4830]: I0131 10:41:34.988110 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23117677057bf3bbb8a064fd36740f21954b1e6250d77c3b0eaad065737f5cb5"} err="failed to get container status \"23117677057bf3bbb8a064fd36740f21954b1e6250d77c3b0eaad065737f5cb5\": rpc error: code = NotFound desc = could not find container \"23117677057bf3bbb8a064fd36740f21954b1e6250d77c3b0eaad065737f5cb5\": container with ID starting with 23117677057bf3bbb8a064fd36740f21954b1e6250d77c3b0eaad065737f5cb5 not found: ID does not exist" Jan 31 10:41:35 crc kubenswrapper[4830]: E0131 10:41:35.050478 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0e443c8_637b_41e7_a729_5961eaa071c4.slice\": RecentStats: unable to find data in memory cache]" Jan 31 10:41:36 crc kubenswrapper[4830]: I0131 10:41:36.263148 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0e443c8-637b-41e7-a729-5961eaa071c4" path="/var/lib/kubelet/pods/c0e443c8-637b-41e7-a729-5961eaa071c4/volumes" Jan 31 10:42:12 crc kubenswrapper[4830]: I0131 10:42:12.527572 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n6dkg"] Jan 31 10:42:12 crc kubenswrapper[4830]: E0131 10:42:12.528561 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0e443c8-637b-41e7-a729-5961eaa071c4" containerName="registry-server" Jan 31 10:42:12 crc kubenswrapper[4830]: I0131 10:42:12.528574 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0e443c8-637b-41e7-a729-5961eaa071c4" containerName="registry-server" Jan 31 10:42:12 crc kubenswrapper[4830]: E0131 10:42:12.528583 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19790a54-5dbd-43f0-8ab8-733de0afedbc" containerName="registry-server" Jan 31 10:42:12 crc kubenswrapper[4830]: I0131 10:42:12.528589 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="19790a54-5dbd-43f0-8ab8-733de0afedbc" containerName="registry-server" Jan 31 10:42:12 crc kubenswrapper[4830]: E0131 10:42:12.528615 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19790a54-5dbd-43f0-8ab8-733de0afedbc" containerName="extract-content" Jan 31 10:42:12 crc kubenswrapper[4830]: I0131 10:42:12.528642 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="19790a54-5dbd-43f0-8ab8-733de0afedbc" containerName="extract-content" Jan 31 10:42:12 crc kubenswrapper[4830]: E0131 10:42:12.528667 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19790a54-5dbd-43f0-8ab8-733de0afedbc" containerName="extract-utilities" Jan 31 10:42:12 crc kubenswrapper[4830]: I0131 10:42:12.528675 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="19790a54-5dbd-43f0-8ab8-733de0afedbc" containerName="extract-utilities" Jan 31 10:42:12 crc kubenswrapper[4830]: E0131 10:42:12.528690 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0e443c8-637b-41e7-a729-5961eaa071c4" containerName="extract-content" Jan 31 10:42:12 crc kubenswrapper[4830]: I0131 
10:42:12.528698 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0e443c8-637b-41e7-a729-5961eaa071c4" containerName="extract-content" Jan 31 10:42:12 crc kubenswrapper[4830]: E0131 10:42:12.528822 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0e443c8-637b-41e7-a729-5961eaa071c4" containerName="extract-utilities" Jan 31 10:42:12 crc kubenswrapper[4830]: I0131 10:42:12.528830 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0e443c8-637b-41e7-a729-5961eaa071c4" containerName="extract-utilities" Jan 31 10:42:12 crc kubenswrapper[4830]: I0131 10:42:12.529069 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0e443c8-637b-41e7-a729-5961eaa071c4" containerName="registry-server" Jan 31 10:42:12 crc kubenswrapper[4830]: I0131 10:42:12.529090 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="19790a54-5dbd-43f0-8ab8-733de0afedbc" containerName="registry-server" Jan 31 10:42:12 crc kubenswrapper[4830]: I0131 10:42:12.530901 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n6dkg" Jan 31 10:42:12 crc kubenswrapper[4830]: I0131 10:42:12.546064 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n6dkg"] Jan 31 10:42:12 crc kubenswrapper[4830]: I0131 10:42:12.642000 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7959a75-1b45-4bba-bba5-ac003c94b72f-catalog-content\") pod \"community-operators-n6dkg\" (UID: \"b7959a75-1b45-4bba-bba5-ac003c94b72f\") " pod="openshift-marketplace/community-operators-n6dkg" Jan 31 10:42:12 crc kubenswrapper[4830]: I0131 10:42:12.642103 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7959a75-1b45-4bba-bba5-ac003c94b72f-utilities\") pod \"community-operators-n6dkg\" (UID: \"b7959a75-1b45-4bba-bba5-ac003c94b72f\") " pod="openshift-marketplace/community-operators-n6dkg" Jan 31 10:42:12 crc kubenswrapper[4830]: I0131 10:42:12.642145 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7bz4\" (UniqueName: \"kubernetes.io/projected/b7959a75-1b45-4bba-bba5-ac003c94b72f-kube-api-access-h7bz4\") pod \"community-operators-n6dkg\" (UID: \"b7959a75-1b45-4bba-bba5-ac003c94b72f\") " pod="openshift-marketplace/community-operators-n6dkg" Jan 31 10:42:12 crc kubenswrapper[4830]: I0131 10:42:12.745677 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7959a75-1b45-4bba-bba5-ac003c94b72f-catalog-content\") pod \"community-operators-n6dkg\" (UID: \"b7959a75-1b45-4bba-bba5-ac003c94b72f\") " pod="openshift-marketplace/community-operators-n6dkg" Jan 31 10:42:12 crc kubenswrapper[4830]: I0131 10:42:12.745767 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7959a75-1b45-4bba-bba5-ac003c94b72f-utilities\") pod \"community-operators-n6dkg\" (UID: \"b7959a75-1b45-4bba-bba5-ac003c94b72f\") " pod="openshift-marketplace/community-operators-n6dkg" Jan 31 10:42:12 crc kubenswrapper[4830]: I0131 10:42:12.745819 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7bz4\" (UniqueName: 
\"kubernetes.io/projected/b7959a75-1b45-4bba-bba5-ac003c94b72f-kube-api-access-h7bz4\") pod \"community-operators-n6dkg\" (UID: \"b7959a75-1b45-4bba-bba5-ac003c94b72f\") " pod="openshift-marketplace/community-operators-n6dkg" Jan 31 10:42:12 crc kubenswrapper[4830]: I0131 10:42:12.746185 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7959a75-1b45-4bba-bba5-ac003c94b72f-catalog-content\") pod \"community-operators-n6dkg\" (UID: \"b7959a75-1b45-4bba-bba5-ac003c94b72f\") " pod="openshift-marketplace/community-operators-n6dkg" Jan 31 10:42:12 crc kubenswrapper[4830]: I0131 10:42:12.746273 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7959a75-1b45-4bba-bba5-ac003c94b72f-utilities\") pod \"community-operators-n6dkg\" (UID: \"b7959a75-1b45-4bba-bba5-ac003c94b72f\") " pod="openshift-marketplace/community-operators-n6dkg" Jan 31 10:42:12 crc kubenswrapper[4830]: I0131 10:42:12.767073 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7bz4\" (UniqueName: \"kubernetes.io/projected/b7959a75-1b45-4bba-bba5-ac003c94b72f-kube-api-access-h7bz4\") pod \"community-operators-n6dkg\" (UID: \"b7959a75-1b45-4bba-bba5-ac003c94b72f\") " pod="openshift-marketplace/community-operators-n6dkg" Jan 31 10:42:12 crc kubenswrapper[4830]: I0131 10:42:12.860926 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n6dkg" Jan 31 10:42:13 crc kubenswrapper[4830]: I0131 10:42:13.416025 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n6dkg"] Jan 31 10:42:14 crc kubenswrapper[4830]: I0131 10:42:14.320945 4830 generic.go:334] "Generic (PLEG): container finished" podID="b7959a75-1b45-4bba-bba5-ac003c94b72f" containerID="1313402ccabc462a457fb401c5a025df84dc5f9929cc713d128708a2c487551a" exitCode=0 Jan 31 10:42:14 crc kubenswrapper[4830]: I0131 10:42:14.321224 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6dkg" event={"ID":"b7959a75-1b45-4bba-bba5-ac003c94b72f","Type":"ContainerDied","Data":"1313402ccabc462a457fb401c5a025df84dc5f9929cc713d128708a2c487551a"} Jan 31 10:42:14 crc kubenswrapper[4830]: I0131 10:42:14.321252 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6dkg" event={"ID":"b7959a75-1b45-4bba-bba5-ac003c94b72f","Type":"ContainerStarted","Data":"9241fda24b38ba28ddc90faba13d3b5846144ffb194d6e218907b77cd43cb8aa"} Jan 31 10:42:15 crc kubenswrapper[4830]: I0131 10:42:15.332443 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6dkg" event={"ID":"b7959a75-1b45-4bba-bba5-ac003c94b72f","Type":"ContainerStarted","Data":"3856def3283d32f4a2700a9645eaf169de51ed1a328337eafaefd4ef2deaf924"} Jan 31 10:42:16 crc kubenswrapper[4830]: I0131 10:42:16.343880 4830 generic.go:334] "Generic (PLEG): container finished" podID="b7959a75-1b45-4bba-bba5-ac003c94b72f" containerID="3856def3283d32f4a2700a9645eaf169de51ed1a328337eafaefd4ef2deaf924" exitCode=0 Jan 31 10:42:16 crc kubenswrapper[4830]: I0131 10:42:16.343969 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6dkg" 
event={"ID":"b7959a75-1b45-4bba-bba5-ac003c94b72f","Type":"ContainerDied","Data":"3856def3283d32f4a2700a9645eaf169de51ed1a328337eafaefd4ef2deaf924"} Jan 31 10:42:17 crc kubenswrapper[4830]: I0131 10:42:17.356894 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6dkg" event={"ID":"b7959a75-1b45-4bba-bba5-ac003c94b72f","Type":"ContainerStarted","Data":"3c2e0f408435592fec791b549b2500351c3ba4ecc14a7b375fe7f477e85c2bec"} Jan 31 10:42:17 crc kubenswrapper[4830]: I0131 10:42:17.379488 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n6dkg" podStartSLOduration=2.958995255 podStartE2EDuration="5.37947199s" podCreationTimestamp="2026-01-31 10:42:12 +0000 UTC" firstStartedPulling="2026-01-31 10:42:14.326156782 +0000 UTC m=+6078.819519224" lastFinishedPulling="2026-01-31 10:42:16.746633527 +0000 UTC m=+6081.239995959" observedRunningTime="2026-01-31 10:42:17.375841826 +0000 UTC m=+6081.869204268" watchObservedRunningTime="2026-01-31 10:42:17.37947199 +0000 UTC m=+6081.872834432" Jan 31 10:42:22 crc kubenswrapper[4830]: I0131 10:42:22.861653 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n6dkg" Jan 31 10:42:22 crc kubenswrapper[4830]: I0131 10:42:22.862281 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n6dkg" Jan 31 10:42:22 crc kubenswrapper[4830]: I0131 10:42:22.907123 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n6dkg" Jan 31 10:42:23 crc kubenswrapper[4830]: I0131 10:42:23.468018 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n6dkg" Jan 31 10:42:23 crc kubenswrapper[4830]: I0131 10:42:23.521880 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n6dkg"] Jan 31 10:42:25 crc kubenswrapper[4830]: I0131 10:42:25.448400 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n6dkg" podUID="b7959a75-1b45-4bba-bba5-ac003c94b72f" containerName="registry-server" containerID="cri-o://3c2e0f408435592fec791b549b2500351c3ba4ecc14a7b375fe7f477e85c2bec" gracePeriod=2 Jan 31 10:42:26 crc kubenswrapper[4830]: I0131 10:42:26.015957 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n6dkg" Jan 31 10:42:26 crc kubenswrapper[4830]: I0131 10:42:26.074303 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7959a75-1b45-4bba-bba5-ac003c94b72f-utilities\") pod \"b7959a75-1b45-4bba-bba5-ac003c94b72f\" (UID: \"b7959a75-1b45-4bba-bba5-ac003c94b72f\") " Jan 31 10:42:26 crc kubenswrapper[4830]: I0131 10:42:26.074851 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7959a75-1b45-4bba-bba5-ac003c94b72f-catalog-content\") pod \"b7959a75-1b45-4bba-bba5-ac003c94b72f\" (UID: \"b7959a75-1b45-4bba-bba5-ac003c94b72f\") " Jan 31 10:42:26 crc kubenswrapper[4830]: I0131 10:42:26.074954 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7bz4\" (UniqueName: \"kubernetes.io/projected/b7959a75-1b45-4bba-bba5-ac003c94b72f-kube-api-access-h7bz4\") pod \"b7959a75-1b45-4bba-bba5-ac003c94b72f\" (UID: \"b7959a75-1b45-4bba-bba5-ac003c94b72f\") " Jan 31 10:42:26 crc kubenswrapper[4830]: I0131 10:42:26.075182 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7959a75-1b45-4bba-bba5-ac003c94b72f-utilities" (OuterVolumeSpecName: "utilities") pod "b7959a75-1b45-4bba-bba5-ac003c94b72f" (UID: "b7959a75-1b45-4bba-bba5-ac003c94b72f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:42:26 crc kubenswrapper[4830]: I0131 10:42:26.075963 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7959a75-1b45-4bba-bba5-ac003c94b72f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 10:42:26 crc kubenswrapper[4830]: I0131 10:42:26.081658 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7959a75-1b45-4bba-bba5-ac003c94b72f-kube-api-access-h7bz4" (OuterVolumeSpecName: "kube-api-access-h7bz4") pod "b7959a75-1b45-4bba-bba5-ac003c94b72f" (UID: "b7959a75-1b45-4bba-bba5-ac003c94b72f"). InnerVolumeSpecName "kube-api-access-h7bz4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 10:42:26 crc kubenswrapper[4830]: I0131 10:42:26.178934 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7bz4\" (UniqueName: \"kubernetes.io/projected/b7959a75-1b45-4bba-bba5-ac003c94b72f-kube-api-access-h7bz4\") on node \"crc\" DevicePath \"\"" Jan 31 10:42:26 crc kubenswrapper[4830]: I0131 10:42:26.464589 4830 generic.go:334] "Generic (PLEG): container finished" podID="b7959a75-1b45-4bba-bba5-ac003c94b72f" containerID="3c2e0f408435592fec791b549b2500351c3ba4ecc14a7b375fe7f477e85c2bec" exitCode=0 Jan 31 10:42:26 crc kubenswrapper[4830]: I0131 10:42:26.465045 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6dkg" event={"ID":"b7959a75-1b45-4bba-bba5-ac003c94b72f","Type":"ContainerDied","Data":"3c2e0f408435592fec791b549b2500351c3ba4ecc14a7b375fe7f477e85c2bec"} Jan 31 10:42:26 crc kubenswrapper[4830]: I0131 10:42:26.465091 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6dkg" event={"ID":"b7959a75-1b45-4bba-bba5-ac003c94b72f","Type":"ContainerDied","Data":"9241fda24b38ba28ddc90faba13d3b5846144ffb194d6e218907b77cd43cb8aa"} Jan 31 10:42:26 crc kubenswrapper[4830]: I0131 10:42:26.465118 4830 scope.go:117] "RemoveContainer" containerID="3c2e0f408435592fec791b549b2500351c3ba4ecc14a7b375fe7f477e85c2bec" Jan 31 10:42:26 crc kubenswrapper[4830]: I0131 10:42:26.465396 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n6dkg" Jan 31 10:42:26 crc kubenswrapper[4830]: I0131 10:42:26.477155 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7959a75-1b45-4bba-bba5-ac003c94b72f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7959a75-1b45-4bba-bba5-ac003c94b72f" (UID: "b7959a75-1b45-4bba-bba5-ac003c94b72f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 10:42:26 crc kubenswrapper[4830]: I0131 10:42:26.488406 4830 scope.go:117] "RemoveContainer" containerID="3856def3283d32f4a2700a9645eaf169de51ed1a328337eafaefd4ef2deaf924" Jan 31 10:42:26 crc kubenswrapper[4830]: I0131 10:42:26.500768 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7959a75-1b45-4bba-bba5-ac003c94b72f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 10:42:26 crc kubenswrapper[4830]: I0131 10:42:26.514072 4830 scope.go:117] "RemoveContainer" containerID="1313402ccabc462a457fb401c5a025df84dc5f9929cc713d128708a2c487551a" Jan 31 10:42:26 crc kubenswrapper[4830]: I0131 10:42:26.575922 4830 scope.go:117] "RemoveContainer" containerID="3c2e0f408435592fec791b549b2500351c3ba4ecc14a7b375fe7f477e85c2bec" Jan 31 10:42:26 crc kubenswrapper[4830]: E0131 10:42:26.576359 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c2e0f408435592fec791b549b2500351c3ba4ecc14a7b375fe7f477e85c2bec\": container with ID starting with 3c2e0f408435592fec791b549b2500351c3ba4ecc14a7b375fe7f477e85c2bec not found: ID does not exist" containerID="3c2e0f408435592fec791b549b2500351c3ba4ecc14a7b375fe7f477e85c2bec" Jan 31 10:42:26 crc kubenswrapper[4830]: I0131 10:42:26.576407 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c2e0f408435592fec791b549b2500351c3ba4ecc14a7b375fe7f477e85c2bec"} err="failed to get container status \"3c2e0f408435592fec791b549b2500351c3ba4ecc14a7b375fe7f477e85c2bec\": rpc error: code = NotFound desc = could not find container \"3c2e0f408435592fec791b549b2500351c3ba4ecc14a7b375fe7f477e85c2bec\": container with ID starting with 3c2e0f408435592fec791b549b2500351c3ba4ecc14a7b375fe7f477e85c2bec not found: ID does not exist" Jan 31 10:42:26 crc kubenswrapper[4830]: I0131 10:42:26.576439 4830 scope.go:117] "RemoveContainer" containerID="3856def3283d32f4a2700a9645eaf169de51ed1a328337eafaefd4ef2deaf924" Jan 31 10:42:26 crc kubenswrapper[4830]: E0131 10:42:26.577054 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3856def3283d32f4a2700a9645eaf169de51ed1a328337eafaefd4ef2deaf924\": container with ID starting with 3856def3283d32f4a2700a9645eaf169de51ed1a328337eafaefd4ef2deaf924 not found: ID does not exist" containerID="3856def3283d32f4a2700a9645eaf169de51ed1a328337eafaefd4ef2deaf924" Jan 31 10:42:26 crc kubenswrapper[4830]: I0131 10:42:26.577079 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3856def3283d32f4a2700a9645eaf169de51ed1a328337eafaefd4ef2deaf924"} err="failed to get container status \"3856def3283d32f4a2700a9645eaf169de51ed1a328337eafaefd4ef2deaf924\": rpc error: code = NotFound desc = could not find container \"3856def3283d32f4a2700a9645eaf169de51ed1a328337eafaefd4ef2deaf924\": container with ID starting with 3856def3283d32f4a2700a9645eaf169de51ed1a328337eafaefd4ef2deaf924 not found: ID does not exist" Jan 31 10:42:26 crc kubenswrapper[4830]: I0131 10:42:26.577095 4830 scope.go:117] "RemoveContainer" containerID="1313402ccabc462a457fb401c5a025df84dc5f9929cc713d128708a2c487551a" Jan 31 10:42:26 crc kubenswrapper[4830]: E0131 10:42:26.577495 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1313402ccabc462a457fb401c5a025df84dc5f9929cc713d128708a2c487551a\": container with ID starting with 1313402ccabc462a457fb401c5a025df84dc5f9929cc713d128708a2c487551a not found: ID does not exist" containerID="1313402ccabc462a457fb401c5a025df84dc5f9929cc713d128708a2c487551a" Jan 31 10:42:26 crc kubenswrapper[4830]: I0131 10:42:26.577539 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1313402ccabc462a457fb401c5a025df84dc5f9929cc713d128708a2c487551a"} err="failed to get container status \"1313402ccabc462a457fb401c5a025df84dc5f9929cc713d128708a2c487551a\": rpc error: code = NotFound desc = could not find container \"1313402ccabc462a457fb401c5a025df84dc5f9929cc713d128708a2c487551a\": container with ID starting with 1313402ccabc462a457fb401c5a025df84dc5f9929cc713d128708a2c487551a not found: ID does not exist" Jan 31 10:42:26 crc kubenswrapper[4830]: I0131 10:42:26.804087 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n6dkg"] Jan 31 10:42:26 crc kubenswrapper[4830]: I0131 10:42:26.814579 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n6dkg"] Jan 31 10:42:28 crc kubenswrapper[4830]: I0131 10:42:28.263882 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7959a75-1b45-4bba-bba5-ac003c94b72f" path="/var/lib/kubelet/pods/b7959a75-1b45-4bba-bba5-ac003c94b72f/volumes" Jan 31 10:42:44 crc kubenswrapper[4830]: I0131 10:42:44.353390 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 10:42:44 crc kubenswrapper[4830]: I0131 10:42:44.354080 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 10:43:14 crc kubenswrapper[4830]: I0131 10:43:14.352843 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 10:43:14 crc kubenswrapper[4830]: I0131 10:43:14.353378 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 10:43:44 crc kubenswrapper[4830]: I0131 10:43:44.353876 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 10:43:44 crc kubenswrapper[4830]: I0131 10:43:44.354559 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 10:43:44 crc kubenswrapper[4830]: I0131 10:43:44.354614 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" Jan 31 10:43:44 crc kubenswrapper[4830]: I0131 10:43:44.356104 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bb9aa35f3cbfb247dd90e386d734ccc6203ba2643fab7bbeba79cb65d07ea025"} pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 10:43:44 crc kubenswrapper[4830]: I0131 10:43:44.356181 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" containerID="cri-o://bb9aa35f3cbfb247dd90e386d734ccc6203ba2643fab7bbeba79cb65d07ea025" gracePeriod=600 Jan 31 10:43:45 crc kubenswrapper[4830]: I0131 10:43:45.412876 4830 generic.go:334] "Generic (PLEG): container finished" podID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerID="bb9aa35f3cbfb247dd90e386d734ccc6203ba2643fab7bbeba79cb65d07ea025" exitCode=0 Jan 31 10:43:45 crc kubenswrapper[4830]: I0131 10:43:45.412974 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerDied","Data":"bb9aa35f3cbfb247dd90e386d734ccc6203ba2643fab7bbeba79cb65d07ea025"} Jan 31 10:43:45 crc kubenswrapper[4830]: I0131 10:43:45.413676 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" event={"ID":"158dbfda-9b0a-4809-9946-3c6ee2d082dc","Type":"ContainerStarted","Data":"17bac6e23fa59b94414956b8c7d6930d7e4d312d8a0bb329361a4df3b6ca0d11"} Jan 31 10:43:45 crc kubenswrapper[4830]: I0131 10:43:45.413698 4830 scope.go:117] "RemoveContainer" containerID="3d1a1e3cfe2a93b485fa3e3d1d4183a5d4d87a568ef46466b13f6520a7c27ceb" Jan 31 10:45:00 crc kubenswrapper[4830]: I0131 10:45:00.299829 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497605-wb58w"] Jan 31 10:45:00 crc kubenswrapper[4830]: E0131 10:45:00.301521 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7959a75-1b45-4bba-bba5-ac003c94b72f" containerName="extract-content" Jan 31 10:45:00 crc kubenswrapper[4830]: I0131 10:45:00.301541 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7959a75-1b45-4bba-bba5-ac003c94b72f" containerName="extract-content" Jan 31 10:45:00 crc kubenswrapper[4830]: E0131 10:45:00.301586 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7959a75-1b45-4bba-bba5-ac003c94b72f" containerName="extract-utilities" Jan 31 10:45:00 crc kubenswrapper[4830]: I0131 10:45:00.301595 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7959a75-1b45-4bba-bba5-ac003c94b72f" containerName="extract-utilities" Jan 31 10:45:00 crc kubenswrapper[4830]: E0131 10:45:00.301640 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7959a75-1b45-4bba-bba5-ac003c94b72f" containerName="registry-server" Jan 31 10:45:00 crc kubenswrapper[4830]: I0131 
10:45:00.301650 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7959a75-1b45-4bba-bba5-ac003c94b72f" containerName="registry-server" Jan 31 10:45:00 crc kubenswrapper[4830]: I0131 10:45:00.302271 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7959a75-1b45-4bba-bba5-ac003c94b72f" containerName="registry-server" Jan 31 10:45:00 crc kubenswrapper[4830]: I0131 10:45:00.306027 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497605-wb58w" Jan 31 10:45:00 crc kubenswrapper[4830]: I0131 10:45:00.315817 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497605-wb58w"] Jan 31 10:45:00 crc kubenswrapper[4830]: I0131 10:45:00.350495 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 10:45:00 crc kubenswrapper[4830]: I0131 10:45:00.350506 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 10:45:00 crc kubenswrapper[4830]: I0131 10:45:00.371407 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1178c6e-1ed1-4648-a245-7ed3952de30b-config-volume\") pod \"collect-profiles-29497605-wb58w\" (UID: \"c1178c6e-1ed1-4648-a245-7ed3952de30b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497605-wb58w" Jan 31 10:45:00 crc kubenswrapper[4830]: I0131 10:45:00.371465 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bflt\" (UniqueName: \"kubernetes.io/projected/c1178c6e-1ed1-4648-a245-7ed3952de30b-kube-api-access-2bflt\") pod \"collect-profiles-29497605-wb58w\" (UID: \"c1178c6e-1ed1-4648-a245-7ed3952de30b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497605-wb58w" Jan 31 10:45:00 crc kubenswrapper[4830]: I0131 10:45:00.372025 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c1178c6e-1ed1-4648-a245-7ed3952de30b-secret-volume\") pod \"collect-profiles-29497605-wb58w\" (UID: \"c1178c6e-1ed1-4648-a245-7ed3952de30b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497605-wb58w" Jan 31 10:45:00 crc kubenswrapper[4830]: I0131 10:45:00.475843 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1178c6e-1ed1-4648-a245-7ed3952de30b-config-volume\") pod \"collect-profiles-29497605-wb58w\" (UID: \"c1178c6e-1ed1-4648-a245-7ed3952de30b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497605-wb58w" Jan 31 10:45:00 crc kubenswrapper[4830]: I0131 10:45:00.475899 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bflt\" (UniqueName: \"kubernetes.io/projected/c1178c6e-1ed1-4648-a245-7ed3952de30b-kube-api-access-2bflt\") pod \"collect-profiles-29497605-wb58w\" (UID: \"c1178c6e-1ed1-4648-a245-7ed3952de30b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497605-wb58w" Jan 31 10:45:00 crc kubenswrapper[4830]: I0131 10:45:00.475989 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/c1178c6e-1ed1-4648-a245-7ed3952de30b-secret-volume\") pod \"collect-profiles-29497605-wb58w\" (UID: \"c1178c6e-1ed1-4648-a245-7ed3952de30b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497605-wb58w" Jan 31 10:45:00 crc kubenswrapper[4830]: I0131 10:45:00.478587 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1178c6e-1ed1-4648-a245-7ed3952de30b-config-volume\") pod \"collect-profiles-29497605-wb58w\" (UID: \"c1178c6e-1ed1-4648-a245-7ed3952de30b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497605-wb58w" Jan 31 10:45:00 crc kubenswrapper[4830]: I0131 10:45:00.482905 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c1178c6e-1ed1-4648-a245-7ed3952de30b-secret-volume\") pod \"collect-profiles-29497605-wb58w\" (UID: \"c1178c6e-1ed1-4648-a245-7ed3952de30b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497605-wb58w" Jan 31 10:45:00 crc kubenswrapper[4830]: I0131 10:45:00.499555 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bflt\" (UniqueName: \"kubernetes.io/projected/c1178c6e-1ed1-4648-a245-7ed3952de30b-kube-api-access-2bflt\") pod \"collect-profiles-29497605-wb58w\" (UID: \"c1178c6e-1ed1-4648-a245-7ed3952de30b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497605-wb58w" Jan 31 10:45:00 crc kubenswrapper[4830]: I0131 10:45:00.651775 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497605-wb58w" Jan 31 10:45:01 crc kubenswrapper[4830]: W0131 10:45:01.155834 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1178c6e_1ed1_4648_a245_7ed3952de30b.slice/crio-6aaa7569a981793d8cdd19eec21759912db026e90a0b8fdfe65a2b6b49a15bb8 WatchSource:0}: Error finding container 6aaa7569a981793d8cdd19eec21759912db026e90a0b8fdfe65a2b6b49a15bb8: Status 404 returned error can't find the container with id 6aaa7569a981793d8cdd19eec21759912db026e90a0b8fdfe65a2b6b49a15bb8 Jan 31 10:45:01 crc kubenswrapper[4830]: I0131 10:45:01.157865 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497605-wb58w"] Jan 31 10:45:01 crc kubenswrapper[4830]: I0131 10:45:01.443648 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497605-wb58w" event={"ID":"c1178c6e-1ed1-4648-a245-7ed3952de30b","Type":"ContainerStarted","Data":"0103e233c77091fd642206e1cc404b8d765f4f4171292cafb806219953cddf8b"} Jan 31 10:45:01 crc kubenswrapper[4830]: I0131 10:45:01.443694 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497605-wb58w" event={"ID":"c1178c6e-1ed1-4648-a245-7ed3952de30b","Type":"ContainerStarted","Data":"6aaa7569a981793d8cdd19eec21759912db026e90a0b8fdfe65a2b6b49a15bb8"} Jan 31 10:45:02 crc kubenswrapper[4830]: I0131 10:45:02.455894 4830 generic.go:334] "Generic (PLEG): container finished" podID="c1178c6e-1ed1-4648-a245-7ed3952de30b" containerID="0103e233c77091fd642206e1cc404b8d765f4f4171292cafb806219953cddf8b" exitCode=0 Jan 31 10:45:02 crc kubenswrapper[4830]: I0131 10:45:02.456000 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497605-wb58w" event={"ID":"c1178c6e-1ed1-4648-a245-7ed3952de30b","Type":"ContainerDied","Data":"0103e233c77091fd642206e1cc404b8d765f4f4171292cafb806219953cddf8b"} Jan 31 10:45:03 crc kubenswrapper[4830]: I0131 10:45:03.870082 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497605-wb58w" Jan 31 10:45:03 crc kubenswrapper[4830]: I0131 10:45:03.975581 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bflt\" (UniqueName: \"kubernetes.io/projected/c1178c6e-1ed1-4648-a245-7ed3952de30b-kube-api-access-2bflt\") pod \"c1178c6e-1ed1-4648-a245-7ed3952de30b\" (UID: \"c1178c6e-1ed1-4648-a245-7ed3952de30b\") " Jan 31 10:45:03 crc kubenswrapper[4830]: I0131 10:45:03.975655 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1178c6e-1ed1-4648-a245-7ed3952de30b-config-volume\") pod \"c1178c6e-1ed1-4648-a245-7ed3952de30b\" (UID: \"c1178c6e-1ed1-4648-a245-7ed3952de30b\") " Jan 31 10:45:03 crc kubenswrapper[4830]: I0131 10:45:03.975772 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c1178c6e-1ed1-4648-a245-7ed3952de30b-secret-volume\") pod \"c1178c6e-1ed1-4648-a245-7ed3952de30b\" (UID: \"c1178c6e-1ed1-4648-a245-7ed3952de30b\") " Jan 31 10:45:03 crc kubenswrapper[4830]: I0131 10:45:03.976381 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1178c6e-1ed1-4648-a245-7ed3952de30b-config-volume" (OuterVolumeSpecName: "config-volume") pod "c1178c6e-1ed1-4648-a245-7ed3952de30b" (UID: "c1178c6e-1ed1-4648-a245-7ed3952de30b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 10:45:03 crc kubenswrapper[4830]: I0131 10:45:03.976946 4830 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1178c6e-1ed1-4648-a245-7ed3952de30b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 10:45:03 crc kubenswrapper[4830]: I0131 10:45:03.982209 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1178c6e-1ed1-4648-a245-7ed3952de30b-kube-api-access-2bflt" (OuterVolumeSpecName: "kube-api-access-2bflt") pod "c1178c6e-1ed1-4648-a245-7ed3952de30b" (UID: "c1178c6e-1ed1-4648-a245-7ed3952de30b"). InnerVolumeSpecName "kube-api-access-2bflt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 10:45:03 crc kubenswrapper[4830]: I0131 10:45:03.982890 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1178c6e-1ed1-4648-a245-7ed3952de30b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c1178c6e-1ed1-4648-a245-7ed3952de30b" (UID: "c1178c6e-1ed1-4648-a245-7ed3952de30b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 10:45:04 crc kubenswrapper[4830]: I0131 10:45:04.080052 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bflt\" (UniqueName: \"kubernetes.io/projected/c1178c6e-1ed1-4648-a245-7ed3952de30b-kube-api-access-2bflt\") on node \"crc\" DevicePath \"\"" Jan 31 10:45:04 crc kubenswrapper[4830]: I0131 10:45:04.080085 4830 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c1178c6e-1ed1-4648-a245-7ed3952de30b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 10:45:04 crc kubenswrapper[4830]: I0131 10:45:04.518505 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497605-wb58w" event={"ID":"c1178c6e-1ed1-4648-a245-7ed3952de30b","Type":"ContainerDied","Data":"6aaa7569a981793d8cdd19eec21759912db026e90a0b8fdfe65a2b6b49a15bb8"} Jan 31 10:45:04 crc kubenswrapper[4830]: I0131 10:45:04.518557 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6aaa7569a981793d8cdd19eec21759912db026e90a0b8fdfe65a2b6b49a15bb8" Jan 31 10:45:04 crc kubenswrapper[4830]: I0131 10:45:04.518655 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497605-wb58w" Jan 31 10:45:04 crc kubenswrapper[4830]: I0131 10:45:04.581777 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497560-zwwjl"] Jan 31 10:45:04 crc kubenswrapper[4830]: I0131 10:45:04.602914 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497560-zwwjl"] Jan 31 10:45:06 crc kubenswrapper[4830]: I0131 10:45:06.279798 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc281670-0a11-45d2-8463-657eaf396711" path="/var/lib/kubelet/pods/fc281670-0a11-45d2-8463-657eaf396711/volumes" Jan 31 10:45:21 crc kubenswrapper[4830]: I0131 10:45:21.057617 4830 scope.go:117] "RemoveContainer" containerID="82e61bfc26f5574f38dd9e925830e89ed159d9144f34a44a7a709077f5fef896" Jan 31 10:45:44 crc kubenswrapper[4830]: I0131 10:45:44.353518 4830 patch_prober.go:28] interesting pod/machine-config-daemon-gt7kd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 10:45:44 crc kubenswrapper[4830]: I0131 10:45:44.354186 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gt7kd" podUID="158dbfda-9b0a-4809-9946-3c6ee2d082dc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"